Research Reports of the Max Planck Institute for Informatics
2023
Report on the Security State of Networks of Max-Planck Institutes: Findings and Recommendations
T. Fiebig
Technical Report, 2023
BibTeX
@techreport{Fiebig_Report23,
TITLE = {Report on the Security State of Networks of Max-Planck Institutes: Findings and Recommendations},
AUTHOR = {Fiebig, Tobias},
LANGUAGE = {eng},
DOI = {10.17617/2.3532055},
INSTITUTION = {Max Planck Society},
ADDRESS = {M{\"u}nchen},
YEAR = {2023},
MARGINALMARK = {$\bullet$},
}
Endnote
%0 Report
%A Fiebig, Tobias
%+ Internet Architecture, MPI for Informatics, Max Planck Society
%T Report on the Security State of Networks of Max-Planck Institutes: Findings and Recommendations
%G eng
%U http://hdl.handle.net/21.11116/0000-000D-C4C9-3
%R 10.17617/2.3532055
%Y Max Planck Society
%C München
%D 2023
%P 70 p.
2020
Parametric Hand Texture Model for 3D Hand Reconstruction and Personalization
N. Qian, J. Wang, F. Mueller, F. Bernard, V. Golyanik and C. Theobalt
Technical Report, 2020
Abstract
3D hand reconstruction from image data is a widely-studied problem in computer vision and graphics, and has a particularly high relevance for virtual and augmented reality. Although several 3D hand reconstruction approaches leverage hand models as a strong prior to resolve ambiguities and achieve a more robust reconstruction, most existing models account only for the hand shape and poses and do not model the texture. To fill this gap, in this work we present the first parametric texture model of human hands. Our model spans several dimensions of hand appearance variability (e.g., related to gender, ethnicity, or age) and only requires a commodity camera for data acquisition. Experimentally, we demonstrate that our appearance model can be used to tackle a range of challenging problems such as 3D hand reconstruction from a single monocular image. Furthermore, our appearance model can be used to define a neural rendering layer that enables training with a self-supervised photometric loss. We make our model publicly available.
BibTeX
@techreport{Qian_report2020,
TITLE = {Parametric Hand Texture Model for {3D} Hand Reconstruction and Personalization},
AUTHOR = {Qian, Neng and Wang, Jiayi and Mueller, Franziska and Bernard, Florian and Golyanik, Vladislav and Theobalt, Christian},
LANGUAGE = {eng},
ISSN = {0946-011X},
NUMBER = {MPI-I-2020-4-001},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2020},
ABSTRACT = {3D hand reconstruction from image data is a widely-studied problem in computer vision and graphics, and has a particularly high relevance for virtual and augmented reality. Although several 3D hand reconstruction approaches leverage hand models as a strong prior to resolve ambiguities and achieve a more robust reconstruction, most existing models account only for the hand shape and poses and do not model the texture. To fill this gap, in this work we present the first parametric texture model of human hands. Our model spans several dimensions of hand appearance variability (e.g., related to gender, ethnicity, or age) and only requires a commodity camera for data acquisition. Experimentally, we demonstrate that our appearance model can be used to tackle a range of challenging problems such as 3D hand reconstruction from a single monocular image. Furthermore, our appearance model can be used to define a neural rendering layer that enables training with a self-supervised photometric loss. We make our model publicly available.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Qian, Neng
%A Wang, Jiayi
%A Mueller, Franziska
%A Bernard, Florian
%A Golyanik, Vladislav
%A Theobalt, Christian
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T Parametric Hand Texture Model for 3D Hand Reconstruction and Personalization
%G eng
%U http://hdl.handle.net/21.11116/0000-0006-9128-9
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2020
%P 37 p.
%X 3D hand reconstruction from image data is a widely-studied problem in computer vision and graphics, and has a particularly high relevance for virtual and augmented reality. Although several 3D hand reconstruction approaches leverage hand models as a strong prior to resolve ambiguities and achieve a more robust reconstruction, most existing models account only for the hand shape and poses and do not model the texture. To fill this gap, in this work we present the first parametric texture model of human hands. Our model spans several dimensions of hand appearance variability (e.g., related to gender, ethnicity, or age) and only requires a commodity camera for data acquisition. Experimentally, we demonstrate that our appearance model can be used to tackle a range of challenging problems such as 3D hand reconstruction from a single monocular image. Furthermore, our appearance model can be used to define a neural rendering layer that enables training with a self-supervised photometric loss. We make our model publicly available.
%K hand texture model, appearance modeling, hand tracking, 3D hand reconstruction
%B Research Report
%@ false
2017
Live User-guided Intrinsic Video For Static Scenes
G. Fox, A. Meka, M. Zollhöfer, C. Richardt and C. Theobalt
Technical Report, 2017
Abstract
We present a novel real-time approach for user-guided intrinsic decomposition of static scenes captured by an RGB-D sensor. In the first step, we acquire a three-dimensional representation of the scene using a dense volumetric reconstruction framework. The obtained reconstruction serves as a proxy to densely fuse reflectance estimates and to store user-provided constraints in three-dimensional space. User constraints, in the form of constant shading and reflectance strokes, can be placed directly on the real-world geometry using an intuitive touch-based interaction metaphor, or using interactive mouse strokes. Fusing the decomposition results and constraints in three-dimensional space allows for robust propagation of this information to novel views by re-projection. We leverage this information to improve on the decomposition quality of existing intrinsic video decomposition techniques by further constraining the ill-posed decomposition problem. In addition to improved decomposition quality, we show a variety of live augmented reality applications such as recoloring of objects, relighting of scenes and editing of material appearance.
BibTeX
@techreport{Report2017-4-001,
TITLE = {Live User-guided Intrinsic Video For Static Scenes},
AUTHOR = {Fox, Gereon and Meka, Abhimitra and Zollh{\"o}fer, Michael and Richardt, Christian and Theobalt, Christian},
LANGUAGE = {eng},
ISSN = {0946-011X},
NUMBER = {MPI-I-2017-4-001},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2017},
ABSTRACT = {We present a novel real-time approach for user-guided intrinsic decomposition of static scenes captured by an RGB-D sensor. In the first step, we acquire a three-dimensional representation of the scene using a dense volumetric reconstruction framework. The obtained reconstruction serves as a proxy to densely fuse reflectance estimates and to store user-provided constraints in three-dimensional space. User constraints, in the form of constant shading and reflectance strokes, can be placed directly on the real-world geometry using an intuitive touch-based interaction metaphor, or using interactive mouse strokes. Fusing the decomposition results and constraints in three-dimensional space allows for robust propagation of this information to novel views by re-projection. We leverage this information to improve on the decomposition quality of existing intrinsic video decomposition techniques by further constraining the ill-posed decomposition problem. In addition to improved decomposition quality, we show a variety of live augmented reality applications such as recoloring of objects, relighting of scenes and editing of material appearance.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Fox, Gereon
%A Meka, Abhimitra
%A Zollhöfer, Michael
%A Richardt, Christian
%A Theobalt, Christian
%+ External Organizations
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T Live User-guided Intrinsic Video For Static Scenes
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-002C-5DA7-3
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2017
%P 12 p.
%X We present a novel real-time approach for user-guided intrinsic decomposition of static scenes captured by an RGB-D sensor. In the first step, we acquire a three-dimensional representation of the scene using a dense volumetric reconstruction framework. The obtained reconstruction serves as a proxy to densely fuse reflectance estimates and to store user-provided constraints in three-dimensional space. User constraints, in the form of constant shading and reflectance strokes, can be placed directly on the real-world geometry using an intuitive touch-based interaction metaphor, or using interactive mouse strokes. Fusing the decomposition results and constraints in three-dimensional space allows for robust propagation of this information to novel views by re-projection. We leverage this information to improve on the decomposition quality of existing intrinsic video decomposition techniques by further constraining the ill-posed decomposition problem. In addition to improved decomposition quality, we show a variety of live augmented reality applications such as recoloring of objects, relighting of scenes and editing of material appearance.
%B Research Report
%@ false
Generating Semantic Aspects for Queries
D. Gupta, K. Berberich, J. Strötgen and D. Zeinalipour-Yazti
Technical Report, 2017
Abstract
Ambiguous information needs expressed in a limited number of keywords often result in long-winded query sessions and many query reformulations. In this work, we tackle ambiguous queries by providing automatically generated semantic aspects that can guide users to satisfying results regarding their information needs. To generate semantic aspects, we use semantic annotations available in the documents and leverage models representing the semantic relationships between annotations of the same type. The aspects in turn provide us a foundation for representing text in a completely structured manner, thereby allowing for a semantically-motivated organization of search results. We evaluate our approach on a testbed of over 5,000 aspects on Web scale document collections amounting to more than 450 million documents, with temporal, geographic, and named entity annotations as example dimensions. Our experimental results show that our general approach is Web-scale ready and finds relevant aspects for highly ambiguous queries.
BibTeX
@techreport{Guptareport2007,
TITLE = {Generating Semantic Aspects for Queries},
AUTHOR = {Gupta, Dhruv and Berberich, Klaus and Str{\"o}tgen, Jannik and Zeinalipour-Yazti, Demetrios},
LANGUAGE = {eng},
ISSN = {0946-011X},
NUMBER = {MPI-I-2017-5-001},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2017},
ABSTRACT = {Ambiguous information needs expressed in a limited number of keywords often result in long-winded query sessions and many query reformulations. In this work, we tackle ambiguous queries by providing automatically generated semantic aspects that can guide users to satisfying results regarding their information needs. To generate semantic aspects, we use semantic annotations available in the documents and leverage models representing the semantic relationships between annotations of the same type. The aspects in turn provide us a foundation for representing text in a completely structured manner, thereby allowing for a semantically-motivated organization of search results. We evaluate our approach on a testbed of over 5,000 aspects on Web scale document collections amounting to more than 450 million documents, with temporal, geographic, and named entity annotations as example dimensions. Our experimental results show that our general approach is Web-scale ready and finds relevant aspects for highly ambiguous queries.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Gupta, Dhruv
%A Berberich, Klaus
%A Strötgen, Jannik
%A Zeinalipour-Yazti, Demetrios
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
%T Generating Semantic Aspects for Queries
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-002E-07DD-0
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2017
%P 39 p.
%X Ambiguous information needs expressed in a limited number of keywords often result in long-winded query sessions and many query reformulations. In this work, we tackle ambiguous queries by providing automatically generated semantic aspects that can guide users to satisfying results regarding their information needs. To generate semantic aspects, we use semantic annotations available in the documents and leverage models representing the semantic relationships between annotations of the same type. The aspects in turn provide us a foundation for representing text in a completely structured manner, thereby allowing for a semantically-motivated organization of search results. We evaluate our approach on a testbed of over 5,000 aspects on Web scale document collections amounting to more than 450 million documents, with temporal, geographic, and named entity annotations as example dimensions. Our experimental results show that our general approach is Web-scale ready and finds relevant aspects for highly ambiguous queries.
%B Research Report
%@ false
WatchSense: On- and Above-Skin Input Sensing through a Wearable Depth Sensor
S. Sridhar, A. Markussen, A. Oulasvirta, C. Theobalt and S. Boring
Technical Report, 2017
Abstract
This paper contributes a novel sensing approach to support on- and above-skin finger input for interaction on the move. WatchSense uses a depth sensor embedded in a wearable device to expand the input space to neighboring areas of skin and the space above it. Our approach addresses challenging camera-based tracking conditions, such as oblique viewing angles and occlusions. It can accurately detect fingertips, their locations, and whether they are touching the skin or hovering above it. It extends previous work that supported either mid-air or multitouch input by simultaneously supporting both. We demonstrate feasibility with a compact, wearable prototype attached to a user's forearm (simulating an integrated depth sensor). Our prototype---which runs in real-time on consumer mobile devices---enables a 3D input space on the back of the hand. We evaluated the accuracy and robustness of the approach in a user study. We also show how WatchSense increases the expressiveness of input by interweaving mid-air and multitouch for several interactive applications.
BibTeX
@techreport{sridharwatch17,
TITLE = {{WatchSense}: On- and Above-Skin Input Sensing through a Wearable Depth Sensor},
AUTHOR = {Sridhar, Srinath and Markussen, Anders and Oulasvirta, Antti and Theobalt, Christian and Boring, Sebastian},
LANGUAGE = {eng},
ISSN = {0946-011X},
NUMBER = {MPI-I-2016-4-003},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2017},
ABSTRACT = {This paper contributes a novel sensing approach to support on- and above-skin finger input for interaction on the move. WatchSense uses a depth sensor embedded in a wearable device to expand the input space to neighboring areas of skin and the space above it. Our approach addresses challenging camera-based tracking conditions, such as oblique viewing angles and occlusions. It can accurately detect fingertips, their locations, and whether they are touching the skin or hovering above it. It extends previous work that supported either mid-air or multitouch input by simultaneously supporting both. We demonstrate feasibility with a compact, wearable prototype attached to a user's forearm (simulating an integrated depth sensor). Our prototype---which runs in real-time on consumer mobile devices---enables a 3D input space on the back of the hand. We evaluated the accuracy and robustness of the approach in a user study. We also show how WatchSense increases the expressiveness of input by interweaving mid-air and multitouch for several interactive applications.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Sridhar, Srinath
%A Markussen, Anders
%A Oulasvirta, Antti
%A Theobalt, Christian
%A Boring, Sebastian
%+ Computer Graphics, MPI for Informatics, Max Planck Society
External Organizations
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
External Organizations
%T WatchSense: On- and Above-Skin Input Sensing through a Wearable Depth Sensor
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-002C-402E-D
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2017
%P 17 p.
%X This paper contributes a novel sensing approach to support on- and above-skin finger input for interaction on the move. WatchSense uses a depth sensor embedded in a wearable device to expand the input space to neighboring areas of skin and the space above it. Our approach addresses challenging camera-based tracking conditions, such as oblique viewing angles and occlusions. It can accurately detect fingertips, their locations, and whether they are touching the skin or hovering above it. It extends previous work that supported either mid-air or multitouch input by simultaneously supporting both. We demonstrate feasibility with a compact, wearable prototype attached to a user's forearm (simulating an integrated depth sensor). Our prototype---which runs in real-time on consumer mobile devices---enables a 3D input space on the back of the hand. We evaluated the accuracy and robustness of the approach in a user study. We also show how WatchSense increases the expressiveness of input by interweaving mid-air and multitouch for several interactive applications.
%B Research Report
%@ false
2016
Verification of Linear Hybrid Systems with Large Discrete State Spaces: Exploring the Design Space for Optimization
E. Althaus, B. Beber, W. Damm, S. Disch, W. Hagemann, A. Rakow, C. Scholl, U. Waldmann and B. Wirtz
Technical Report, 2016
Abstract
This paper provides a suite of optimization techniques for the verification of safety properties of linear hybrid automata with large discrete state spaces, such as naturally arising when incorporating health state monitoring and degradation levels into the controller design. Such models can -- in contrast to purely functional controller models -- not be analyzed with hybrid verification engines relying on explicit representations of modes, but require fully symbolic representations for both the continuous and discrete part of the state space. The optimization techniques shown yield consistently a speedup of about 20 against previously published results for a similar benchmark suite, and complement these with new results on counterexample guided abstraction refinement. In combination with the methods guaranteeing preciseness of abstractions, this allows us to significantly extend the class of models for which safety can be established, covering in particular models with 23 continuous variables and 2^71 discrete states, 20 continuous variables and 2^199 discrete states, and 9 continuous variables and 2^271 discrete states.
BibTeX
@techreport{AlthausBeberDammEtAl2016ATR,
TITLE = {Verification of Linear Hybrid Systems with Large Discrete State Spaces: Exploring the Design Space for Optimization},
AUTHOR = {Althaus, Ernst and Beber, Bj{\"o}rn and Damm, Werner and Disch, Stefan and Hagemann, Willem and Rakow, Astrid and Scholl, Christoph and Waldmann, Uwe and Wirtz, Boris},
LANGUAGE = {eng},
ISSN = {1860-9821},
NUMBER = {ATR103},
INSTITUTION = {SFB/TR 14 AVACS},
YEAR = {2016},
DATE = {2016},
ABSTRACT = {This paper provides a suite of optimization techniques for the verification of safety properties of linear hybrid automata with large discrete state spaces, such as naturally arising when incorporating health state monitoring and degradation levels into the controller design. Such models can -- in contrast to purely functional controller models -- not be analyzed with hybrid verification engines relying on explicit representations of modes, but require fully symbolic representations for both the continuous and discrete part of the state space. The optimization techniques shown yield consistently a speedup of about 20 against previously published results for a similar benchmark suite, and complement these with new results on counterexample guided abstraction refinement. In combination with the methods guaranteeing preciseness of abstractions, this allows us to significantly extend the class of models for which safety can be established, covering in particular models with 23 continuous variables and $2^{71}$ discrete states, 20 continuous variables and $2^{199}$ discrete states, and 9 continuous variables and $2^{271}$ discrete states.},
TYPE = {AVACS Technical Report},
VOLUME = {103},
}
Endnote
%0 Report
%A Althaus, Ernst
%A Beber, Björn
%A Damm, Werner
%A Disch, Stefan
%A Hagemann, Willem
%A Rakow, Astrid
%A Scholl, Christoph
%A Waldmann, Uwe
%A Wirtz, Boris
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
External Organizations
External Organizations
Automation of Logic, MPI for Informatics, Max Planck Society
International Max Planck Research School, MPI for Informatics, Max Planck Society
External Organizations
External Organizations
Automation of Logic, MPI for Informatics, Max Planck Society
External Organizations
%T Verification of Linear Hybrid Systems with Large Discrete State Spaces: Exploring the Design Space for Optimization
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-002C-4540-0
%Y SFB/TR 14 AVACS
%D 2016
%P 93 p.
%X This paper provides a suite of optimization techniques for the verification of safety properties of linear hybrid automata with large discrete state spaces, such as naturally arising when incorporating health state monitoring and degradation levels into the controller design. Such models can -- in contrast to purely functional controller models -- not be analyzed with hybrid verification engines relying on explicit representations of modes, but require fully symbolic representations for both the continuous and discrete part of the state space. The optimization techniques shown yield consistently a speedup of about 20 against previously published results for a similar benchmark suite, and complement these with new results on counterexample guided abstraction refinement. In combination with the methods guaranteeing preciseness of abstractions, this allows us to significantly extend the class of models for which safety can be established, covering in particular models with 23 continuous variables and 2^71 discrete states, 20 continuous variables and 2^199 discrete states, and 9 continuous variables and 2^271 discrete states.
%B AVACS Technical Report
%N 103
%@ false
%U http://www.avacs.org/fileadmin/Publikationen/Open/avacs_technical_report_103.pdf
Diversifying Search Results Using Time
D. Gupta and K. Berberich
Technical Report, 2016
Abstract
Getting an overview of a historic entity or event can be difficult in search results, especially if important dates concerning the entity or event are not known beforehand. For such information needs, users would benefit if returned results covered diverse dates, thus giving an overview of what has happened throughout history. Diversifying search results based on important dates can be a building block for applications, for instance, in digital humanities. Historians would thus be able to quickly explore longitudinal document collections by querying for entities or events without knowing associated important dates a priori. In this work, we describe an approach to diversify search results using temporal expressions (e.g., in the 1990s) from their contents. Our approach first identifies time intervals of interest to the given keyword query based on pseudo-relevant documents. It then re-ranks query results so as to maximize the coverage of identified time intervals. We present a novel and objective evaluation for our proposed approach. We test the effectiveness of our methods on the New York Times Annotated corpus and the Living Knowledge corpus, collectively consisting of around 6 million documents. Using history-oriented queries and encyclopedic resources we show that our method indeed is able to present search results diversified along time.
BibTeX
@techreport{GuptaReport2016-5-001,
TITLE = {Diversifying Search Results Using Time},
AUTHOR = {Gupta, Dhruv and Berberich, Klaus},
LANGUAGE = {eng},
ISSN = {0946-011X},
NUMBER = {MPI-I-2016-5-001},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2016},
ABSTRACT = {Getting an overview of a historic entity or event can be difficult in search results, especially if important dates concerning the entity or event are not known beforehand. For such information needs, users would benefit if returned results covered diverse dates, thus giving an overview of what has happened throughout history. Diversifying search results based on important dates can be a building block for applications, for instance, in digital humanities. Historians would thus be able to quickly explore longitudinal document collections by querying for entities or events without knowing associated important dates a priori. In this work, we describe an approach to diversify search results using temporal expressions (e.g., in the 1990s) from their contents. Our approach first identifies time intervals of interest to the given keyword query based on pseudo-relevant documents. It then re-ranks query results so as to maximize the coverage of identified time intervals. We present a novel and objective evaluation for our proposed approach. We test the effectiveness of our methods on the New York Times Annotated corpus and the Living Knowledge corpus, collectively consisting of around 6 million documents. Using history-oriented queries and encyclopedic resources we show that our method indeed is able to present search results diversified along time.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Gupta, Dhruv
%A Berberich, Klaus
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
%T Diversifying Search Results Using Time
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-002A-0AA4-C
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2016
%P 51 p.
%X Getting an overview of a historic entity or event can be difficult in search results, especially if important dates concerning the entity or event are not known beforehand. For such information needs, users would benefit if returned results covered diverse dates, thus giving an overview of what has happened throughout history. Diversifying search results based on important dates can be a building block for applications, for instance, in digital humanities. Historians would thus be able to quickly explore longitudinal document collections by querying for entities or events without knowing associated important dates a priori. In this work, we describe an approach to diversify search results using temporal expressions (e.g., in the 1990s) from their contents. Our approach first identifies time intervals of interest to the given keyword query based on pseudo-relevant documents. It then re-ranks query results so as to maximize the coverage of identified time intervals. We present a novel and objective evaluation for our proposed approach. We test the effectiveness of our methods on the New York Times Annotated corpus and the Living Knowledge corpus, collectively consisting of around 6 million documents. Using history-oriented queries and encyclopedic resources we show that our method indeed is able to present search results diversified along time.
%B Research Report
%@ false
Leveraging Semantic Annotations to Link Wikipedia and News Archives
A. Mishra and K. Berberich
Technical Report, 2016
Abstract
The incomprehensible amount of information available online has made it difficult to retrospect on past events. We propose a novel linking problem to connect excerpts from Wikipedia summarizing events to online news articles elaborating on them.
To address the linking problem, we cast it into an information retrieval task by treating a given excerpt as a user query with the goal to retrieve a ranked list of relevant news articles. We find that Wikipedia excerpts often come with additional semantics, in their textual descriptions, representing the time, geolocations, and named entities involved in the event. Our retrieval model leverages text and semantic annotations as different dimensions of an event by estimating independent query models to rank documents. In our experiments on two datasets, we compare methods that consider different combinations of dimensions and find that the approach that leverages all dimensions suits our problem best.
BibTeX
@techreport{MishraBerberich16,
TITLE = {Leveraging Semantic Annotations to Link Wikipedia and News Archives},
AUTHOR = {Mishra, Arunav and Berberich, Klaus},
LANGUAGE = {eng},
ISSN = {0946-011X},
NUMBER = {MPI-I-2016-5-002},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2016},
ABSTRACT = {The incomprehensible amount of information available online has made it difficult to retrospect on past events. We propose a novel linking problem to connect excerpts from Wikipedia summarizing events to online news articles elaborating on them. To address the linking problem, we cast it into an information retrieval task by treating a given excerpt as a user query with the goal to retrieve a ranked list of relevant news articles. We find that Wikipedia excerpts often come with additional semantics, in their textual descriptions, representing the time, geolocations, and named entities involved in the event. Our retrieval model leverages text and semantic annotations as different dimensions of an event by estimating independent query models to rank documents. In our experiments on two datasets, we compare methods that consider different combinations of dimensions and find that the approach that leverages all dimensions suits our problem best.},
TYPE = {Research Reports},
}
Endnote
%0 Report
%A Mishra, Arunav
%A Berberich, Klaus
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
%T Leveraging Semantic Annotations to Link Wikipedia and News Archives :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0029-5FF0-A
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2016
%P 21 p.
%X The incomprehensible amount of information available online has made it difficult to retrospect on past events. We propose a novel linking problem to connect excerpts from Wikipedia summarizing events to online news articles elaborating on them.
To address the linking problem, we cast it into an information retrieval task by treating a given excerpt as a user query with the goal to retrieve a ranked list of relevant news articles. We find that Wikipedia excerpts often come with additional semantics, in their textual descriptions, representing the time, geolocations, and named entities involved in the event. Our retrieval model leverages text and semantic annotations as different dimensions of an event by estimating independent query models to rank documents. In our experiments on two datasets, we compare methods that consider different combinations of dimensions and find that the approach that leverages all dimensions suits our problem best.
%B Research Reports
%@ false
Real-time Joint Tracking of a Hand Manipulating an Object from RGB-D Input
S. Sridhar, F. Mueller, M. Zollhöfer, D. Casas, A. Oulasvirta and C. Theobalt
Technical Report, 2016
Abstract
Real-time simultaneous tracking of hands manipulating and interacting with external objects has many potential applications in augmented reality, tangible computing, and wearable computing. However, due to difficult occlusions, fast motions, and uniform hand appearance, jointly tracking hand and object pose is more challenging than tracking either of the two separately. Many previous approaches resort to complex multi-camera setups to remedy the occlusion problem and often employ expensive segmentation and optimization steps, which make real-time tracking impossible. In this paper, we propose a real-time solution that uses a single commodity RGB-D camera. The core of our approach is a 3D articulated Gaussian mixture alignment strategy tailored to hand-object tracking that allows fast pose optimization. The alignment energy uses novel regularizers to address occlusions and hand-object contacts. For added robustness, we guide the optimization with discriminative part classification of the hand and segmentation of the object. We conducted extensive experiments on several existing datasets and introduce a new annotated hand-object dataset. Quantitative and qualitative results show the key advantages of our method: speed, accuracy, and robustness.
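The core of the approach above is an alignment between Gaussian mixtures. As a toy illustration only (the report's actual energy additionally handles articulation and uses occlusion and contact regularizers), a basic pairwise-overlap energy between two isotropic Gaussian mixtures can be sketched as:

```python
import math

def mixture_alignment_energy(model, observation):
    """Negative sum of pairwise Gaussian overlaps; lower means better aligned.
    Each mixture is a list of (weight, mean_xyz, sigma) components."""
    e = 0.0
    for wi, mi, si in model:
        for wj, mj, sj in observation:
            d2 = sum((a - b) ** 2 for a, b in zip(mi, mj))
            var = si * si + sj * sj
            e -= wi * wj * math.exp(-d2 / (2.0 * var))
    return e

# Perfectly overlapping mixtures yield a lower (more negative) energy
# than displaced ones, which is what a pose optimizer would exploit.
aligned = [(1.0, (0.0, 0.0, 0.0), 1.0)]
shifted = [(1.0, (3.0, 0.0, 0.0), 1.0)]
```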
Export
BibTeX
@techreport{Report2016-4-001,
TITLE = {Real-time Joint Tracking of a Hand Manipulating an Object from {RGB-D} Input},
AUTHOR = {Sridhar, Srinath and Mueller, Franziska and Zollh{\"o}fer, Michael and Casas, Dan and Oulasvirta, Antti and Theobalt, Christian},
LANGUAGE = {eng},
ISSN = {0946-011X},
NUMBER = {MPI-I-2016-4-001},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2016},
ABSTRACT = {Real-time simultaneous tracking of hands manipulating and interacting with external objects has many potential applications in augmented reality, tangible computing, and wearable computing. However, due to difficult occlusions, fast motions, and uniform hand appearance, jointly tracking hand and object pose is more challenging than tracking either of the two separately. Many previous approaches resort to complex multi-camera setups to remedy the occlusion problem and often employ expensive segmentation and optimization steps, which make real-time tracking impossible. In this paper, we propose a real-time solution that uses a single commodity RGB-D camera. The core of our approach is a 3D articulated Gaussian mixture alignment strategy tailored to hand-object tracking that allows fast pose optimization. The alignment energy uses novel regularizers to address occlusions and hand-object contacts. For added robustness, we guide the optimization with discriminative part classification of the hand and segmentation of the object. We conducted extensive experiments on several existing datasets and introduce a new annotated hand-object dataset. Quantitative and qualitative results show the key advantages of our method: speed, accuracy, and robustness.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Sridhar, Srinath
%A Mueller, Franziska
%A Zollhöfer, Michael
%A Casas, Dan
%A Oulasvirta, Antti
%A Theobalt, Christian
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T Real-time Joint Tracking of a Hand Manipulating an Object from RGB-D Input :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-002B-5510-A
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2016
%P 31 p.
%X Real-time simultaneous tracking of hands manipulating and interacting with external objects has many potential applications in augmented reality, tangible computing, and wearable computing. However, due to difficult occlusions, fast motions, and uniform hand appearance, jointly tracking hand and object pose is more challenging than tracking either of the two separately. Many previous approaches resort to complex multi-camera setups to remedy the occlusion problem and often employ expensive segmentation and optimization steps, which make real-time tracking impossible. In this paper, we propose a real-time solution that uses a single commodity RGB-D camera. The core of our approach is a 3D articulated Gaussian mixture alignment strategy tailored to hand-object tracking that allows fast pose optimization. The alignment energy uses novel regularizers to address occlusions and hand-object contacts. For added robustness, we guide the optimization with discriminative part classification of the hand and segmentation of the object. We conducted extensive experiments on several existing datasets and introduce a new annotated hand-object dataset. Quantitative and qualitative results show the key advantages of our method: speed, accuracy, and robustness.
%B Research Report
%@ false
FullHand: Markerless Skeleton-based Tracking for Free-Hand Interaction
S. Sridhar, G. Bailly, E. Heydrich, A. Oulasvirta and C. Theobalt
Technical Report, 2016
Abstract
This paper advances a novel markerless hand tracking method for interactive applications. FullHand uses input from RGB and depth cameras in a desktop setting. It combines, in a voting scheme, a discriminative, part-based pose retrieval with a generative pose estimation method based on local optimization. We develop this approach to enable: (1) capturing hand articulations with high number of degrees of freedom, including the motion of all fingers, (2) sufficient precision, shown in a dataset of user-generated gestures, and (3) a high framerate of 50 fps for one hand. We discuss the design of free-hand interactions with the tracker and present several demonstrations ranging from simple (few DOFs) to complex (finger individuation plus global hand motion), including mouse operation, a first-person shooter and virtual globe navigation. A user study on the latter shows that free-hand interactions implemented for the tracker can equal mouse-based interactions in user performance.
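FullHand combines a discriminative pose retrieval with a generative local optimization in a voting scheme. As a minimal sketch only (the pose labels and confidences below are invented, and the paper's actual voting operates on full articulated poses), confidence-weighted voting over candidate hypotheses can be written as:

```python
def vote(candidates):
    """candidates: (pose_hypothesis, confidence) pairs contributed by the
    discriminative retrieval and the generative optimizer; the hypothesis
    with the highest summed confidence wins."""
    tally = {}
    for pose, conf in candidates:
        tally[pose] = tally.get(pose, 0.0) + conf
    return max(tally, key=tally.get)

# Two weak votes for "open_hand" outweigh one stronger vote for "fist".
winner = vote([("open_hand", 0.4), ("fist", 0.3), ("open_hand", 0.2)])
```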
Export
BibTeX
@techreport{Report2016-4-002,
TITLE = {{FullHand}: {M}arkerless Skeleton-based Tracking for Free-Hand Interaction},
AUTHOR = {Sridhar, Srinath and Bailly, Gilles and Heydrich, Elias and Oulasvirta, Antti and Theobalt, Christian},
LANGUAGE = {eng},
ISSN = {0946-011X},
NUMBER = {MPI-I-2016-4-002},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2016},
ABSTRACT = {This paper advances a novel markerless hand tracking method for interactive applications. FullHand uses input from RGB and depth cameras in a desktop setting. It combines, in a voting scheme, a discriminative, part-based pose retrieval with a generative pose estimation method based on local optimization. We develop this approach to enable: (1) capturing hand articulations with high number of degrees of freedom, including the motion of all fingers, (2) sufficient precision, shown in a dataset of user-generated gestures, and (3) a high framerate of 50 fps for one hand. We discuss the design of free-hand interactions with the tracker and present several demonstrations ranging from simple (few DOFs) to complex (finger individuation plus global hand motion), including mouse operation, a first-person shooter and virtual globe navigation. A user study on the latter shows that free-hand interactions implemented for the tracker can equal mouse-based interactions in user performance.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Sridhar, Srinath
%A Bailly, Gilles
%A Heydrich, Elias
%A Oulasvirta, Antti
%A Theobalt, Christian
%+ Computer Graphics, MPI for Informatics, Max Planck Society
External Organizations
External Organizations
External Organizations
Computer Graphics, MPI for Informatics, Max Planck Society
%T FullHand: Markerless Skeleton-based Tracking for Free-Hand Interaction :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-002B-7456-7
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2016
%P 11 p.
%X This paper advances a novel markerless hand tracking method for interactive applications. FullHand uses input from RGB and depth cameras in a desktop setting. It combines, in a voting scheme, a discriminative, part-based pose retrieval with a generative pose estimation method based on local optimization. We develop this approach to enable: (1) capturing hand articulations with high number of degrees of freedom, including the motion of all fingers, (2) sufficient precision, shown in a dataset of user-generated gestures, and (3) a high framerate of 50 fps for one hand. We discuss the design of free-hand interactions with the tracker and present several demonstrations ranging from simple (few DOFs) to complex (finger individuation plus global hand motion), including mouse operation, a first-person shooter and virtual globe navigation. A user study on the latter shows that free-hand interactions implemented for the tracker can equal mouse-based interactions in user performance.
%B Research Report
%@ false
2015
Computational Modelling and Prediction of Gaze Estimation Error for Head-mounted Eye Trackers
M. Barz, A. Bulling and F. Daiber
Technical Report, 2015
Abstract
Head-mounted eye tracking has significant potential for mobile gaze-based interaction with ambient displays, but current interfaces lack information about the tracker's gaze estimation error. Consequently, current interfaces do not exploit the full potential of gaze input as the inherent estimation error cannot be dealt with. The error depends on the physical properties of the display and constantly varies with changes in position and distance of the user to the display. In this work we present a computational model of gaze estimation error for head-mounted eye trackers. Our model covers the full processing pipeline for mobile gaze estimation, namely mapping of pupil positions to scene camera coordinates, marker-based display detection, and display mapping. We build the model based on a series of controlled measurements of a sample state-of-the-art monocular head-mounted eye tracker. Results show that our model can predict gaze estimation error with a root mean squared error of 17.99 px ($1.96^\circ$).
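The reported 17.99 px figure is a root mean squared error over predicted versus measured gaze points. As a small self-contained sketch (the sample points below are invented; the paper's evaluation uses its controlled measurements), computing RMSE over 2D gaze points looks like:

```python
import math

def rmse(predicted, actual):
    """Root mean squared error between predicted and measured gaze points (px)."""
    sq = [(px - ax) ** 2 + (py - ay) ** 2
          for (px, py), (ax, ay) in zip(predicted, actual)]
    return math.sqrt(sum(sq) / len(sq))

# One exact prediction and one 5 px off -> RMSE = sqrt((0 + 25) / 2)
error_px = rmse([(0.0, 0.0), (3.0, 4.0)], [(0.0, 0.0), (0.0, 0.0)])
```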
Export
BibTeX
@techreport{Barz_Rep15,
TITLE = {Computational Modelling and Prediction of Gaze Estimation Error for Head-mounted Eye Trackers},
AUTHOR = {Barz, Michael and Bulling, Andreas and Daiber, Florian},
LANGUAGE = {eng},
URL = {https://perceptual.mpi-inf.mpg.de/files/2015/01/gazequality.pdf},
NUMBER = {15-01},
INSTITUTION = {DFKI},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2015},
ABSTRACT = {Head-mounted eye tracking has significant potential for mobile gaze-based interaction with ambient displays, but current interfaces lack information about the tracker's gaze estimation error. Consequently, current interfaces do not exploit the full potential of gaze input as the inherent estimation error cannot be dealt with. The error depends on the physical properties of the display and constantly varies with changes in position and distance of the user to the display. In this work we present a computational model of gaze estimation error for head-mounted eye trackers. Our model covers the full processing pipeline for mobile gaze estimation, namely mapping of pupil positions to scene camera coordinates, marker-based display detection, and display mapping. We build the model based on a series of controlled measurements of a sample state-of-the-art monocular head-mounted eye tracker. Results show that our model can predict gaze estimation error with a root mean squared error of 17.99~px ($1.96^\circ$).},
TYPE = {DFKI Research Report},
}
Endnote
%0 Report
%A Barz, Michael
%A Bulling, Andreas
%A Daiber, Florian
%+ External Organizations
Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society
External Organizations
%T Computational Modelling and Prediction of Gaze Estimation Error for Head-mounted Eye Trackers :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0024-B972-8
%U https://perceptual.mpi-inf.mpg.de/files/2015/01/gazequality.pdf
%Y DFKI
%C Saarbrücken
%D 2015
%8 01.01.2015
%P 10 p.
%X Head-mounted eye tracking has significant potential for mobile gaze-based interaction with ambient displays, but current interfaces lack information about the tracker's gaze estimation error. Consequently, current interfaces do not exploit the full potential of gaze input as the inherent estimation error cannot be dealt with. The error depends on the physical properties of the display and constantly varies with changes in position and distance of the user to the display. In this work we present a computational model of gaze estimation error for head-mounted eye trackers. Our model covers the full processing pipeline for mobile gaze estimation, namely mapping of pupil positions to scene camera coordinates, marker-based display detection, and display mapping. We build the model based on a series of controlled measurements of a sample state-of-the-art monocular head-mounted eye tracker. Results show that our model can predict gaze estimation error with a root mean squared error of 17.99 px ($1.96^\circ$).
%B DFKI Research Report
%U http://www.dfki.de/web/forschung/publikationen/renameFileForDownload?filename=gazequality.pdf&file_id=uploads_2388
Decidability of Verification of Safety Properties of Spatial Families of Linear Hybrid Automata
W. Damm, M. Horbach and V. Sofronie-Stokkermans
Technical Report, 2015
Export
BibTeX
@techreport{atr111,
TITLE = {Decidability of Verification of Safety Properties of Spatial Families of Linear Hybrid Automata},
AUTHOR = {Damm, Werner and Horbach, Matthias and Sofronie-Stokkermans, Viorica},
LANGUAGE = {eng},
ISSN = {1860-9821},
NUMBER = {ATR111},
INSTITUTION = {SFB/TR 14 AVACS},
YEAR = {2015},
TYPE = {AVACS Technical Report},
VOLUME = {111},
}
Endnote
%0 Report
%A Damm, Werner
%A Horbach, Matthias
%A Sofronie-Stokkermans, Viorica
%+ External Organizations
Automation of Logic, MPI for Informatics, Max Planck Society
Automation of Logic, MPI for Informatics, Max Planck Society
%T Decidability of Verification of Safety Properties of Spatial Families of Linear Hybrid Automata :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-002A-0805-6
%Y SFB/TR 14 AVACS
%D 2015
%P 52 p.
%B AVACS Technical Report
%N 111
%@ false
GazeProjector: Location-independent Gaze Interaction on and Across Multiple Displays
C. Lander, S. Gehring, A. Krüger, S. Boring and A. Bulling
Technical Report, 2015
Abstract
Mobile gaze-based interaction with multiple displays may occur from arbitrary positions and orientations. However, maintaining high gaze estimation accuracy still represents a significant challenge. To address this, we present GazeProjector, a system that combines accurate point-of-gaze estimation with natural feature tracking on displays to determine the mobile eye tracker’s position relative to a display. The detected eye positions are transformed onto that display allowing for gaze-based interaction. This allows for seamless gaze estimation and interaction on (1) multiple displays of arbitrary sizes, (2) independently of the user’s position and orientation to the display. In a user study with 12 participants we compared GazeProjector to existing well-established methods such as visual on-screen markers and a state-of-the-art motion capture system. Our results show that our approach is robust to varying head poses, orientations, and distances to the display, while still providing high gaze estimation accuracy across multiple displays without re-calibration. The system represents an important step towards the vision of pervasive gaze-based interfaces.
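Transforming detected gaze positions onto a tracked display, as described above, is commonly realized with a planar homography; this is a generic sketch of that mapping step (the matrices here are placeholders, not GazeProjector's estimated transforms, which come from natural feature tracking):

```python
def apply_homography(H, point):
    """Map a scene-camera gaze point onto display coordinates via a
    3x3 homography H given as row-major nested lists."""
    x, y = point
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    u = (H[0][0] * x + H[0][1] * y + H[0][2]) / w
    v = (H[1][0] * x + H[1][1] * y + H[1][2]) / w
    return u, v

# The identity homography leaves a gaze point unchanged;
# a scaling homography maps it into a larger display's pixel grid.
identity = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
scale2x = [[2.0, 0.0, 0.0], [0.0, 2.0, 0.0], [0.0, 0.0, 1.0]]
```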
Export
BibTeX
@techreport{Lander_Rep15,
TITLE = {{GazeProjector}: Location-independent Gaze Interaction on and Across Multiple Displays},
AUTHOR = {Lander, Christian and Gehring, Sven and Kr{\"u}ger, Antonio and Boring, Sebastian and Bulling, Andreas},
LANGUAGE = {eng},
URL = {http://www.dfki.de/web/research/publications?pubid=7618},
NUMBER = {15-01},
INSTITUTION = {DFKI},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2015},
ABSTRACT = {Mobile gaze-based interaction with multiple displays may occur from arbitrary positions and orientations. However, maintaining high gaze estimation accuracy still represents a significant challenge. To address this, we present GazeProjector, a system that combines accurate point-of-gaze estimation with natural feature tracking on displays to determine the mobile eye tracker{\textquoteright}s position relative to a display. The detected eye positions are transformed onto that display allowing for gaze-based interaction. This allows for seamless gaze estimation and interaction on (1) multiple displays of arbitrary sizes, (2) independently of the user{\textquoteright}s position and orientation to the display. In a user study with 12 participants we compared GazeProjector to existing well-established methods such as visual on-screen markers and a state-of-the-art motion capture system. Our results show that our approach is robust to varying head poses, orientations, and distances to the display, while still providing high gaze estimation accuracy across multiple displays without re-calibration. The system represents an important step towards the vision of pervasive gaze-based interfaces.},
TYPE = {DFKI Research Report},
}
Endnote
%0 Report
%A Lander, Christian
%A Gehring, Sven
%A Krüger, Antonio
%A Boring, Sebastian
%A Bulling, Andreas
%+ External Organizations
External Organizations
External Organizations
External Organizations
Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society
%T GazeProjector: Location-independent Gaze Interaction on and Across Multiple Displays :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0024-B947-A
%U http://www.dfki.de/web/research/publications?pubid=7618
%Y DFKI
%C Saarbrücken
%D 2015
%8 01.01.2015
%P 10 p.
%X Mobile gaze-based interaction with multiple displays may occur from arbitrary positions and orientations. However, maintaining high gaze estimation accuracy still represents a significant challenge. To address this, we present GazeProjector, a system that combines accurate point-of-gaze estimation with natural feature tracking on displays to determine the mobile eye tracker’s position relative to a display. The detected eye positions are transformed onto that display allowing for gaze-based interaction. This allows for seamless gaze estimation and interaction on (1) multiple displays of arbitrary sizes, (2) independently of the user’s position and orientation to the display. In a user study with 12 participants we compared GazeProjector to existing well-established methods such as visual on-screen markers and a state-of-the-art motion capture system. Our results show that our approach is robust to varying head poses, orientations, and distances to the display, while still providing high gaze estimation accuracy across multiple displays without re-calibration. The system represents an important step towards the vision of pervasive gaze-based interfaces.
%B DFKI Research Report
Modal Tableau Systems with Blocking and Congruence Closure
R. A. Schmidt and U. Waldmann
Technical Report, 2015
Export
BibTeX
@techreport{SchmidtTR2015,
TITLE = {Modal Tableau Systems with Blocking and Congruence Closure},
AUTHOR = {Schmidt, Renate A. and Waldmann, Uwe},
LANGUAGE = {eng},
NUMBER = {uk-ac-man-scw:268816},
INSTITUTION = {University of Manchester},
ADDRESS = {Manchester},
YEAR = {2015},
TYPE = {eScholar},
}
Endnote
%0 Report
%A Schmidt, Renate A.
%A Waldmann, Uwe
%+ External Organizations
Automation of Logic, MPI for Informatics, Max Planck Society
%T Modal Tableau Systems with Blocking and Congruence Closure :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-002A-08BC-A
%Y University of Manchester
%C Manchester
%D 2015
%P 22 p.
%B eScholar
%U https://www.escholar.manchester.ac.uk/uk-ac-man-scw:268816
%U https://www.research.manchester.ac.uk/portal/files/32297317/FULL_TEXT.PDF
2014
Phrase Query Optimization on Inverted Indexes
A. Anand, I. Mele, S. Bedathur and K. Berberich
Technical Report, 2014
Abstract
Phrase queries are a key functionality of modern search engines. Beyond that, they increasingly serve as an important building block for applications such as entity-oriented search, text analytics, and plagiarism detection. Processing phrase queries is costly, though, since positional information has to be kept in the index and all words, including stopwords, need to be considered.
We consider an augmented inverted index that indexes selected variable-length multi-word sequences in addition to single words. We study how arbitrary phrase queries can be processed efficiently on such an augmented inverted index. We show that the underlying optimization problem is NP-hard in the general case and describe an exact exponential algorithm and an approximation algorithm to its solution. Experiments on ClueWeb09 and The New York Times with different real-world query workloads examine the practical performance of our methods.
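Choosing how to split a phrase query over an index of selected multi-word sequences is the optimization problem the abstract refers to. This toy dynamic program (the cost values are invented, and the paper's exact cost model and NP-hard general setting are richer than this contiguous-segmentation special case) finds a cheapest segmentation of a phrase into indexed sequences:

```python
def cheapest_segmentation(terms, index_cost):
    """Split the phrase into contiguous indexed word sequences, minimizing
    total posting-list processing cost. index_cost maps a tuple of words
    to a hypothetical cost; only sequences present in the map may be used."""
    n = len(terms)
    best = [0.0] + [float("inf")] * n  # best[i]: cheapest cover of terms[:i]
    cut = [0] * (n + 1)                # cut[i]: start of the last segment
    for i in range(1, n + 1):
        for j in range(i):
            seq = tuple(terms[j:i])
            if seq in index_cost and best[j] + index_cost[seq] < best[i]:
                best[i], cut[i] = best[j] + index_cost[seq], j
    segments, i = [], n
    while i > 0:  # walk back through the chosen cuts
        segments.append(tuple(terms[cut[i]:i]))
        i = cut[i]
    return best[n], segments[::-1]

# The indexed bigram ("new", "york") is cheaper than its two single words.
costs = {("new",): 5.0, ("york",): 7.0, ("times",): 6.0, ("new", "york"): 3.0}
cost, segs = cheapest_segmentation(["new", "york", "times"], costs)
```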
Export
BibTeX
@techreport{AnandMeleBedathurBerberich2014,
TITLE = {Phrase Query Optimization on Inverted Indexes},
AUTHOR = {Anand, Avishek and Mele, Ida and Bedathur, Srikanta and Berberich, Klaus},
LANGUAGE = {eng},
ISSN = {0946-011X},
NUMBER = {MPI-I-2014-5-002},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2014},
ABSTRACT = {Phrase queries are a key functionality of modern search engines. Beyond that, they increasingly serve as an important building block for applications such as entity-oriented search, text analytics, and plagiarism detection. Processing phrase queries is costly, though, since positional information has to be kept in the index and all words, including stopwords, need to be considered. We consider an augmented inverted index that indexes selected variable-length multi-word sequences in addition to single words. We study how arbitrary phrase queries can be processed efficiently on such an augmented inverted index. We show that the underlying optimization problem is NP-hard in the general case and describe an exact exponential algorithm and an approximation algorithm to its solution. Experiments on ClueWeb09 and The New York Times with different real-world query workloads examine the practical performance of our methods.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Anand, Avishek
%A Mele, Ida
%A Bedathur, Srikanta
%A Berberich, Klaus
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
%T Phrase Query Optimization on Inverted Indexes :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0024-022A-3
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2014
%P 20 p.
%X Phrase queries are a key functionality of modern search engines. Beyond that, they increasingly serve as an important building block for applications such as entity-oriented search, text analytics, and plagiarism detection. Processing phrase queries is costly, though, since positional information has to be kept in the index and all words, including stopwords, need to be considered.
We consider an augmented inverted index that indexes selected variable-length multi-word sequences in addition to single words. We study how arbitrary phrase queries can be processed efficiently on such an augmented inverted index. We show that the underlying optimization problem is NP-hard in the general case and describe an exact exponential algorithm and an approximation algorithm to its solution. Experiments on ClueWeb09 and The New York Times with different real-world query workloads examine the practical performance of our methods.
%B Research Report
%@ false
Learning Tuple Probabilities in Probabilistic Databases
M. Dylla and M. Theobald
Technical Report, 2014
Abstract
Learning the parameters of complex probabilistic-relational models from labeled training data is a standard technique in machine learning, which has been intensively studied in the subfield of Statistical Relational Learning (SRL), but---so far---this is still an under-investigated topic in the context of Probabilistic Databases (PDBs). In this paper, we focus on learning the probability values of base tuples in a PDB from query answers, the latter of which are represented as labeled lineage formulas. Specifically, we consider labels in the form of pairs, each consisting of a Boolean lineage formula and a marginal probability that comes attached to the corresponding query answer. The resulting learning problem can be viewed as the inverse problem to confidence computations in PDBs: given a set of labeled query answers, learn the probability values of the base tuples, such that the marginal probabilities of the query answers again yield the assigned probability labels. We analyze the learning problem from a theoretical perspective, devise two optimization-based objectives, and provide an efficient algorithm (based on Stochastic Gradient Descent) for solving these objectives. Finally, we conclude this work with an experimental evaluation on three real-world and one synthetic dataset, while competing with various techniques from SRL, reasoning in information extraction, and optimization.
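For the special case of a lineage formula that is a conjunction of independent tuples, the marginal is simply the product of the tuple probabilities, and gradient descent on the squared label error gives a toy analogue of the paper's SGD-based learning (this sketch is an invented simplification, not the paper's algorithm, which handles general Boolean lineage):

```python
def learn_tuple_probs(labels, probs, lr=0.1, steps=2000):
    """labels: (tuple_ids, target_marginal) pairs where each lineage is a
    conjunction of independent tuples, so marginal = product of their probs.
    Gradient descent on the squared error between marginal and label."""
    for _ in range(steps):
        for ids, target in labels:
            m = 1.0
            for t in ids:
                m *= probs[t]
            err = m - target
            for t in ids:
                grad = 2.0 * err * (m / probs[t])  # d(err^2)/d(probs[t])
                probs[t] -= lr * grad
                probs[t] = min(max(probs[t], 1e-3), 1.0)  # stay a probability
    return probs

# One answer labeled 0.25 whose lineage is the conjunction a AND b:
# the learned probabilities must multiply back to the label.
labels = [(("a", "b"), 0.25)]
probs = learn_tuple_probs(labels, {"a": 0.9, "b": 0.9})
```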
Export
BibTeX
@techreport{Dylla-Learning2014,
TITLE = {Learning Tuple Probabilities in Probabilistic Databases},
AUTHOR = {Dylla, Maximilian and Theobald, Martin},
LANGUAGE = {eng},
ISSN = {0946-011X},
NUMBER = {MPI-I-2014-5-001},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2014},
ABSTRACT = {Learning the parameters of complex probabilistic-relational models from labeled training data is a standard technique in machine learning, which has been intensively studied in the subfield of Statistical Relational Learning (SRL), but---so far---this is still an under-investigated topic in the context of Probabilistic Databases (PDBs). In this paper, we focus on learning the probability values of base tuples in a PDB from query answers, the latter of which are represented as labeled lineage formulas. Specifically, we consider labels in the form of pairs, each consisting of a Boolean lineage formula and a marginal probability that comes attached to the corresponding query answer. The resulting learning problem can be viewed as the inverse problem to confidence computations in PDBs: given a set of labeled query answers, learn the probability values of the base tuples, such that the marginal probabilities of the query answers again yield the assigned probability labels. We analyze the learning problem from a theoretical perspective, devise two optimization-based objectives, and provide an efficient algorithm (based on Stochastic Gradient Descent) for solving these objectives. Finally, we conclude this work with an experimental evaluation on three real-world and one synthetic dataset, while competing with various techniques from SRL, reasoning in information extraction, and optimization.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Dylla, Maximilian
%A Theobald, Martin
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
%T Learning Tuple Probabilities in Probabilistic Databases :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0019-8492-6
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2014
%P 51 p.
%X Learning the parameters of complex probabilistic-relational models from labeled training data is a standard technique in machine learning, which has been intensively studied in the subfield of Statistical Relational Learning (SRL), but---so far---this is still an under-investigated topic in the context of Probabilistic Databases (PDBs). In this paper, we focus on learning the probability values of base tuples in a PDB from query answers, the latter of which are represented as labeled lineage formulas. Specifically, we consider labels in the form of pairs, each consisting of a Boolean lineage formula and a marginal probability that comes attached to the corresponding query answer. The resulting learning problem can be viewed as the inverse problem to confidence computations in PDBs: given a set of labeled query answers, learn the probability values of the base tuples, such that the marginal probabilities of the query answers again yield the assigned probability labels. We analyze the learning problem from a theoretical perspective, devise two optimization-based objectives, and provide an efficient algorithm (based on Stochastic Gradient Descent) for solving these objectives. Finally, we conclude this work with an experimental evaluation on three real-world and one synthetic dataset, while competing with various techniques from SRL, reasoning in information extraction, and optimization.
%B Research Report
%@ false
Obtaining Finite Local Theory Axiomatizations via Saturation
M. Horbach and V. Sofronie-Stokkermans
Technical Report, 2014
Abstract
In this paper we study theory combinations over non-disjoint signatures in which hierarchical and modular reasoning is possible. We use a notion of locality of a theory extension parameterized by a closure operator on ground terms. We give criteria for recognizing these types of theory extensions. We then show that combinations of extensions of theories which are local in this extended sense also have a locality property and hence allow modular and hierarchical reasoning. We thus obtain parameterized decidability and complexity results for many (combinations of) theories important in verification.
Export
BibTeX
@techreport{atr093,
TITLE = {Obtaining Finite Local Theory Axiomatizations via Saturation},
AUTHOR = {Horbach, Matthias and Sofronie-Stokkermans, Viorica},
LANGUAGE = {eng},
ISSN = {1860-9821},
NUMBER = {ATR93},
INSTITUTION = {SFB/TR 14 AVACS},
YEAR = {2014},
ABSTRACT = {In this paper we study theory combinations over non-disjoint signatures in which hierarchical and modular reasoning is possible. We use a notion of locality of a theory extension parameterized by a closure operator on ground terms. We give criteria for recognizing these types of theory extensions. We then show that combinations of extensions of theories which are local in this extended sense have also a locality property and hence allow modular and hierarchical reasoning. We thus obtain parameterized decidability and complexity results for many (combinations of) theories important in verification.},
TYPE = {AVACS Technical Report},
VOLUME = {93},
}
Endnote
%0 Report
%A Horbach, Matthias
%A Sofronie-Stokkermans, Viorica
%+ Automation of Logic, MPI for Informatics, Max Planck Society
Automation of Logic, MPI for Informatics, Max Planck Society
%T Obtaining Finite Local Theory Axiomatizations via Saturation :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0024-C90C-F
%Y SFB/TR 14 AVACS
%D 2014
%P 26 p.
%X In this paper we study theory combinations over non-disjoint
signatures in which hierarchical and modular reasoning is
possible. We use a notion of locality of a theory extension
parameterized by a closure operator on ground terms.
We give criteria for recognizing these types of theory
extensions. We then show that combinations of extensions of
theories which are local in this extended sense have also a
locality property and hence allow modular and hierarchical
reasoning. We thus obtain parameterized decidability and
complexity results for many (combinations of) theories
important in verification.
%B AVACS Technical Report
%N 93
%@ false
%U http://www.avacs.org/Publikationen/Open/avacs_technical_report_093.pdf
Local High-order Regularization on Data Manifolds
K. I. Kim, J. Tompkin and C. Theobalt
Technical Report, 2014
K. I. Kim, J. Tompkin and C. Theobalt
Technical Report, 2014
Export
BibTeX
@techreport{KimTR2014,
TITLE = {Local High-order Regularization on Data Manifolds},
AUTHOR = {Kim, Kwang In and Tompkin, James and Theobalt, Christian},
LANGUAGE = {eng},
ISSN = {0946-011X},
NUMBER = {MPI-I-2014-4-001},
INSTITUTION = {Max-Planck Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2014},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Kim, Kwang In
%A Tompkin, James
%A Theobalt, Christian
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T Local High-order Regularization on Data Manifolds :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0024-B210-7
%Y Max-Planck Institut für Informatik
%C Saarbrücken
%D 2014
%P 12 p.
%B Research Report
%@ false
Fast Tracking of Hand and Finger Articulations Using a Single Depth Camera
S. Sridhar, A. Oulasvirta and C. Theobalt
Technical Report, 2014
S. Sridhar, A. Oulasvirta and C. Theobalt
Technical Report, 2014
Export
BibTeX
@techreport{Sridhar2014,
TITLE = {Fast Tracking of Hand and Finger Articulations Using a Single Depth Camera},
AUTHOR = {Sridhar, Srinath and Oulasvirta, Antti and Theobalt, Christian},
LANGUAGE = {eng},
ISSN = {0946-011X},
NUMBER = {MPI-I-2014-4-002},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2014},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Sridhar, Srinath
%A Oulasvirta, Antti
%A Theobalt, Christian
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T Fast Tracking of Hand and Finger Articulations Using a Single Depth Camera :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0024-B5B8-8
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2014
%P 14 p.
%B Research Report
%@ false
2013
Hierarchic Superposition with Weak Abstraction
P. Baumgartner and U. Waldmann
Technical Report, 2013
P. Baumgartner and U. Waldmann
Technical Report, 2013
Abstract
Many applications of automated deduction require reasoning in
first-order logic modulo background theories, in particular some
form of integer arithmetic. A major unsolved research challenge
is to design theorem provers that are "reasonably complete"
even in the presence of free function symbols ranging into a
background theory sort. The hierarchic superposition calculus
of Bachmair, Ganzinger, and Waldmann already supports such
symbols, but, as we demonstrate, not optimally. This paper aims
to rectify the situation by introducing a novel form of clause
abstraction, a core component in the hierarchic superposition
calculus for transforming clauses into a form needed for internal
operation. We argue for the benefits of the resulting calculus
and provide a new completeness result for the fragment where
all background-sorted terms are ground.
Export
BibTeX
@techreport{Waldmann2013,
TITLE = {Hierarchic Superposition with Weak Abstraction},
AUTHOR = {Baumgartner, Peter and Waldmann, Uwe},
LANGUAGE = {eng},
ISSN = {0946-011X},
NUMBER = {MPI-I-2014-RG1-002},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2013},
ABSTRACT = {Many applications of automated deduction require reasoning in first-order logic modulo background theories, in particular some form of integer arithmetic. A major unsolved research challenge is to design theorem provers that are "reasonably complete" even in the presence of free function symbols ranging into a background theory sort. The hierarchic superposition calculus of Bachmair, Ganzinger, and Waldmann already supports such symbols, but, as we demonstrate, not optimally. This paper aims to rectify the situation by introducing a novel form of clause abstraction, a core component in the hierarchic superposition calculus for transforming clauses into a form needed for internal operation. We argue for the benefits of the resulting calculus and provide a new completeness result for the fragment where all background-sorted terms are ground.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Baumgartner, Peter
%A Waldmann, Uwe
%+ External Organizations
Automation of Logic, MPI for Informatics, Max Planck Society
%T Hierarchic Superposition with Weak Abstraction :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0024-03A8-0
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2013
%P 45 p.
%X Many applications of automated deduction require reasoning in
first-order logic modulo background theories, in particular some
form of integer arithmetic. A major unsolved research challenge
is to design theorem provers that are "reasonably complete"
even in the presence of free function symbols ranging into a
background theory sort. The hierarchic superposition calculus
of Bachmair, Ganzinger, and Waldmann already supports such
symbols, but, as we demonstrate, not optimally. This paper aims
to rectify the situation by introducing a novel form of clause
abstraction, a core component in the hierarchic superposition
calculus for transforming clauses into a form needed for internal
operation. We argue for the benefits of the resulting calculus
and provide a new completeness result for the fragment where
all background-sorted terms are ground.
%B Research Report
%@ false
New Results for Non-preemptive Speed Scaling
C.-C. Huang and S. Ott
Technical Report, 2013
C.-C. Huang and S. Ott
Technical Report, 2013
Abstract
We consider the speed scaling problem introduced in the seminal paper of Yao et al. In this problem, a number of jobs, each with its own processing volume, release time, and deadline, need to be executed on a speed-scalable processor. The power consumption of this processor is $P(s) = s^\alpha$, where $s$ is the processing speed, and $\alpha > 1$ is a constant. The total energy consumption is power integrated over time, and the goal is to process all jobs while minimizing the energy consumption.
The preemptive version of the problem, along with its many variants, has been extensively studied over the years. However, little is known about the non-preemptive version of the problem, except that it is strongly NP-hard and admits a constant-factor approximation. Until now, the (general) complexity of this problem has been unknown. In the present paper, we study an important special case of the problem, where the job intervals form a laminar family, and present a quasipolynomial-time approximation scheme for it, thereby showing that (at least) this special case is not APX-hard, unless $NP \subseteq DTIME(2^{poly(\log n)})$.
The second contribution of this work is a polynomial-time algorithm for the special case of equal-volume jobs, where previously only a $2^\alpha$ approximation was known. In addition, we show that two other special cases of this problem allow fully polynomial-time approximation schemes (FPTASs).
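The energy model in the abstract has a useful consequence worth seeing concretely: because P(s) = s^alpha is convex for alpha > 1, running a job at constant speed over its available window is optimal. The sketch below checks this numerically on an invented single-job instance (the values V, T, alpha are arbitrary):

```python
# One job with volume V processed entirely inside a window of length T.
# Constant speed s = V / T gives energy T * (V/T)**alpha; by convexity of
# P(s) = s**alpha, no uneven division of the volume across the window
# can do better. We verify this by splitting the window in two halves.

alpha = 2.0

def energy(volume, time):
    """Energy to process `volume` at constant speed within `time`."""
    if time <= 0:
        return float("inf")
    speed = volume / time
    return time * speed ** alpha

V, T = 10.0, 4.0
uniform = energy(V, T)                 # constant-speed schedule: 25.0

# try every way of dividing the volume across the two half-windows
best_split = min(
    energy(v, T / 2) + energy(V - v, T / 2)
    for v in [V * k / 100 for k in range(101)]
)
# the minimum is attained at the even split, matching `uniform`
```

The same convexity argument is what makes speed scaling amenable to the combinatorial analysis the abstract describes: only how jobs share time windows matters, not how speed varies within a job's slot.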
Export
BibTeX
@techreport{HuangOtt2013,
TITLE = {New Results for Non-preemptive Speed Scaling},
AUTHOR = {Huang, Chien-Chung and Ott, Sebastian},
LANGUAGE = {eng},
ISSN = {0946-011X},
NUMBER = {MPI-I-2013-1-001},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2013},
ABSTRACT = {We consider the speed scaling problem introduced in the seminal paper of Yao et al. In this problem, a number of jobs, each with its own processing volume, release time, and deadline, need to be executed on a speed-scalable processor. The power consumption of this processor is $P(s) = s^\alpha$, where $s$ is the processing speed, and $\alpha > 1$ is a constant. The total energy consumption is power integrated over time, and the goal is to process all jobs while minimizing the energy consumption. The preemptive version of the problem, along with its many variants, has been extensively studied over the years. However, little is known about the non-preemptive version of the problem, except that it is strongly NP-hard and admits a constant-factor approximation. Until now, the (general) complexity of this problem has been unknown. In the present paper, we study an important special case of the problem, where the job intervals form a laminar family, and present a quasipolynomial-time approximation scheme for it, thereby showing that (at least) this special case is not APX-hard, unless $NP \subseteq DTIME(2^{poly(\log n)})$. The second contribution of this work is a polynomial-time algorithm for the special case of equal-volume jobs, where previously only a $2^\alpha$ approximation was known. In addition, we show that two other special cases of this problem allow fully polynomial-time approximation schemes (FPTASs).},
TYPE = {Research Reports},
}
Endnote
%0 Report
%A Huang, Chien-Chung
%A Ott, Sebastian
%+ External Organizations
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T New Results for Non-preemptive Speed Scaling :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0024-03BF-D
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2013
%P 32 p.
%X We consider the speed scaling problem introduced in the seminal paper of Yao et al. In this problem, a number of jobs, each with its own processing volume, release time, and deadline, need to be executed on a speed-scalable processor. The power consumption of this processor is $P(s) = s^\alpha$, where $s$ is the processing speed, and $\alpha > 1$ is a constant. The total energy consumption is power integrated over time, and the goal is to process all jobs while minimizing the energy consumption.
The preemptive version of the problem, along with its many variants, has been extensively studied over the years. However, little is known about the non-preemptive version of the problem, except that it is strongly NP-hard and admits a constant-factor approximation. Until now, the (general) complexity of this problem has been unknown. In the present paper, we study an important special case of the problem, where the job intervals form a laminar family, and present a quasipolynomial-time approximation scheme for it, thereby showing that (at least) this special case is not APX-hard, unless $NP \subseteq DTIME(2^{poly(\log n)})$.
The second contribution of this work is a polynomial-time algorithm for the special case of equal-volume jobs, where previously only a $2^\alpha$ approximation was known. In addition, we show that two other special cases of this problem allow fully polynomial-time approximation schemes (FPTASs).
%B Research Reports
%@ false
A Distributed Algorithm for Large-scale Generalized Matching
F. Makari, B. Awerbuch, R. Gemulla, R. Khandekar, J. Mestre and M. Sozio
Technical Report, 2013
F. Makari, B. Awerbuch, R. Gemulla, R. Khandekar, J. Mestre and M. Sozio
Technical Report, 2013
Abstract
Generalized matching problems arise in a number of applications, including
computational advertising, recommender systems, and trade markets. Consider,
for example, the problem of recommending multimedia items (e.g., DVDs) to
users such that (1) users are recommended items that they are likely to be
interested in, (2) every user gets neither too few nor too many
recommendations, and (3) only items available in stock are recommended to
users. State-of-the-art matching algorithms fail at coping with large
real-world instances, which may involve millions of users and items. We
propose the first distributed algorithm for computing near-optimal solutions
to large-scale generalized matching problems like the one above. Our algorithm
is designed to run on a small cluster of commodity nodes (or in a MapReduce
environment), has strong approximation guarantees, and requires only a
poly-logarithmic number of passes over the input. In particular, we propose a
novel distributed algorithm to approximately solve mixed packing-covering
linear programs, which include but are not limited to generalized matching
problems. Experiments on real-world and synthetic data suggest that our
algorithm scales to very large problem sizes and can be orders of magnitude
faster than alternative approaches.
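The recommendation scenario in the abstract (user interest scores, per-user recommendation bounds, item stock) can be made concrete with a tiny sequential heuristic. This is only an invented illustration of the constraint structure; the report's contribution is a distributed algorithm with approximation guarantees, not this greedy scan:

```python
# Hypothetical toy instance of generalized matching: assign items to users,
# respecting a per-user recommendation cap and per-item stock, greedily by
# descending predicted interest score.

scores = {               # (user, item) -> predicted interest, all made up
    ("u1", "dvd_a"): 0.9, ("u1", "dvd_b"): 0.4,
    ("u2", "dvd_a"): 0.8, ("u2", "dvd_b"): 0.7,
}
user_cap = {"u1": 1, "u2": 1}     # at most one recommendation per user
stock = {"dvd_a": 1, "dvd_b": 1}  # one copy of each item in stock

matching = []
for (user, item), score in sorted(scores.items(), key=lambda kv: -kv[1]):
    if user_cap[user] > 0 and stock[item] > 0:
        matching.append((user, item, score))
        user_cap[user] -= 1
        stock[item] -= 1
```

On this instance the greedy scan happens to find the optimum ("u1" gets "dvd_a", "u2" gets "dvd_b"); in general a single-machine greedy pass offers no such guarantee, which is why the report formulates the task as a mixed packing-covering linear program.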
Export
BibTeX
@techreport{MakariAwerbuchGemullaKhandekarMestreSozio2013,
TITLE = {A Distributed Algorithm for Large-scale Generalized Matching},
AUTHOR = {Makari, Faraz and Awerbuch, Baruch and Gemulla, Rainer and Khandekar, Rohit and Mestre, Julian and Sozio, Mauro},
LANGUAGE = {eng},
ISSN = {0946-011X},
NUMBER = {MPI-I-2013-5-002},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2013},
ABSTRACT = {Generalized matching problems arise in a number of applications, including computational advertising, recommender systems, and trade markets. Consider, for example, the problem of recommending multimedia items (e.g., DVDs) to users such that (1) users are recommended items that they are likely to be interested in, (2) every user gets neither too few nor too many recommendations, and (3) only items available in stock are recommended to users. State-of-the-art matching algorithms fail at coping with large real-world instances, which may involve millions of users and items. We propose the first distributed algorithm for computing near-optimal solutions to large-scale generalized matching problems like the one above. Our algorithm is designed to run on a small cluster of commodity nodes (or in a MapReduce environment), has strong approximation guarantees, and requires only a poly-logarithmic number of passes over the input. In particular, we propose a novel distributed algorithm to approximately solve mixed packing-covering linear programs, which include but are not limited to generalized matching problems. Experiments on real-world and synthetic data suggest that our algorithm scales to very large problem sizes and can be orders of magnitude faster than alternative approaches.},
TYPE = {Research Reports},
}
Endnote
%0 Report
%A Makari, Faraz
%A Awerbuch, Baruch
%A Gemulla, Rainer
%A Khandekar, Rohit
%A Mestre, Julian
%A Sozio, Mauro
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society
External Organizations
Databases and Information Systems, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
%T A Distributed Algorithm for Large-scale Generalized Matching :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0024-03B4-3
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2013
%P 39 p.
%X Generalized matching problems arise in a number of applications, including
computational advertising, recommender systems, and trade markets. Consider,
for example, the problem of recommending multimedia items (e.g., DVDs) to
users such that (1) users are recommended items that they are likely to be
interested in, (2) every user gets neither too few nor too many
recommendations, and (3) only items available in stock are recommended to
users. State-of-the-art matching algorithms fail at coping with large
real-world instances, which may involve millions of users and items. We
propose the first distributed algorithm for computing near-optimal solutions
to large-scale generalized matching problems like the one above. Our algorithm
is designed to run on a small cluster of commodity nodes (or in a MapReduce
environment), has strong approximation guarantees, and requires only a
poly-logarithmic number of passes over the input. In particular, we propose a
novel distributed algorithm to approximately solve mixed packing-covering
linear programs, which include but are not limited to generalized matching
problems. Experiments on real-world and synthetic data suggest that our
algorithm scales to very large problem sizes and can be orders of magnitude
faster than alternative approaches.
%B Research Reports
%@ false
2012
Building and Maintaining Halls of Fame Over a Database
F. Alvanaki, S. Michel and A. Stupar
Technical Report, 2012
F. Alvanaki, S. Michel and A. Stupar
Technical Report, 2012
Abstract
Halls of Fame are fascinating constructs. They represent the elite of an often
very large number of entities: persons, companies, products, countries, etc.
Beyond their practical use as static rankings, changes to them are particularly
interesting: for decision-making processes, as input to common media or
novel narrative science applications, or simply consumed by users. In this
work, we aim at detecting events that can be characterized by changes to a
Hall of Fame ranking in an automated way. We describe how the schema and
data of a database can be used to generate Halls of Fame. In this database
scenario, by Hall of Fame we refer to distinguished tuples; entities whose
characteristics set them apart from the majority. We define every Hall of
Fame as one specific instance of an SQL query, such that a change in its
result is considered a noteworthy event. Identified changes (i.e., events) are
ranked using lexicographic tradeoffs over event and query properties and
presented to users or fed into higher-level applications. We have implemented
a full-fledged prototype system that uses either database triggers or a Java-based
middleware for event identification. We report on an experimental
evaluation using a real-world dataset of basketball statistics.
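The core idea (a Hall of Fame as one SQL query whose result change is an event) fits in a few lines. The schema and data below are invented for illustration, loosely echoing the basketball setting mentioned in the abstract:

```python
import sqlite3

# A "Hall of Fame" as a single SQL query: the top-2 career points leaders.
# An event is any change in the query's result between two database states.

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE players (name TEXT, points INTEGER)")
db.executemany("INSERT INTO players VALUES (?, ?)",
               [("Alice", 300), ("Bob", 250), ("Carol", 200)])

HOF = "SELECT name FROM players ORDER BY points DESC LIMIT 2"

before = [row[0] for row in db.execute(HOF)]      # ["Alice", "Bob"]
db.execute("UPDATE players SET points = points + 120 WHERE name = 'Carol'")
after = [row[0] for row in db.execute(HOF)]       # ["Carol", "Alice"]

event = before != after    # True: a noteworthy change to the ranking
```

The prototype described in the report identifies such changes either via database triggers or via middleware; polling the query as above is merely the simplest way to demonstrate the definition of an event.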
Export
BibTeX
@techreport{AlvanakiMichelStupar2012,
TITLE = {Building and Maintaining Halls of Fame Over a Database},
AUTHOR = {Alvanaki, Foteini and Michel, Sebastian and Stupar, Aleksandar},
LANGUAGE = {eng},
ISSN = {0946-011X},
NUMBER = {MPI-I-2012-5-004},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2012},
ABSTRACT = {Halls of Fame are fascinating constructs. They represent the elite of an often very large number of entities: persons, companies, products, countries, etc. Beyond their practical use as static rankings, changes to them are particularly interesting: for decision-making processes, as input to common media or novel narrative science applications, or simply consumed by users. In this work, we aim at detecting events that can be characterized by changes to a Hall of Fame ranking in an automated way. We describe how the schema and data of a database can be used to generate Halls of Fame. In this database scenario, by Hall of Fame we refer to distinguished tuples; entities whose characteristics set them apart from the majority. We define every Hall of Fame as one specific instance of an SQL query, such that a change in its result is considered a noteworthy event. Identified changes (i.e., events) are ranked using lexicographic tradeoffs over event and query properties and presented to users or fed into higher-level applications. We have implemented a full-fledged prototype system that uses either database triggers or a Java-based middleware for event identification. We report on an experimental evaluation using a real-world dataset of basketball statistics.},
TYPE = {Research Reports},
}
Endnote
%0 Report
%A Alvanaki, Foteini
%A Michel, Sebastian
%A Stupar, Aleksandar
%+ Cluster of Excellence Multimodal Computing and Interaction
Databases and Information Systems, MPI for Informatics, Max Planck Society
Cluster of Excellence Multimodal Computing and Interaction
%T Building and Maintaining Halls of Fame Over a Database :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0024-03E9-D
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2012
%X Halls of Fame are fascinating constructs. They represent the elite of an often
very large number of entities: persons, companies, products, countries, etc.
Beyond their practical use as static rankings, changes to them are particularly
interesting: for decision-making processes, as input to common media or
novel narrative science applications, or simply consumed by users. In this
work, we aim at detecting events that can be characterized by changes to a
Hall of Fame ranking in an automated way. We describe how the schema and
data of a database can be used to generate Halls of Fame. In this database
scenario, by Hall of Fame we refer to distinguished tuples; entities whose
characteristics set them apart from the majority. We define every Hall of
Fame as one specific instance of an SQL query, such that a change in its
result is considered a noteworthy event. Identified changes (i.e., events) are
ranked using lexicographic tradeoffs over event and query properties and
presented to users or fed into higher-level applications. We have implemented
a full-fledged prototype system that uses either database triggers or a Java-based
middleware for event identification. We report on an experimental
evaluation using a real-world dataset of basketball statistics.
%B Research Reports
%@ false
Computing n-Gram Statistics in MapReduce
K. Berberich and S. Bedathur
Technical Report, 2012
K. Berberich and S. Bedathur
Technical Report, 2012
Export
BibTeX
@techreport{BerberichBedathur2012,
TITLE = {Computing n--Gram Statistics in {MapReduce}},
AUTHOR = {Berberich, Klaus and Bedathur, Srikanta},
LANGUAGE = {eng},
ISSN = {0946-011X},
NUMBER = {MPI-I-2012-5-003},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2012},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Berberich, Klaus
%A Bedathur, Srikanta
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society
External Organizations
%T Computing n-Gram Statistics in MapReduce :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0024-0416-A
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2012
%P 39 p.
%B Research Report
%@ false
Top-k Query Processing in Probabilistic Databases with Non-materialized Views
M. Dylla, I. Miliaraki and M. Theobald
Technical Report, 2012
M. Dylla, I. Miliaraki and M. Theobald
Technical Report, 2012
Export
BibTeX
@techreport{DyllaTopk2012,
TITLE = {Top-k Query Processing in Probabilistic Databases with Non-materialized Views},
AUTHOR = {Dylla, Maximilian and Miliaraki, Iris and Theobald, Martin},
LANGUAGE = {eng},
ISSN = {0946-011X},
NUMBER = {MPI-I-2012-5-002},
LOCALID = {Local-ID: 62EC1C9C96B8EFF4C1257B560029F18C-DyllaTopk2012},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2012},
DATE = {2012},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Dylla, Maximilian
%A Miliaraki, Iris
%A Theobald, Martin
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
%T Top-k Query Processing in Probabilistic Databases with Non-materialized Views :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B02F-2
%F OTHER: Local-ID: 62EC1C9C96B8EFF4C1257B560029F18C-DyllaTopk2012
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2012
%B Research Report
%@ false
Automatic Generation of Invariants for Circular Derivations in SUP(LA) 1
A. Fietzke, E. Kruglov and C. Weidenbach
Technical Report, 2012
A. Fietzke, E. Kruglov and C. Weidenbach
Technical Report, 2012
Abstract
The hierarchic combination of linear arithmetic and first-order
logic with free function symbols, FOL(LA), results in a strictly
more expressive logic than its two parts. The SUP(LA) calculus can be
turned into a decision procedure for interesting fragments of FOL(LA).
For example, reachability problems for timed automata can be decided
by SUP(LA) using an appropriate translation into FOL(LA). In this paper,
we extend the SUP(LA) calculus with an additional inference rule,
automatically generating inductive invariants from partial SUP(LA)
derivations. The rule enables decidability of more expressive fragments,
including reachability for timed automata with unbounded integer variables.
We have implemented the rule in the SPASS(LA) theorem prover
with promising results, showing that it can considerably speed up proof
search and enable termination of saturation for practically relevant
problems.
Export
BibTeX
@techreport{FietzkeKruglovWeidenbach2012,
TITLE = {Automatic Generation of Invariants for Circular Derivations in {SUP(LA)} 1},
AUTHOR = {Fietzke, Arnaud and Kruglov, Evgeny and Weidenbach, Christoph},
LANGUAGE = {eng},
ISSN = {0946-011X},
NUMBER = {MPI-I-2012-RG1-002},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2012},
ABSTRACT = {The hierarchic combination of linear arithmetic and first-order logic with free function symbols, FOL(LA), results in a strictly more expressive logic than its two parts. The SUP(LA) calculus can be turned into a decision procedure for interesting fragments of FOL(LA). For example, reachability problems for timed automata can be decided by SUP(LA) using an appropriate translation into FOL(LA). In this paper, we extend the SUP(LA) calculus with an additional inference rule, automatically generating inductive invariants from partial SUP(LA) derivations. The rule enables decidability of more expressive fragments, including reachability for timed automata with unbounded integer variables. We have implemented the rule in the SPASS(LA) theorem prover with promising results, showing that it can considerably speed up proof search and enable termination of saturation for practically relevant problems.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Fietzke, Arnaud
%A Kruglov, Evgeny
%A Weidenbach, Christoph
%+ Automation of Logic, MPI for Informatics, Max Planck Society
Automation of Logic, MPI for Informatics, Max Planck Society
Automation of Logic, MPI for Informatics, Max Planck Society
%T Automatic Generation of Invariants for Circular Derivations in SUP(LA) 1 :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0024-03CF-9
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2012
%P 26 p.
%X The hierarchic combination of linear arithmetic and first-order
logic with free function symbols, FOL(LA), results in a strictly
more expressive logic than its two parts. The SUP(LA) calculus can be
turned into a decision procedure for interesting fragments of FOL(LA).
For example, reachability problems for timed automata can be decided
by SUP(LA) using an appropriate translation into FOL(LA). In this paper,
we extend the SUP(LA) calculus with an additional inference rule,
automatically generating inductive invariants from partial SUP(LA)
derivations. The rule enables decidability of more expressive fragments,
including reachability for timed automata with unbounded integer variables.
We have implemented the rule in the SPASS(LA) theorem prover
with promising results, showing that it can considerably speed up proof
search and enable termination of saturation for practically relevant
problems.
%B Research Report
%@ false
Symmetry Detection in Large Scale City Scans
J. Kerber, M. Wand, M. Bokeloh and H.-P. Seidel
Technical Report, 2012
J. Kerber, M. Wand, M. Bokeloh and H.-P. Seidel
Technical Report, 2012
Abstract
In this report we present a novel method for detecting partial symmetries
in very large point clouds of 3D city scans. Unlike previous work, which
was limited to data sets of a few hundred megabytes maximum, our method
scales to very large scenes. We map the detection problem to a nearest-neighbor
search in a low-dimensional feature space, followed by a cascade of
tests for geometric clustering of potential matches. Our algorithm robustly
handles noisy real-world scanner data, obtaining a recognition performance
comparable to state-of-the-art methods. In practice, it scales linearly with
the scene size and achieves a high absolute throughput, processing half a
terabyte of raw scanner data overnight on a dual-socket commodity PC.
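The matching step described above (nearest neighbors in a low-dimensional feature space, then geometric verification) can be sketched in miniature. The patch names and feature vectors below are invented, and a real pipeline would use geometric descriptors and a spatial index rather than this brute-force scan:

```python
# Map local patches of a scan to low-dimensional feature vectors and report
# pairs whose features nearly coincide as candidate partial symmetries.

patches = {                          # patch name -> 2-D feature (made up)
    "window_left":  (0.82, 0.10),
    "window_right": (0.81, 0.11),    # near-duplicate feature: symmetric pair
    "door":         (0.20, 0.55),
    "roof_edge":    (0.47, 0.33),
}

def dist2(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

threshold = 0.02
candidates = [
    (p, q)
    for i, (p, fp) in enumerate(patches.items())
    for q, fq in list(patches.items())[i + 1:]
    if dist2(fp, fq) < threshold ** 2
]
# candidates now holds the symmetric pair of window patches
```

In the actual method, such candidate pairs would then pass through the cascade of geometric clustering tests mentioned in the abstract before being accepted as symmetries.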
Export
BibTeX
@techreport{KerberBokelohWandSeidel2012,
TITLE = {Symmetry Detection in Large Scale City Scans},
AUTHOR = {Kerber, Jens and Wand, Michael and Bokeloh, Martin and Seidel, Hans-Peter},
LANGUAGE = {eng},
ISSN = {0946-011X},
NUMBER = {MPI-I-2012-4-001},
YEAR = {2012},
ABSTRACT = {In this report we present a novel method for detecting partial symmetries in very large point clouds of 3D city scans. Unlike previous work, which was limited to data sets of a few hundred megabytes maximum, our method scales to very large scenes. We map the detection problem to a nearest-neighbor search in a low-dimensional feature space, followed by a cascade of tests for geometric clustering of potential matches. Our algorithm robustly handles noisy real-world scanner data, obtaining a recognition performance comparable to state-of-the-art methods. In practice, it scales linearly with the scene size and achieves a high absolute throughput, processing half a terabyte of raw scanner data overnight on a dual-socket commodity PC.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Kerber, Jens
%A Wand, Michael
%A Bokeloh, Martin
%A Seidel, Hans-Peter
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
External Organizations
Computer Graphics, MPI for Informatics, Max Planck Society
%T Symmetry Detection in Large Scale City Scans :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0024-0427-4
%D 2012
%P 32 p.
%X In this report we present a novel method for detecting partial symmetries
in very large point clouds of 3D city scans. Unlike previous work, which
was limited to data sets of a few hundred megabytes maximum, our method
scales to very large scenes. We map the detection problem to a nearest-neighbor
search in a low-dimensional feature space, followed by a cascade of
tests for geometric clustering of potential matches. Our algorithm robustly
handles noisy real-world scanner data, obtaining a recognition performance
comparable to state-of-the-art methods. In practice, it scales linearly with
the scene size and achieves a high absolute throughput, processing half a
terabyte of raw scanner data overnight on a dual-socket commodity PC.
%B Research Report
%@ false
MDL4BMF: Minimum Description Length for Boolean Matrix Factorization
P. Miettinen and J. Vreeken
Technical Report, 2012
P. Miettinen and J. Vreeken
Technical Report, 2012
Abstract
Matrix factorizations—where a given data matrix is approximated by a product of two or more factor matrices—are powerful data mining tools. Among other tasks, matrix factorizations are often used to separate global structure from noise. This, however, requires solving the ‘model order selection problem’ of determining where fine-grained structure stops, and noise starts, i.e., what is the proper size of the factor matrices.
Boolean matrix factorization (BMF)—where data, factors, and matrix product are Boolean—has received increased attention from the data mining community in recent years. The technique has desirable properties, such as high interpretability and natural sparsity. However, so far no method for selecting the correct model order for BMF has been available. In this paper we propose to use the Minimum Description Length (MDL) principle for this task. Besides solving the problem, this well-founded approach has numerous benefits, e.g., it is automatic, does not require a likelihood function, is fast, and, as experiments show, is highly accurate.
We formulate the description length function for BMF in general—making it applicable for any BMF algorithm. We discuss how to construct an appropriate encoding: starting from a simple and intuitive approach, we arrive at a highly efficient data-to-model based encoding for BMF. We extend an existing algorithm for BMF to use MDL to identify the best Boolean matrix factorization, analyze the complexity of the problem, and perform an extensive experimental evaluation to study its behavior.
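The model order selection described above can be illustrated with a toy two-part MDL score: bits for the factors plus bits for the error matrix. This sketch uses a naive per-matrix Bernoulli code, not the data-to-model encoding developed in the report, and all function names are ours:

```python
import numpy as np

def encoded_bits(M):
    """Naive code length for a binary matrix: transmit the number of ones,
    then the cells under the implied Bernoulli model (n * binary entropy)."""
    n = M.size
    k = int(M.sum())
    bits = np.log2(n + 1)  # cost of transmitting k
    if 0 < k < n:
        p = k / n
        bits += n * (-p * np.log2(p) - (1 - p) * np.log2(1 - p))
    return bits

def mdl_score(A, B, C):
    """Two-part MDL score for the Boolean factorization A ~ B o C:
    bits for the factors plus bits for the error E = A XOR (B o C)."""
    product = (B.astype(int) @ C.astype(int) > 0).astype(int)  # Boolean product
    E = np.bitwise_xor(A, product)
    return encoded_bits(B) + encoded_bits(C) + encoded_bits(E)

A = np.array([[1, 1, 0],
              [1, 1, 1],
              [0, 0, 1]])
B = np.array([[1, 0], [1, 1], [0, 1]])   # rank-2 factors reproducing A exactly
C = np.array([[1, 1, 0], [0, 0, 1]])
score = mdl_score(A, B, C)
```

Among candidate factorizations of different ranks, one would pick the pair (B, C) minimizing `mdl_score`.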
Export
BibTeX
@techreport{MiettinenVreeken,
TITLE = {{MDL4BMF}: Minimum Description Length for Boolean Matrix Factorization},
AUTHOR = {Miettinen, Pauli and Vreeken, Jilles},
LANGUAGE = {eng},
ISSN = {0946-011X},
NUMBER = {MPI-I-2012-5-001},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2012},
ABSTRACT = {Matrix factorizations---where a given data matrix is approximated by a prod- uct of two or more factor matrices---are powerful data mining tools. Among other tasks, matrix factorizations are often used to separate global structure from noise. This, however, requires solving the {\textquoteleft}model order selection problem{\textquoteright} of determining where fine-grained structure stops, and noise starts, i.e., what is the proper size of the factor matrices. Boolean matrix factorization (BMF)---where data, factors, and matrix product are Boolean---has received increased attention from the data mining community in recent years. The technique has desirable properties, such as high interpretability and natural sparsity. However, so far no method for selecting the correct model order for BMF has been available. In this paper we propose to use the Minimum Description Length (MDL) principle for this task. Besides solving the problem, this well-founded approach has numerous benefits, e.g., it is automatic, does not require a likelihood function, is fast, and, as experiments show, is highly accurate. We formulate the description length function for BMF in general---making it applicable for any BMF algorithm. We discuss how to construct an appropriate encoding: starting from a simple and intuitive approach, we arrive at a highly efficient data-to-model based encoding for BMF. We extend an existing algorithm for BMF to use MDL to identify the best Boolean matrix factorization, analyze the complexity of the problem, and perform an extensive experimental evaluation to study its behavior.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Miettinen, Pauli
%A Vreeken, Jilles
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society
External Organizations
%T MDL4BMF: Minimum Description Length for Boolean Matrix Factorization :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0024-0422-E
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2012
%P 48 p.
%X Matrix factorizations—where a given data matrix is approximated by a product of two or more factor matrices—are powerful data mining tools. Among other tasks, matrix factorizations are often used to separate global structure from noise. This, however, requires solving the ‘model order selection problem’ of determining where fine-grained structure stops, and noise starts, i.e., what is the proper size of the factor matrices.
Boolean matrix factorization (BMF)—where data, factors, and matrix product are Boolean—has received increased attention from the data mining community in recent years. The technique has desirable properties, such as high interpretability and natural sparsity. However, so far no method for selecting the correct model order for BMF has been available. In this paper we propose to use the Minimum Description Length (MDL) principle for this task. Besides solving the problem, this well-founded approach has numerous benefits, e.g., it is automatic, does not require a likelihood function, is fast, and, as experiments show, is highly accurate.
We formulate the description length function for BMF in general—making it applicable for any BMF algorithm. We discuss how to construct an appropriate encoding: starting from a simple and intuitive approach, we arrive at a highly efficient data-to-model based encoding for BMF. We extend an existing algorithm for BMF to use MDL to identify the best Boolean matrix factorization, analyze the complexity of the problem, and perform an extensive experimental evaluation to study its behavior.
%B Research Report
%@ false
Labelled Superposition for PLTL
M. Suda and C. Weidenbach
Technical Report, 2012
M. Suda and C. Weidenbach
Technical Report, 2012
Abstract
This paper introduces a new decision procedure for PLTL based on labelled
superposition.
Its main idea is to treat temporal formulas as infinite sets of purely
propositional clauses over an extended signature. These infinite sets are then
represented by finite sets of labelled propositional clauses. The new
representation enables the replacement of the complex temporal resolution
rule, suggested by existing resolution calculi for PLTL, by a fine grained
repetition check of finitely saturated labelled clause sets followed by a
simple inference. The completeness argument is based on the standard model
building idea from superposition. It inherently justifies ordering
restrictions, redundancy elimination and effective partial model building. The
latter can be directly used to effectively generate counterexamples of
non-valid PLTL conjectures out of saturated labelled clause sets in a
straightforward way.
Export
BibTeX
@techreport{SudaWeidenbachLPAR2012,
TITLE = {Labelled Superposition for {PLTL}},
AUTHOR = {Suda, Martin and Weidenbach, Christoph},
LANGUAGE = {eng},
ISSN = {0946-011X},
NUMBER = {MPI-I-2012-RG1-001},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2012},
ABSTRACT = {This paper introduces a new decision procedure for PLTL based on labelled superposition. Its main idea is to treat temporal formulas as infinite sets of purely propositional clauses over an extended signature. These infinite sets are then represented by finite sets of labelled propositional clauses. The new representation enables the replacement of the complex temporal resolution rule, suggested by existing resolution calculi for PLTL, by a fine grained repetition check of finitely saturated labelled clause sets followed by a simple inference. The completeness argument is based on the standard model building idea from superposition. It inherently justifies ordering restrictions, redundancy elimination and effective partial model building. The latter can be directly used to effectively generate counterexamples of non-valid PLTL conjectures out of saturated labelled clause sets in a straightforward way.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Suda, Martin
%A Weidenbach, Christoph
%+ Automation of Logic, MPI for Informatics, Max Planck Society
Automation of Logic, MPI for Informatics, Max Planck Society
%T Labelled Superposition for PLTL :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0024-03DC-B
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2012
%P 42 p.
%X This paper introduces a new decision procedure for PLTL based on labelled
superposition.
Its main idea is to treat temporal formulas as infinite sets of purely
propositional clauses over an extended signature. These infinite sets are then
represented by finite sets of labelled propositional clauses. The new
representation enables the replacement of the complex temporal resolution
rule, suggested by existing resolution calculi for PLTL, by a fine grained
repetition check of finitely saturated labelled clause sets followed by a
simple inference. The completeness argument is based on the standard model
building idea from superposition. It inherently justifies ordering
restrictions, redundancy elimination and effective partial model building. The
latter can be directly used to effectively generate counterexamples of
non-valid PLTL conjectures out of saturated labelled clause sets in a
straightforward way.
%B Research Report
%@ false
2011
Temporal Index Sharding for Space-time Efficiency in Archive Search
A. Anand, S. Bedathur, K. Berberich and R. Schenkel
Technical Report, 2011
A. Anand, S. Bedathur, K. Berberich and R. Schenkel
Technical Report, 2011
Abstract
Time-travel queries that couple temporal constraints with keyword
queries are useful in searching large-scale archives of time-evolving
content such as the Web, document collections, wikis, and so on.
Typical approaches for efficient evaluation of these queries involve
slicing along the time axis either the entire collection [253349] or
individual index lists [kberberi:sigir2007]. Both methods are
unsatisfactory, since they sacrifice index compactness for processing
efficiency, making the index either too big or too slow.
We present a novel index organization scheme that shards the index
with zero increase in index size, still minimizing the cost of reading
index entries during query processing. Based on the optimal sharding
thus obtained, we develop a practically efficient sharding that takes
into account the different costs of random and sequential accesses.
Our algorithm merges shards from the optimal solution carefully to
allow for a few extra sequential accesses while gaining significantly
by reducing random accesses. Finally, we empirically establish the
effectiveness of our novel sharding scheme via detailed experiments
over the edit history of the English version of Wikipedia between 2001
and 2005 (approx. 700 GB) and an archive of the UK governmental web
sites (approx. 400 GB). Our results demonstrate the feasibility of
faster time-travel query processing with no space overhead.
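The random-versus-sequential trade-off behind the shard merging can be sketched with a toy cost model. The seek and read costs, the greedy boundary-dropping loop, and all names below are illustrative assumptions, not the report's optimal sharding algorithm:

```python
C_R, C_S = 50.0, 1.0  # hypothetical costs: one random seek vs. one sequential entry read

def shard_spans(boundaries, n):
    """Turn sorted shard start indices into (start, end) half-open spans over n time slices."""
    return [(boundaries[i], boundaries[i + 1] if i + 1 < len(boundaries) else n)
            for i in range(len(boundaries))]

def workload_cost(boundaries, sizes, queries):
    """A query [a, b) pays one seek per overlapped shard and reads each
    overlapped shard in full, sequentially."""
    n = len(sizes)
    total = 0.0
    for a, b in queries:
        for s, e in shard_spans(boundaries, n):
            if s < b and a < e:  # shard overlaps the query's time range
                total += C_R + C_S * sum(sizes[s:e])
    return total

def greedy_merge(boundaries, sizes, queries):
    """Drop one shard boundary at a time (i.e., merge adjacent shards)
    as long as doing so lowers the workload cost."""
    boundaries = list(boundaries)
    improved = True
    while improved and len(boundaries) > 1:
        improved = False
        best = workload_cost(boundaries, sizes, queries)
        for i in range(1, len(boundaries)):
            cand = boundaries[:i] + boundaries[i + 1:]
            if workload_cost(cand, sizes, queries) < best:
                boundaries, improved = cand, True
                break
    return boundaries

sizes = [10, 10, 10, 10]  # entries per time slice (toy numbers)
merged = greedy_merge([0, 1, 2, 3], sizes, queries=[(0, 4)])
```

A query spanning all slices makes merging everything into one shard worthwhile (fewer seeks, no wasted reads), whereas a narrow query next to a huge shard keeps the boundary.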
Export
BibTeX
@techreport{Bedathur2011,
TITLE = {Temporal Index Sharding for Space-time Efficiency in Archive Search},
AUTHOR = {Anand, Avishek and Bedathur, Srikanta and Berberich, Klaus and Schenkel, Ralf},
LANGUAGE = {eng},
ISSN = {0946-011X},
NUMBER = {MPI-I-2011-5-001},
INSTITUTION = {Universit{\"a}t des Saarlandes},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2011},
DATE = {2011},
ABSTRACT = {Time-travel queries that couple temporal constraints with keyword queries are useful in searching large-scale archives of time-evolving content such as the Web, document collections, wikis, and so on. Typical approaches for efficient evaluation of these queries involve \emph{slicing} along the time-axis either the entire collection~\cite{253349}, or individual index lists~\cite{kberberi:sigir2007}. Both these methods are not satisfactory since they sacrifice compactness of index for processing efficiency making them either too big or, otherwise, too slow. We present a novel index organization scheme that \emph{shards} the index with \emph{zero increase in index size}, still minimizing the cost of reading index entries during query processing. Based on the optimal sharding thus obtained, we develop practically efficient sharding that takes into account the different costs of random and sequential accesses. Our algorithm merges shards from the optimal solution carefully to allow for few extra sequential accesses while gaining significantly by reducing the random accesses. Finally, we empirically establish the effectiveness of our novel sharding scheme via detailed experiments over the edit history of the English version of Wikipedia between 2001-2005 ($\approx$ 700 GB) and an archive of the UK governmental web sites ($\approx$ 400 GB). Our results demonstrate the feasibility of faster time-travel query processing with no space overhead.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Anand, Avishek
%A Bedathur, Srikanta
%A Berberich, Klaus
%A Schenkel, Ralf
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
%T Temporal Index Sharding for Space-time Efficiency in Archive Search :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0025-7311-D
%Y Universität des Saarlandes
%C Saarbrücken
%D 2011
%X Time-travel queries that couple temporal constraints with keyword
queries are useful in searching large-scale archives of time-evolving
content such as the Web, document collections, wikis, and so
on. Typical approaches for efficient evaluation of these queries
involve \emph{slicing} along the time-axis either the entire
collection~\cite{253349}, or individual index
lists~\cite{kberberi:sigir2007}. Both these methods are not
satisfactory since they sacrifice compactness of index for processing
efficiency making them either too big or, otherwise, too slow.
We present a novel index organization scheme that \emph{shards} the
index with \emph{zero increase in index size}, still minimizing the
cost of reading index entries during query processing. Based on
the optimal sharding thus obtained, we develop practically efficient
sharding that takes into account the different costs of random and
sequential accesses. Our algorithm merges shards from the optimal
solution carefully to allow for few extra sequential accesses while
gaining significantly by reducing the random accesses. Finally, we
empirically establish the effectiveness of our novel sharding scheme
via detailed experiments over the edit history of the English version
of Wikipedia between 2001-2005 ($\approx$ 700 GB) and an archive of
the UK governmental web sites ($\approx$ 400 GB). Our results
demonstrate the feasibility of faster time-travel query processing
with no space overhead.
%B Research Report
%@ false
A Morphable Part Model for Shape Manipulation
A. Berner, O. Burghard, M. Wand, N. Mitra, R. Klein and H.-P. Seidel
Technical Report, 2011
A. Berner, O. Burghard, M. Wand, N. Mitra, R. Klein and H.-P. Seidel
Technical Report, 2011
Abstract
We introduce morphable part models for smart shape manipulation using an assembly
of deformable parts with appropriate boundary conditions. In an analysis
phase, we characterize the continuous allowable variations both for the individual
parts and their interconnections using Gaussian shape models with low
rank covariance. The discrete aspect of how parts can be assembled is captured
using a shape grammar. The parts and their interconnection rules are learned
semi-automatically from symmetries within a single object or from semantically
corresponding parts across a larger set of example models. The learned discrete
and continuous structure is encoded as a graph. In the interaction phase, we
obtain an interactive yet intuitive shape deformation framework producing realistic
deformations on classes of objects that are difficult to edit using existing
structure-aware deformation techniques. Unlike previous techniques, our method
uses self-similarities from a single model as training input and allows the user
to reassemble the identified parts in new configurations, thus exploiting both the
discrete and continuous learned variations while ensuring appropriate boundary
conditions across part boundaries.
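A minimal sketch of a low-rank Gaussian part model in the spirit of the analysis phase (generic PCA on flattened example geometries; the function names and fitting details are our assumptions, not the report's construction):

```python
import numpy as np

def fit_shape_model(examples, rank):
    """Fit a low-rank Gaussian model to flattened part geometries (one row
    per example): mean shape plus the top `rank` principal directions,
    scaled by their standard deviation."""
    X = np.asarray(examples, dtype=float)
    mu = X.mean(axis=0)
    _, S, Vt = np.linalg.svd(X - mu, full_matrices=False)
    basis = Vt[:rank] * (S[:rank, None] / np.sqrt(max(len(X) - 1, 1)))
    return mu, basis

def sample_shape(mu, basis, z):
    """Deform the mean shape along the learned modes with coefficients z
    (z ~ N(0, I) yields plausible variations under the model)."""
    return mu + z @ basis

# Toy data: four "parts" varying only along the first coordinate.
examples = [[0.0, 0.0], [2.0, 0.0], [4.0, 0.0], [6.0, 0.0]]
mu, basis = fit_shape_model(examples, rank=1)
shape = sample_shape(mu, basis, np.array([1.0]))
```

Interconnections between parts would add boundary constraints on top of such per-part models; that machinery is beyond this sketch.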
Export
BibTeX
@techreport{BernerBurghardWandMitraKleinSeidel2011,
TITLE = {A Morphable Part Model for Shape Manipulation},
AUTHOR = {Berner, Alexander and Burghard, Oliver and Wand, Michael and Mitra, Niloy and Klein, Reinhard and Seidel, Hans-Peter},
LANGUAGE = {eng},
ISSN = {0946-011X},
NUMBER = {MPI-I-2011-4-005},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2011},
DATE = {2011},
ABSTRACT = {We introduce morphable part models for smart shape manipulation using an assembly of deformable parts with appropriate boundary conditions. In an analysis phase, we characterize the continuous allowable variations both for the individual parts and their interconnections using Gaussian shape models with low rank covariance. The discrete aspect of how parts can be assembled is captured using a shape grammar. The parts and their interconnection rules are learned semi-automatically from symmetries within a single object or from semantically corresponding parts across a larger set of example models. The learned discrete and continuous structure is encoded as a graph. In the interaction phase, we obtain an interactive yet intuitive shape deformation framework producing realistic deformations on classes of objects that are difficult to edit using existing structure-aware deformation techniques. Unlike previous techniques, our method uses self-similarities from a single model as training input and allows the user to reassemble the identified parts in new configurations, thus exploiting both the discrete and continuous learned variations while ensuring appropriate boundary conditions across part boundaries.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Berner, Alexander
%A Burghard, Oliver
%A Wand, Michael
%A Mitra, Niloy
%A Klein, Reinhard
%A Seidel, Hans-Peter
%+ External Organizations
External Organizations
Computer Graphics, MPI for Informatics, Max Planck Society
External Organizations
External Organizations
Computer Graphics, MPI for Informatics, Max Planck Society
%T A Morphable Part Model for Shape Manipulation :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6972-0
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2011
%P 33 p.
%X We introduce morphable part models for smart shape manipulation using an assembly
of deformable parts with appropriate boundary conditions. In an analysis
phase, we characterize the continuous allowable variations both for the individual
parts and their interconnections using Gaussian shape models with low
rank covariance. The discrete aspect of how parts can be assembled is captured
using a shape grammar. The parts and their interconnection rules are learned
semi-automatically from symmetries within a single object or from semantically
corresponding parts across a larger set of example models. The learned discrete
and continuous structure is encoded as a graph. In the interaction phase, we
obtain an interactive yet intuitive shape deformation framework producing realistic
deformations on classes of objects that are difficult to edit using existing
structure-aware deformation techniques. Unlike previous techniques, our method
uses self-similarities from a single model as training input and allows the user
to reassemble the identified parts in new configurations, thus exploiting both the
discrete and continuous learned variations while ensuring appropriate boundary
conditions across part boundaries.
%B Research Report
%@ false
PTIME Parametric Verification of Safety Properties for Reasonable Linear Hybrid Automata
W. Damm, C. Ihlemann and V. Sofronie-Stokkermans
Technical Report, 2011
W. Damm, C. Ihlemann and V. Sofronie-Stokkermans
Technical Report, 2011
Abstract
This paper identifies an industrially relevant class of
linear hybrid automata (LHA) called reasonable LHA for
which parametric verification of convex safety properties
with exhaustive entry states can be verified in polynomial
time and time-bounded reachability can be decided
in nondeterministic polynomial time for non-parametric
verification and in exponential time for
parametric verification. Properties with exhaustive entry
states are restricted to runs originating in
a (specified) inner envelope of some mode-invariant.
Deciding whether an LHA is reasonable is
shown to be decidable in polynomial time.
Export
BibTeX
@techreport{Damm-Ihlemann-Sofronie-Stokkermans2011-report,
TITLE = {{PTIME} Parametric Verification of Safety Properties for Reasonable Linear Hybrid Automata},
AUTHOR = {Damm, Werner and Ihlemann, Carsten and Sofronie-Stokkermans, Viorica},
LANGUAGE = {eng},
ISSN = {1860-9821},
NUMBER = {ATR70},
LOCALID = {Local-ID: C125716C0050FB51-DEB90D4E9EAE27B7C1257855003AF8EE-Damm-Ihlemann-Sofronie-Stokkermans2011-report},
INSTITUTION = {SFB/TR 14 AVACS},
YEAR = {2011},
DATE = {2011},
ABSTRACT = {This paper identifies an industrially relevant class of linear hybrid automata (LHA) called reasonable LHA for which parametric verification of convex safety properties with exhaustive entry states can be verified in polynomial time and time-bounded reachability can be decided in nondeterministic polynomial time for non-parametric verification and in exponential time for parametric verification. Properties with exhaustive entry states are restricted to runs originating in a (specified) inner envelope of some mode-invariant. Deciding whether an LHA is reasonable is shown to be decidable in polynomial time.},
TYPE = {AVACS Technical Report},
VOLUME = {70},
}
Endnote
%0 Report
%A Damm, Werner
%A Ihlemann, Carsten
%A Sofronie-Stokkermans, Viorica
%+ External Organizations
Automation of Logic, MPI for Informatics, Max Planck Society
Automation of Logic, MPI for Informatics, Max Planck Society
%T PTIME Parametric Verification of Safety Properties for Reasonable Linear Hybrid Automata :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0010-14F5-F
%F EDOC: 619013
%F OTHER: Local-ID: C125716C0050FB51-DEB90D4E9EAE27B7C1257855003AF8EE-Damm-Ihlemann-Sofronie-Stokkermans2011-report
%Y SFB/TR 14 AVACS
%D 2011
%P 31 p.
%X This paper identifies an industrially relevant class of
linear hybrid automata (LHA) called reasonable LHA for
which parametric verification of convex safety properties
with exhaustive entry states can be verified in polynomial
time and time-bounded reachability can be decided
in nondeterministic polynomial time for non-parametric
verification and in exponential time for
parametric verification. Properties with exhaustive entry
states are restricted to runs originating in
a (specified) inner envelope of some mode-invariant.
Deciding whether an LHA is reasonable is
shown to be decidable in polynomial time.
%B AVACS Technical Report
%N 70
%@ false
%U http://www.avacs.org/fileadmin/Publikationen/Open/avacs_technical_report_070.pdf
Integrating Incremental Flow Pipes into a Symbolic Model Checker for Hybrid Systems
W. Damm, S. Disch, W. Hagemann, C. Scholl, U. Waldmann and B. Wirtz
Technical Report, 2011
W. Damm, S. Disch, W. Hagemann, C. Scholl, U. Waldmann and B. Wirtz
Technical Report, 2011
Abstract
We describe an approach to integrate incremental flow pipe computation into a
fully symbolic backward model checker for hybrid systems. Our method combines
the advantages of symbolic state set representation, such as the ability to
deal with large numbers of Boolean variables, with an efficient way to handle
continuous flows defined by linear differential equations, possibly including
bounded disturbances.
Export
BibTeX
@techreport{DammDierksHagemannEtAl2011,
TITLE = {Integrating Incremental Flow Pipes into a Symbolic Model Checker for Hybrid Systems},
AUTHOR = {Damm, Werner and Disch, Stefan and Hagemann, Willem and Scholl, Christoph and Waldmann, Uwe and Wirtz, Boris},
EDITOR = {Becker, Bernd and Damm, Werner and Finkbeiner, Bernd and Fr{\"a}nzle, Martin and Olderog, Ernst-R{\"u}diger and Podelski, Andreas},
LANGUAGE = {eng},
ISSN = {1860-9821},
NUMBER = {ATR76},
INSTITUTION = {SFB/TR 14 AVACS},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2011},
DATE = {2011},
ABSTRACT = {We describe an approach to integrate incremental flow pipe computation into a fully symbolic backward model checker for hybrid systems. Our method combines the advantages of symbolic state set representation, such as the ability to deal with large numbers of Boolean variables, with an efficient way to handle continuous flows defined by linear differential equations, possibly including bounded disturbances.},
TYPE = {AVACS Technical Report},
VOLUME = {76},
}
Endnote
%0 Report
%A Damm, Werner
%A Disch, Stefan
%A Hagemann, Willem
%A Scholl, Christoph
%A Waldmann, Uwe
%A Wirtz, Boris
%E Becker, Bernd
%E Damm, Werner
%E Finkbeiner, Bernd
%E Fränzle, Martin
%E Olderog, Ernst-Rüdiger
%E Podelski, Andreas
%+ External Organizations
External Organizations
Automation of Logic, MPI for Informatics, Max Planck Society
External Organizations
Automation of Logic, MPI for Informatics, Max Planck Society
External Organizations
External Organizations
External Organizations
External Organizations
External Organizations
External Organizations
External Organizations
%T Integrating Incremental Flow Pipes into a Symbolic Model Checker for Hybrid Systems :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-001A-150E-7
%Y SFB/TR 14 AVACS
%C Saarbrücken
%D 2011
%X We describe an approach to integrate incremental flow pipe computation into a
fully symbolic backward model checker for hybrid systems. Our method combines
the advantages of symbolic state set representation, such as the ability to
deal with large numbers of Boolean variables, with an efficient way to handle
continuous flows defined by linear differential equations, possibly including
bounded disturbances.
%B AVACS Technical Report
%N 76
%@ false
Large-scale Matrix Factorization with Distributed Stochastic Gradient Descent
R. Gemulla, P. J. Haas, E. Nijkamp and Y. Sismanis
Technical Report, 2011
R. Gemulla, P. J. Haas, E. Nijkamp and Y. Sismanis
Technical Report, 2011
Abstract
As Web 2.0 and enterprise-cloud applications have proliferated, data mining
algorithms increasingly need to be (re)designed to handle web-scale
datasets. For this reason, low-rank matrix factorization has received a lot
of attention in recent years, since it is fundamental to a variety of mining
tasks, such as topic detection and collaborative filtering, that are
increasingly being applied to massive datasets. We provide a novel algorithm
to approximately factor large matrices with millions of rows, millions of
columns, and billions of nonzero elements. Our approach rests on stochastic
gradient descent (SGD), an iterative stochastic optimization algorithm; the
idea is to exploit the special structure of the matrix factorization problem
to develop a new "stratified" SGD variant that can be fully distributed
and run on web-scale datasets using, e.g., MapReduce. The resulting
distributed SGD factorization algorithm, called DSGD, provides good speed-up
and handles a wide variety of matrix factorizations. We establish
convergence properties of DSGD using results from stochastic approximation
theory and regenerative process theory, and also describe the practical
techniques used to optimize performance in our DSGD
implementation. Experiments suggest that DSGD converges significantly faster
and has better scalability properties than alternative algorithms.
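The stratification idea can be sketched as a single-process simulation: in each sub-epoch, the processed blocks share no rows or columns, so their updates could run in parallel without conflicts. The squared-loss updates, block layout, and hyperparameters below are illustrative assumptions, not the exact DSGD scheme:

```python
import numpy as np

def dsgd(V, rank=2, d=2, epochs=300, lr=0.05, seed=0):
    """Toy stratified SGD for V ~ W @ H. In sub-epoch s, the d blocks
    (row block b, column block (b + s) % d) are pairwise disjoint in both
    rows and columns; a distributed runner could process them in parallel."""
    m, n = V.shape
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.1, size=(m, rank))
    H = rng.normal(scale=0.1, size=(rank, n))
    row_blocks = np.array_split(np.arange(m), d)
    col_blocks = np.array_split(np.arange(n), d)
    for _ in range(epochs):
        for s in range(d):                      # one stratum per sub-epoch
            for b in range(d):                  # independent blocks of the stratum
                for i in row_blocks[b]:
                    for j in col_blocks[(b + s) % d]:
                        wi = W[i].copy()        # use old W[i] for both updates
                        err = V[i, j] - wi @ H[:, j]
                        W[i] += lr * err * H[:, j]
                        H[:, j] += lr * err * wi
    return W, H

V = np.array([[1.0, 2.0],
              [2.0, 4.0]])  # a rank-1 matrix for the sanity check
W, H = dsgd(V)
```

Here the loop over blocks is sequential; the point of the stratification is that it need not be.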
Export
BibTeX
@techreport{gemulla11,
TITLE = {Large-scale Matrix Factorization with Distributed Stochastic Gradient Descent},
AUTHOR = {Gemulla, Rainer and Haas, Peter J. and Nijkamp, Erik and Sismanis, Yannis},
LANGUAGE = {eng},
URL = {http://www.almaden.ibm.com/cs/people/peterh/dsgdTechRep.pdf},
LOCALID = {Local-ID: C1256DBF005F876D-5B618B1FF070E981C125784D0044B0D1-gemulla11},
INSTITUTION = {IBM Research Division},
ADDRESS = {San Jose, CA},
YEAR = {2011},
ABSTRACT = {As Web 2.0 and enterprise-cloud applications have proliferated, data mining algorithms increasingly need to be (re)designed to handle web-scale datasets. For this reason, low-rank matrix factorization has received a lot of attention in recent years, since it is fundamental to a variety of mining tasks, such as topic detection and collaborative filtering, that are increasingly being applied to massive datasets. We provide a novel algorithm to approximately factor large matrices with millions of rows, millions of columns, and billions of nonzero elements. Our approach rests on stochastic gradient descent (SGD), an iterative stochastic optimization algorithm; the idea is to exploit the special structure of the matrix factorization problem to develop a new ``stratified'' SGD variant that can be fully distributed and run on web-scale datasets using, e.g., MapReduce. The resulting distributed SGD factorization algorithm, called DSGD, provides good speed-up and handles a wide variety of matrix factorizations. We establish convergence properties of DSGD using results from stochastic approximation theory and regenerative process theory, and also describe the practical techniques used to optimize performance in our DSGD implementation. Experiments suggest that DSGD converges significantly faster and has better scalability properties than alternative algorithms.},
TYPE = {IBM Research Report},
VOLUME = {RJ10481},
}
Endnote
%0 Report
%A Gemulla, Rainer
%A Haas, Peter J.
%A Nijkamp, Erik
%A Sismanis, Yannis
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society
External Organizations
External Organizations
External Organizations
%T Large-scale Matrix Factorization with Distributed Stochastic Gradient Descent :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0010-147F-E
%F EDOC: 618949
%U http://www.almaden.ibm.com/cs/people/peterh/dsgdTechRep.pdf
%F OTHER: Local-ID: C1256DBF005F876D-5B618B1FF070E981C125784D0044B0D1-gemulla11
%Y IBM Research Division
%C San Jose, CA
%D 2011
%X As Web 2.0 and enterprise-cloud applications have proliferated, data mining
algorithms increasingly need to be (re)designed to handle web-scale
datasets. For this reason, low-rank matrix factorization has received a lot
of attention in recent years, since it is fundamental to a variety of mining
tasks, such as topic detection and collaborative filtering, that are
increasingly being applied to massive datasets. We provide a novel algorithm
to approximately factor large matrices with millions of rows, millions of
columns, and billions of nonzero elements. Our approach rests on stochastic
gradient descent (SGD), an iterative stochastic optimization algorithm; the
idea is to exploit the special structure of the matrix factorization problem
to develop a new ``stratified'' SGD variant that can be fully distributed
and run on web-scale datasets using, e.g., MapReduce. The resulting
distributed SGD factorization algorithm, called DSGD, provides good speed-up
and handles a wide variety of matrix factorizations. We establish
convergence properties of DSGD using results from stochastic approximation
theory and regenerative process theory, and also describe the practical
techniques used to optimize performance in our DSGD
implementation. Experiments suggest that DSGD converges significantly faster
and has better scalability properties than alternative algorithms.
%B IBM Research Report
%N RJ10481
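The stratified SGD scheme summarized in the abstract above can be illustrated with a small, self-contained sketch. This is a toy single-process illustration of the stratification idea only, not the authors' DSGD implementation; the function name, parameters, and the tiny test matrix are all invented. Rows and columns are split into d blocks, and each sub-epoch visits a "stratum" of d blocks that share no rows or columns, which is what allows the real algorithm to update them in parallel:

```python
import random

def dsgd_factorize(V, rank=2, d=2, epochs=200, lr=0.01, reg=0.05, seed=0):
    """Toy stratified SGD for matrix factorization (V ~ W @ H^T).

    V is a dict {(i, j): value} of observed entries of an m x n matrix.
    In sub-epoch s, only entries whose column block equals
    (row block + s) mod d are updated; those blocks are disjoint in
    rows and columns, so DSGD could process them in parallel.
    """
    rng = random.Random(seed)
    m = 1 + max(i for i, _ in V)
    n = 1 + max(j for _, j in V)
    W = [[rng.uniform(0, 1) for _ in range(rank)] for _ in range(m)]
    H = [[rng.uniform(0, 1) for _ in range(rank)] for _ in range(n)]
    row_blk = lambda i: i * d // m
    col_blk = lambda j: j * d // n
    for _ in range(epochs):
        for s in range(d):  # one stratum per sub-epoch
            for (i, j), v in V.items():
                if col_blk(j) != (row_blk(i) + s) % d:
                    continue  # entry belongs to another stratum
                err = v - sum(W[i][k] * H[j][k] for k in range(rank))
                for k in range(rank):
                    wik, hjk = W[i][k], H[j][k]
                    W[i][k] += lr * (err * hjk - reg * wik)
                    H[j][k] += lr * (err * wik - reg * hjk)
    return W, H

# Usage: factor a tiny 4x4 rank-1 matrix and measure the fit.
V = {(i, j): (i + 1) * (j + 1) / 4.0 for i in range(4) for j in range(4)}
W, H = dsgd_factorize(V)
rmse = (sum((v - sum(W[i][k] * H[j][k] for k in range(2))) ** 2
            for (i, j), v in V.items()) / len(V)) ** 0.5
```

The stratification adds no approximation here; it only reorders the SGD updates so that disjoint blocks could be distributed.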
How Not to Be Seen -- Inpainting Dynamic Objects in Crowded Scenes
M. Granados, J. Tompkin, K. Kim, O. Grau, J. Kautz and C. Theobalt
Technical Report, 2011
Abstract
Removing dynamic objects from videos is an extremely challenging problem that
even visual effects professionals often solve with time-consuming manual
frame-by-frame editing.
We propose a new approach to video completion that can deal with complex scenes
containing dynamic background and non-periodical moving objects.
We build upon the idea that the spatio-temporal hole left by a removed object
can be filled with data available on other regions of the video where the
occluded objects were visible.
Video completion is performed by solving a large combinatorial problem that
searches for an optimal pattern of pixel offsets from occluded to unoccluded
regions.
Our contribution includes an energy functional that generalizes well over
different scenes with stable parameters, and that has the desirable convergence
properties for a graph-cut-based optimization.
We provide an interface to guide the completion process that both reduces
computation time and allows for efficient correction of small errors in the
result.
We demonstrate that our approach can effectively complete complex,
high-resolution occlusions that are greater in difficulty than what existing
methods have shown.
Export
BibTeX
@techreport{Granados2011TR,
TITLE = {How Not to Be Seen -- Inpainting Dynamic Objects in Crowded Scenes},
AUTHOR = {Granados, Miguel and Tompkin, James and Kim, Kwang and Grau, O. and Kautz, Jan and Theobalt, Christian},
LANGUAGE = {eng},
NUMBER = {MPI-I-2011-4-001},
INSTITUTION = {MPI f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2011},
ABSTRACT = {Removing dynamic objects from videos is an extremely challenging problem that even visual effects professionals often solve with time-consuming manual frame-by-frame editing. We propose a new approach to video completion that can deal with complex scenes containing dynamic background and non-periodical moving objects. We build upon the idea that the spatio-temporal hole left by a removed object can be filled with data available on other regions of the video where the occluded objects were visible. Video completion is performed by solving a large combinatorial problem that searches for an optimal pattern of pixel offsets from occluded to unoccluded regions. Our contribution includes an energy functional that generalizes well over different scenes with stable parameters, and that has the desirable convergence properties for a graph-cut-based optimization. We provide an interface to guide the completion process that both reduces computation time and allows for efficient correction of small errors in the result. We demonstrate that our approach can effectively complete complex, high-resolution occlusions that are greater in difficulty than what existing methods have shown.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Granados, Miguel
%A Tompkin, James
%A Kim, Kwang
%A Grau, O.
%A Kautz, Jan
%A Theobalt, Christian
%+ Computer Graphics, MPI for Informatics, Max Planck Society
External Organizations
Computer Graphics, MPI for Informatics, Max Planck Society
External Organizations
External Organizations
Computer Graphics, MPI for Informatics, Max Planck Society
%T How Not to Be Seen -- Inpainting Dynamic Objects in Crowded Scenes :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0010-13C5-3
%F EDOC: 618872
%Y MPI für Informatik
%C Saarbrücken
%D 2011
%P 35 p.
%X Removing dynamic objects from videos is an extremely challenging problem that
even visual effects professionals often solve with time-consuming manual
frame-by-frame editing.
We propose a new approach to video completion that can deal with complex scenes
containing dynamic background and non-periodical moving objects.
We build upon the idea that the spatio-temporal hole left by a removed object
can be filled with data available on other regions of the video where the
occluded objects were visible.
Video completion is performed by solving a large combinatorial problem that
searches for an optimal pattern of pixel offsets from occluded to unoccluded
regions.
Our contribution includes an energy functional that generalizes well over
different scenes with stable parameters, and that has the desirable convergence
properties for a graph-cut-based optimization.
We provide an interface to guide the completion process that both reduces
computation time and allows for efficient correction of small errors in the
result.
We demonstrate that our approach can effectively complete complex,
high-resolution occlusions that are greater in difficulty than what existing
methods have shown.
%B Research Report
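The abstract above formulates completion as an optimization over pixel offsets from occluded to unoccluded regions. The 1D toy sketch below conveys only the offset-copy idea; unlike the report's graph-cut optimization over per-pixel offsets, it selects a single global offset via a simple boundary-consistency energy, and all names and the toy data are invented:

```python
def complete_by_offset(signal, hole, offsets):
    """Fill occluded positions by copying from signal[i + offset],
    choosing the offset whose source context best matches the data
    surrounding the hole (a drastic simplification of the report's
    per-pixel graph-cut energy)."""
    known = lambda i: 0 <= i < len(signal) and i not in hole
    def energy(off):
        # every hole position must be copied from known data
        if not all(known(i + off) for i in hole):
            return float("inf")
        e, lo, hi = 0.0, min(hole) - 1, max(hole) + 1
        # the source patch's context must match the hole's context
        for b in (lo, hi):
            if known(b) and known(b + off):
                e += abs(signal[b + off] - signal[b])
        return e
    best = min(offsets, key=energy)
    out = signal[:]
    for i in hole:
        out[i] = signal[i + best]
    return out, best

# A periodic signal with two occluded samples (true values 2 and 3);
# the offset of one full period (-4) has zero context mismatch.
sig = [0, 1, 2, 3, 0, 1, None, None, 0, 1, 2, 3, 0, 1, 2, 3]
filled, off = complete_by_offset(sig, [6, 7], offsets=[-4, 3, 2])
```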
Efficient Learning-based Image Enhancement : Application to Compression Artifact Removal and Super-resolution
K. I. Kim, Y. Kwon, J. H. Kim and C. Theobalt
Technical Report, 2011
Abstract
Many computer vision and computational photography applications
essentially solve an image enhancement problem. The image has been
deteriorated by a specific noise process, such as aberrations from camera
optics and compression artifacts, that we would like to remove. We
describe a framework for learning-based image enhancement. At the core of
our algorithm lies a generic regularization framework that comprises a
prior on natural images, as well as an application-specific conditional
model based on Gaussian processes. In contrast to prior learning-based
approaches, our algorithm can instantly learn task-specific degradation
models from sample images which enables users to easily adapt the
algorithm to a specific problem and data set of interest. This is
facilitated by our efficient approximation scheme of large-scale Gaussian
processes. We demonstrate the efficiency and effectiveness of our approach
by applying it to example enhancement applications including single-image
super-resolution, as well as artifact removal in JPEG- and JPEG
2000-encoded images.
Export
BibTeX
@techreport{KimKwonKimTheobalt2011,
TITLE = {Efficient Learning-based Image Enhancement : Application to Compression Artifact Removal and Super-resolution},
AUTHOR = {Kim, Kwang In and Kwon, Younghee and Kim, Jin Hyung and Theobalt, Christian},
LANGUAGE = {eng},
ISSN = {0946-011X},
NUMBER = {MPI-I-2011-4-002},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2011},
ABSTRACT = {Many computer vision and computational photography applications essentially solve an image enhancement problem. The image has been deteriorated by a specific noise process, such as aberrations from camera optics and compression artifacts, that we would like to remove. We describe a framework for learning-based image enhancement. At the core of our algorithm lies a generic regularization framework that comprises a prior on natural images, as well as an application-specific conditional model based on Gaussian processes. In contrast to prior learning-based approaches, our algorithm can instantly learn task-specific degradation models from sample images which enables users to easily adapt the algorithm to a specific problem and data set of interest. This is facilitated by our efficient approximation scheme of large-scale Gaussian processes. We demonstrate the efficiency and effectiveness of our approach by applying it to example enhancement applications including single-image super-resolution, as well as artifact removal in JPEG- and JPEG 2000-encoded images.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Kim, Kwang In
%A Kwon, Younghee
%A Kim, Jin Hyung
%A Theobalt, Christian
%+ Computer Graphics, MPI for Informatics, Max Planck Society
External Organizations
External Organizations
Computer Graphics, MPI for Informatics, Max Planck Society
%T Efficient Learning-based Image Enhancement : Application to Compression Artifact Removal and Super-resolution :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0027-13A3-E
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2011
%X Many computer vision and computational photography applications
essentially solve an image enhancement problem. The image has been
deteriorated by a specific noise process, such as aberrations from camera
optics and compression artifacts, that we would like to remove. We
describe a framework for learning-based image enhancement. At the core of
our algorithm lies a generic regularization framework that comprises a
prior on natural images, as well as an application-specific conditional
model based on Gaussian processes. In contrast to prior learning-based
approaches, our algorithm can instantly learn task-specific degradation
models from sample images which enables users to easily adapt the
algorithm to a specific problem and data set of interest. This is
facilitated by our efficient approximation scheme of large-scale Gaussian
processes. We demonstrate the efficiency and effectiveness of our approach
by applying it to example enhancement applications including single-image
super-resolution, as well as artifact removal in JPEG- and JPEG
2000-encoded images.
%B Research Report
%@ false
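The abstract above builds an application-specific conditional model based on Gaussian processes, learned from sample images. As a minimal stand-in (not the report's large-scale approximation scheme), the sketch below fits a tiny 1D Gaussian-process regressor from degraded-to-clean sample pairs; every name and the toy data are invented:

```python
import math

def rbf(a, b, ls=0.5):
    """Squared-exponential kernel on scalars."""
    return math.exp(-((a - b) ** 2) / (2 * ls * ls))

def solve(A, y):
    """Naive Gauss-Jordan elimination (fine for tiny systems)."""
    n = len(A)
    M = [row[:] + [y[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[c][c] != 0:
                f = M[r][c] / M[c][c]
                M[r] = [mr - f * mc for mr, mc in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def gp_regress(xs, ys, noise=0.1):
    """GP posterior mean: learn a degraded -> clean mapping from
    example pairs (1D toy version of the abstract's conditional model)."""
    n = len(xs)
    K = [[rbf(xs[i], xs[j]) + (noise if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    alpha = solve(K, ys)
    return lambda x: sum(a * rbf(x, xi) for a, xi in zip(alpha, xs))

# Train on noisy samples of a roughly linear clean signal.
xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [0.05, 0.48, 1.02, 1.49, 1.95]
f = gp_regress(xs, ys)
pred = f(1.25)
```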
Towards Verification of the Pastry Protocol using TLA+
T. Lu, S. Merz and C. Weidenbach
Technical Report, 2011
Abstract
Pastry is an algorithm that provides a scalable distributed hash table over
an underlying P2P network. Several implementations of Pastry are available
and have been applied in practice, but no attempt has so far been made to
formally describe the algorithm or to verify its properties. Since Pastry combines
rather complex data structures, asynchronous communication, concurrency,
resilience to churn and fault tolerance, it makes an interesting target
for verification. We have modeled Pastry's core routing algorithms and communication
protocol in the specification language TLA+. In order to validate
the model and to search for bugs we employed the TLA+ model checker TLC
to analyze several qualitative properties. We obtained non-trivial insights into
the behavior of Pastry through the model checking analysis. Furthermore,
we started to verify Pastry using the very same model and the interactive
theorem prover TLAPS for TLA+. A first result is the reduction of global
Pastry correctness properties to invariants of the underlying data structures.
Export
BibTeX
@techreport{LuMerzWeidenbach2011,
TITLE = {Towards Verification of the {Pastry} Protocol using {TLA+}},
AUTHOR = {Lu, Tianxiang and Merz, Stephan and Weidenbach, Christoph},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2011-RG1-002},
NUMBER = {MPI-I-2011-RG1-002},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2011},
DATE = {2011},
ABSTRACT = {Pastry is an algorithm that provides a scalable distributed hash table over an underlying P2P network. Several implementations of Pastry are available and have been applied in practice, but no attempt has so far been made to formally describe the algorithm or to verify its properties. Since Pastry combines rather complex data structures, asynchronous communication, concurrency, resilience to churn and fault tolerance, it makes an interesting target for verification. We have modeled Pastry's core routing algorithms and communication protocol in the specification language TLA+. In order to validate the model and to search for bugs we employed the TLA+ model checker TLC to analyze several qualitative properties. We obtained non-trivial insights into the behavior of Pastry through the model checking analysis. Furthermore, we started to verify Pastry using the very same model and the interactive theorem prover TLAPS for TLA+. A first result is the reduction of global Pastry correctness properties to invariants of the underlying data structures.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Lu, Tianxiang
%A Merz, Stephan
%A Weidenbach, Christoph
%+ Automation of Logic, MPI for Informatics, Max Planck Society
External Organizations
Automation of Logic, MPI for Informatics, Max Planck Society
%T Towards Verification of the Pastry Protocol using TLA+ :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6975-A
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2011-RG1-002
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2011
%P 51 p.
%X Pastry is an algorithm that provides a scalable distributed hash table over
an underlying P2P network. Several implementations of Pastry are available
and have been applied in practice, but no attempt has so far been made to
formally describe the algorithm or to verify its properties. Since Pastry combines
rather complex data structures, asynchronous communication, concurrency,
resilience to churn and fault tolerance, it makes an interesting target
for verification. We have modeled Pastry's core routing algorithms and communication
protocol in the specification language TLA+. In order to validate
the model and to search for bugs we employed the TLA+ model checker TLC
to analyze several qualitative properties. We obtained non-trivial insights into
the behavior of Pastry through the model checking analysis. Furthermore,
we started to verify Pastry using the very same model and the interactive
theorem prover TLAPS for TLA+. A first result is the reduction of global
Pastry correctness properties to invariants of the underlying data structures.
%B Research Report
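The abstract above relies on the explicit-state model checker TLC to analyze qualitative properties. The sketch below is a toy breadth-first invariant checker in the same spirit, applied to a trivial token-ring system; it is not TLC, and all names and the toy system are invented:

```python
from collections import deque

def check_invariant(init, next_states, invariant, limit=10_000):
    """Tiny explicit-state model checking sketch: breadth-first
    exploration of reachable states, checking an invariant in each.
    Returns (True, None) if the invariant holds everywhere explored,
    or (False, state) with a violating state."""
    seen = {init}
    queue = deque([init])
    while queue and len(seen) < limit:
        state = queue.popleft()
        if not invariant(state):
            return False, state  # counterexample state
        for nxt in next_states(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return True, None

# A toy ring of 4 nodes where a token moves clockwise; the invariant
# "exactly one node holds the token" holds in every reachable state.
init = (1, 0, 0, 0)
def moves(state):
    i = state.index(1)
    nxt = [0] * len(state)
    nxt[(i + 1) % len(state)] = 1
    return [tuple(nxt)]
ok, bad = check_invariant(init, moves, lambda s: sum(s) == 1)
```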
Finding Images of Rare and Ambiguous Entities
B. Taneva, M. Kacimi El Hassani and G. Weikum
Technical Report, 2011
Export
BibTeX
@techreport{TanevaKacimiWeikum2011,
TITLE = {Finding Images of Rare and Ambiguous Entities},
AUTHOR = {Taneva, Bilyana and Kacimi El Hassani, M. and Weikum, Gerhard},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2011-5-002},
NUMBER = {MPI-I-2011-5-002},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2011},
DATE = {2011},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Taneva, Bilyana
%A Kacimi El Hassani, M.
%A Weikum, Gerhard
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
%T Finding Images of Rare and Ambiguous Entities :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6581-8
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2011-5-002
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2011
%P 30 p.
%B Research Report
Videoscapes: Exploring Unstructured Video Collections
J. Tompkin, K. I. Kim, J. Kautz and C. Theobalt
Technical Report, 2011
Export
BibTeX
@techreport{TompkinKimKautzTheobalt2011,
TITLE = {Videoscapes: Exploring Unstructured Video Collections},
AUTHOR = {Tompkin, James and Kim, Kwang In and Kautz, Jan and Theobalt, Christian},
LANGUAGE = {eng},
ISSN = {0946-011X},
NUMBER = {MPI-I-2011-4-003},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2011},
DATE = {2011},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Tompkin, James
%A Kim, Kwang In
%A Kautz, Jan
%A Theobalt, Christian
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T Videoscapes: Exploring Unstructured Video Collections :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-F76C-8
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2011
%P 32 p.
%B Research Report
%@ false
2010
A New Combinatorial Approach to Parametric Path Analysis
E. Althaus, S. Altmeyer and R. Naujoks
Technical Report, 2010
Abstract
Hard real-time systems require tasks to finish in time. To guarantee the
timeliness of such a system, static timing analyses derive upper bounds on the
worst-case execution time of tasks. There are two types of timing
analyses: numeric and parametric ones. A numeric analysis derives a numeric
timing bound and, to this end, assumes all information such as loop bounds to
be given a priori.
If these bounds are unknown during analysis time, a parametric analysis can
compute a timing formula parametric in these variables.
A performance bottleneck of timing analyses, numeric and especially parametric,
can be the so-called path analysis, which determines the path in the analyzed
task with the longest execution time bound.
In this paper, we present a new approach to the path analysis.
This approach exploits the rather regular structure of software for hard
real-time and safety-critical systems.
As we show in the evaluation of this paper, we strongly improve upon former
techniques in terms of precision and runtime in the parametric case. Even in
the numeric case, our approach matches up to state-of-the-art techniques and
may be an alternative to commercial tools employed for path analysis.
Export
BibTeX
@techreport{Naujoks10a,
TITLE = {A New Combinatorial Approach to Parametric Path Analysis},
AUTHOR = {Althaus, Ernst and Altmeyer, Sebastian and Naujoks, Rouven},
LANGUAGE = {eng},
ISSN = {1860-9821},
NUMBER = {ATR58},
LOCALID = {Local-ID: C1256428004B93B8-7741AE14A57A7C00C125781100477B84-Naujoks10a},
INSTITUTION = {SFB/TR 14 AVACS},
YEAR = {2010},
DATE = {2010},
ABSTRACT = {Hard real-time systems require tasks to finish in time. To guarantee the timeliness of such a system, static timing analyses derive upper bounds on the \emph{worst-case execution time} of tasks. There are two types of timing analyses: numeric and parametric ones. A numeric analysis derives a numeric timing bound and, to this end, assumes all information such as loop bounds to be given a priori. If these bounds are unknown during analysis time, a parametric analysis can compute a timing formula parametric in these variables. A performance bottleneck of timing analyses, numeric and especially parametric, can be the so-called path analysis, which determines the path in the analyzed task with the longest execution time bound. In this paper, we present a new approach to the path analysis. This approach exploits the rather regular structure of software for hard real-time and safety-critical systems. As we show in the evaluation of this paper, we strongly improve upon former techniques in terms of precision and runtime in the parametric case. Even in the numeric case, our approach matches up to state-of-the-art techniques and may be an alternative to commercial tools employed for path analysis.},
TYPE = {AVACS Technical Report},
VOLUME = {58},
}
Endnote
%0 Report
%A Althaus, Ernst
%A Altmeyer, Sebastian
%A Naujoks, Rouven
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
External Organizations
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T A New Combinatorial Approach to Parametric Path Analysis :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-15F7-8
%F EDOC: 536763
%F OTHER: Local-ID: C1256428004B93B8-7741AE14A57A7C00C125781100477B84-Naujoks10a
%Y SFB/TR 14 AVACS
%D 2010
%P 33 p.
%X Hard real-time systems require tasks to finish in time. To guarantee the
timeliness of such a system, static timing analyses derive upper bounds on the
worst-case execution time of tasks. There are two types of timing
analyses: numeric and parametric ones. A numeric analysis derives a numeric
timing bound and, to this end, assumes all information such as loop bounds to
be given a priori.
If these bounds are unknown during analysis time, a parametric analysis can
compute a timing formula parametric in these variables.
A performance bottleneck of timing analyses, numeric and especially parametric,
can be the so-called path analysis, which determines the path in the analyzed
task with the longest execution time bound.
In this paper, we present a new approach to the path analysis.
This approach exploits the rather regular structure of software for hard
real-time and safety-critical systems.
As we show in the evaluation of this paper, we strongly improve upon former
techniques in terms of precision and runtime in the parametric case. Even in
the numeric case, our approach matches up to state-of-the-art techniques and
may be an alternative to commercial tools employed for path analysis.
%B AVACS Technical Report
%N 58
%@ false
%U http://www.avacs.org/Publikationen/Open/avacs_technical_report_058.pdf
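The path analysis described above determines the path in the analyzed task with the longest execution-time bound. For the numeric case on an acyclic control-flow graph this reduces to a longest-path computation, sketched below; the report's actual contribution (parametric bounds, structured loops) goes well beyond this, and all names and the toy CFG are invented:

```python
from functools import lru_cache

def longest_path(cfg, costs, entry, exit_blk):
    """Toy numeric path analysis: the largest execution-time bound
    over all entry-to-exit paths of an acyclic control-flow graph.

    cfg:   dict mapping a basic block to its successor blocks.
    costs: dict mapping a basic block to its WCET contribution.
    """
    @lru_cache(maxsize=None)
    def bound(block):
        if block == exit_blk:
            return costs[block]
        return costs[block] + max(bound(s) for s in cfg[block])
    return bound(entry)

# Diamond CFG: A -> {B, C} -> D; the bound follows the costlier branch.
cfg = {"A": ["B", "C"], "B": ["D"], "C": ["D"]}
costs = {"A": 2, "B": 7, "C": 3, "D": 1}
wcet = longest_path(cfg, costs, "A", "D")
```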
Efficient Temporal Keyword Queries over Versioned Text
A. Anand, S. Bedathur, K. Berberich and R. Schenkel
Technical Report, 2010
Export
BibTeX
@techreport{AnandBedathurBerberichSchenkel2010,
TITLE = {Efficient Temporal Keyword Queries over Versioned Text},
AUTHOR = {Anand, Avishek and Bedathur, Srikanta and Berberich, Klaus and Schenkel, Ralf},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2010-5-003},
NUMBER = {MPI-I-2010-5-003},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2010},
DATE = {2010},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Anand, Avishek
%A Bedathur, Srikanta
%A Berberich, Klaus
%A Schenkel, Ralf
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
%T Efficient Temporal Keyword Queries over Versioned Text :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-65A0-2
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2010-5-003
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2010
%P 39 p.
%B Research Report
A Generic Algebraic Kernel for Non-linear Geometric Applications
E. Berberich, M. Hemmer and M. Kerber
Technical Report, 2010
Export
BibTeX
@techreport{bhk-ak2-inria-2010,
TITLE = {A Generic Algebraic Kernel for Non-linear Geometric Applications},
AUTHOR = {Berberich, Eric and Hemmer, Michael and Kerber, Michael},
LANGUAGE = {eng},
URL = {http://hal.inria.fr/inria-00480031/fr/},
NUMBER = {7274},
LOCALID = {Local-ID: C1256428004B93B8-4DF2B1DAA1910721C12577FB00348D67-bhk-ak2-inria-2010},
INSTITUTION = {INRIA},
ADDRESS = {Sophia Antipolis, France},
YEAR = {2010},
DATE = {2010},
TYPE = {Rapport de recherche / INRIA},
}
Endnote
%0 Report
%A Berberich, Eric
%A Hemmer, Michael
%A Kerber, Michael
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T A Generic Algebraic Kernel for Non-linear Geometric Applications :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-15EC-2
%F EDOC: 536754
%U http://hal.inria.fr/inria-00480031/fr/
%F OTHER: Local-ID: C1256428004B93B8-4DF2B1DAA1910721C12577FB00348D67-bhk-ak2-inria-2010
%Y INRIA
%C Sophia Antipolis, France
%D 2010
%P 20 p.
%B Rapport de recherche / INRIA
A Language Modeling Approach for Temporal Information Needs
K. Berberich, S. Bedathur, O. Alonso and G. Weikum
Technical Report, 2010
Abstract
This work addresses information needs that have a temporal
dimension conveyed by a temporal expression in the
user's query. Temporal expressions such as "in the 1990s" are
frequent, easily extractable, but not leveraged by existing
retrieval models. One challenge when dealing with them is their
inherent uncertainty. It is often unclear which exact time interval
a temporal expression refers to.
We integrate temporal expressions into a language modeling approach,
thus making them first-class citizens of the retrieval model and
considering their inherent uncertainty. Experiments on the New York
Times Annotated Corpus using Amazon Mechanical Turk to collect
queries and obtain relevance assessments demonstrate that
our approach yields substantial improvements in retrieval
effectiveness.
Export
BibTeX
@techreport{BerberichBedathurAlonsoWeikum2010,
TITLE = {A Language Modeling Approach for Temporal Information Needs},
AUTHOR = {Berberich, Klaus and Bedathur, Srikanta and Alonso, Omar and Weikum, Gerhard},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2010-5-001},
NUMBER = {MPI-I-2010-5-001},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2010},
DATE = {2010},
ABSTRACT = {This work addresses information needs that have a temporal dimension conveyed by a temporal expression in the user's query. Temporal expressions such as \textsf{``in the 1990s''} are frequent, easily extractable, but not leveraged by existing retrieval models. One challenge when dealing with them is their inherent uncertainty. It is often unclear which exact time interval a temporal expression refers to. We integrate temporal expressions into a language modeling approach, thus making them first-class citizens of the retrieval model and considering their inherent uncertainty. Experiments on the New York Times Annotated Corpus using Amazon Mechanical Turk to collect queries and obtain relevance assessments demonstrate that our approach yields substantial improvements in retrieval effectiveness.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Berberich, Klaus
%A Bedathur, Srikanta
%A Alonso, Omar
%A Weikum, Gerhard
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
%T A Language Modeling Approach for Temporal Information Needs :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-65AB-C
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2010-5-001
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2010
%P 29 p.
%X This work addresses information needs that have a temporal
dimension conveyed by a temporal expression in the
user's query. Temporal expressions such as "in the 1990s" are
frequent, easily extractable, but not leveraged by existing
retrieval models. One challenge when dealing with them is their
inherent uncertainty. It is often unclear which exact time interval
a temporal expression refers to.
We integrate temporal expressions into a language modeling approach,
thus making them first-class citizens of the retrieval model and
considering their inherent uncertainty. Experiments on the New York
Times Annotated Corpus using Amazon Mechanical Turk to collect
queries and obtain relevance assessments demonstrate that
our approach yields substantial improvements in retrieval
effectiveness.
%B Research Report
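The abstract above treats a temporal expression as inherently uncertain: it is often unclear which exact interval an expression like "in the 1990s" refers to. The toy scorer below enumerates the possible (begin, end) readings of a query interval and averages a containment probability over a document's time annotations; it is a simplification of the report's language-modeling approach, with invented names throughout:

```python
def temporal_score(query_interval, doc_intervals, mu=0.1):
    """Score a document's time annotations against an uncertain
    temporal expression (toy sketch, not the report's exact model).

    query_interval: (begin, end) years the expression may span.
    doc_intervals:  list of (begin, end) intervals in a document.
    A reading (b, e) is any subinterval of the query interval; each
    document interval is scored by the probability that a uniformly
    drawn reading is contained in it, smoothed toward a prior mu.
    """
    qb, qe = query_interval
    readings = [(b, e) for b in range(qb, qe + 1)
                for e in range(b, qe + 1)]
    def p_generate(db, de):
        hits = sum(1 for b, e in readings if db <= b and e <= de)
        return hits / len(readings)
    if not doc_intervals:
        return mu
    avg = sum(p_generate(db, de)
              for db, de in doc_intervals) / len(doc_intervals)
    return (1 - mu) * avg + mu * (1 / len(readings))

# A document annotated with 1990-1999 matches the uncertain query
# "in the 1990s" far better than one annotated only with 1994.
exact = temporal_score((1990, 1999), [(1990, 1999)])
narrow = temporal_score((1990, 1999), [(1994, 1994)])
```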
Real-time Text Queries with Tunable Term Pair Indexes
A. Broschart and R. Schenkel
Technical Report, 2010
Abstract
Term proximity scoring is an established means in information retrieval for improving result quality of full-text queries. Integrating such proximity scores into efficient query processing, however, has not been equally well studied. Existing methods make use of precomputed lists of documents where tuples of terms, usually pairs, occur together, usually incurring a huge index size compared to term-only indexes. This paper introduces a joint framework for trading off index size and result quality, and provides optimization techniques for tuning precomputed indexes towards either maximal result quality or maximal query processing performance, given an upper bound for the index size. The framework allows to selectively materialize lists for pairs based on a query log to further reduce index size. Extensive experiments with two large text collections demonstrate runtime improvements of several orders of magnitude over existing text-based processing techniques with reasonable index sizes.
Export
BibTeX
@techreport{BroschartSchenkel2010,
TITLE = {Real-time Text Queries with Tunable Term Pair Indexes},
AUTHOR = {Broschart, Andreas and Schenkel, Ralf},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2010-5-006},
NUMBER = {MPI-I-2010-5-006},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2010},
DATE = {2010},
ABSTRACT = {Term proximity scoring is an established means in information retrieval for improving result quality of full-text queries. Integrating such proximity scores into efficient query processing, however, has not been equally well studied. Existing methods make use of precomputed lists of documents where tuples of terms, usually pairs, occur together, usually incurring a huge index size compared to term-only indexes. This paper introduces a joint framework for trading off index size and result quality, and provides optimization techniques for tuning precomputed indexes towards either maximal result quality or maximal query processing performance, given an upper bound for the index size. The framework allows to selectively materialize lists for pairs based on a query log to further reduce index size. Extensive experiments with two large text collections demonstrate runtime improvements of several orders of magnitude over existing text-based processing techniques with reasonable index sizes.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Broschart, Andreas
%A Schenkel, Ralf
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
%T Real-time Text Queries with Tunable Term Pair Indexes :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-658C-1
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2010-5-006
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2010
%P 41 p.
%X Term proximity scoring is an established means in information retrieval for improving result quality of full-text queries. Integrating such proximity scores into efficient query processing, however, has not been equally well studied. Existing methods make use of precomputed lists of documents where tuples of terms, usually pairs, occur together, usually incurring a huge index size compared to term-only indexes. This paper introduces a joint framework for trading off index size and result quality, and provides optimization techniques for tuning precomputed indexes towards either maximal result quality or maximal query processing performance, given an upper bound for the index size. The framework allows to selectively materialize lists for pairs based on a query log to further reduce index size. Extensive experiments with two large text collections demonstrate runtime improvements of several orders of magnitude over existing text-based processing techniques with reasonable index sizes.
%B Research Report
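The abstract above precomputes lists of per-document proximity scores for term pairs so that query processing never touches positional lists. A minimal sketch of such a pair index follows (invented names, toy 1/distance scoring within a window, not the report's tuned framework):

```python
from collections import defaultdict
from itertools import combinations

def build_pair_index(docs, window=5):
    """Toy term-pair index: for every pair of distinct terms
    co-occurring within `window` positions, precompute an accumulated
    proximity score per document."""
    index = defaultdict(dict)  # (t1, t2) -> {doc_id: score}
    for doc_id, terms in docs.items():
        for i, t1 in enumerate(terms):
            for j in range(i + 1, min(i + window + 1, len(terms))):
                t2 = terms[j]
                if t1 == t2:
                    continue
                pair = tuple(sorted((t1, t2)))
                index[pair][doc_id] = (index[pair].get(doc_id, 0.0)
                                       + 1.0 / (j - i))
    return index

def query(index, terms):
    """Rank documents by summing precomputed pair scores for all query
    term pairs; no positional lists are consulted at query time."""
    scores = defaultdict(float)
    for pair in combinations(sorted(set(terms)), 2):
        for doc_id, s in index.get(pair, {}).items():
            scores[doc_id] += s
    return sorted(scores.items(), key=lambda kv: -kv[1])

docs = {
    1: ["fast", "text", "query", "engine"],
    2: ["text", "mining", "and", "slow", "query"],
}
idx = build_pair_index(docs)
ranking = query(idx, ["text", "query"])
```

Document 1 ranks first because "text" and "query" are adjacent there, while document 2 separates them by three terms.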
LIVE: A Lineage-Supported Versioned DBMS
A. Das Sarma, M. Theobald and J. Widom
Technical Report, 2010
Export
BibTeX
@techreport{ilpubs-926,
TITLE = {{LIVE}: A Lineage-Supported Versioned {DBMS}},
AUTHOR = {Das Sarma, Anish and Theobald, Martin and Widom, Jennifer},
LANGUAGE = {eng},
URL = {http://ilpubs.stanford.edu:8090/926/},
NUMBER = {ILPUBS-926},
LOCALID = {Local-ID: C1256DBF005F876D-C48EC96138450196C12576B1003F58D3-ilpubs-926},
INSTITUTION = {Stanford University},
ADDRESS = {Stanford},
YEAR = {2010},
DATE = {2010},
TYPE = {Technical Report},
}
Endnote
%0 Report
%A Das Sarma, Anish
%A Theobald, Martin
%A Widom, Jennifer
%+ External Organizations
Databases and Information Systems, MPI for Informatics, Max Planck Society
External Organizations
%T LIVE: A Lineage-Supported Versioned DBMS :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-1512-A
%F EDOC: 536357
%U http://ilpubs.stanford.edu:8090/926/
%F OTHER: Local-ID: C1256DBF005F876D-C48EC96138450196C12576B1003F58D3-ilpubs-926
%Y Stanford University
%C Stanford
%D 2010
%P 13 p.
%B Technical Report
Query Relaxation for Entity-relationship Search
S. Elbassuoni, M. Ramanath and G. Weikum
Technical Report, 2010
S. Elbassuoni, M. Ramanath and G. Weikum
Technical Report, 2010
Export
BibTeX
@techreport{Elbassuoni-relax2010,
TITLE = {Query Relaxation for Entity-relationship Search},
AUTHOR = {Elbassuoni, Shady and Ramanath, Maya and Weikum, Gerhard},
LANGUAGE = {eng},
NUMBER = {MPI-I-2010-5-008},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2010},
DATE = {2010},
TYPE = {Report},
}
Endnote
%0 Report
%A Elbassuoni, Shady
%A Ramanath, Maya
%A Weikum, Gerhard
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
%T Query Relaxation for Entity-relationship Search :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0019-B30B-6
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2010
%B Report
Automatic Verification of Parametric Specifications with Complex Topologies
J. Faber, C. Ihlemann, S. Jacobs and V. Sofronie-Stokkermans
Technical Report, 2010
J. Faber, C. Ihlemann, S. Jacobs and V. Sofronie-Stokkermans
Technical Report, 2010
Abstract
The focus of this paper is on reducing the complexity in
verification by exploiting modularity at various levels:
in specification, in verification, and structurally.
\begin{itemize}
\item For specifications, we use the modular language CSP-OZ-DC,
which allows us to decouple verification tasks concerning
data from those concerning durations.
\item At the verification level, we exploit modularity in
theorem proving for rich data structures and use this for
invariant checking.
\item At the structural level, we analyze possibilities
for modular verification of systems consisting of various
components which interact.
\end{itemize}
We illustrate these ideas by automatically verifying safety
properties of a case study from the European Train Control
System standard, which extends previous examples by comprising a
complex track topology with lists of track segments and trains
with different routes.
Export
BibTeX
@techreport{faber-ihlemann-jacobs-sofronie-2010-report,
TITLE = {Automatic Verification of Parametric Specifications with Complex Topologies},
AUTHOR = {Faber, Johannes and Ihlemann, Carsten and Jacobs, Swen and Sofronie-Stokkermans, Viorica},
LANGUAGE = {eng},
ISSN = {1860-9821},
NUMBER = {ATR66},
LOCALID = {Local-ID: C125716C0050FB51-2E8AD7BA67FF4CB5C12577B4004D8EF8-faber-ihlemann-jacobs-sofronie-2010-report},
INSTITUTION = {SFB/TR 14 AVACS},
YEAR = {2010},
DATE = {2010},
ABSTRACT = {The focus of this paper is on reducing the complexity in verification by exploiting modularity at various levels: in specification, in verification, and structurally. \begin{itemize} \item For specifications, we use the modular language CSP-OZ-DC, which allows us to decouple verification tasks concerning data from those concerning durations. \item At the verification level, we exploit modularity in theorem proving for rich data structures and use this for invariant checking. \item At the structural level, we analyze possibilities for modular verification of systems consisting of various components which interact. \end{itemize} We illustrate these ideas by automatically verifying safety properties of a case study from the European Train Control System standard, which extends previous examples by comprising a complex track topology with lists of track segments and trains with different routes.},
TYPE = {AVACS Technical Report},
VOLUME = {66},
}
Endnote
%0 Report
%A Faber, Johannes
%A Ihlemann, Carsten
%A Jacobs, Swen
%A Sofronie-Stokkermans, Viorica
%+ Automation of Logic, MPI for Informatics, Max Planck Society
Automation of Logic, MPI for Informatics, Max Planck Society
Automation of Logic, MPI for Informatics, Max Planck Society
Automation of Logic, MPI for Informatics, Max Planck Society
%T Automatic Verification of Parametric Specifications with Complex Topologies :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-14A6-8
%F EDOC: 536341
%F OTHER: Local-ID: C125716C0050FB51-2E8AD7BA67FF4CB5C12577B4004D8EF8-faber-ihlemann-jacobs-sofronie-2010-report
%Y SFB/TR 14 AVACS
%D 2010
%P 40 p.
%X The focus of this paper is on reducing the complexity in
verification by exploiting modularity at various levels:
in specification, in verification, and structurally.
\begin{itemize}
\item For specifications, we use the modular language CSP-OZ-DC,
which allows us to decouple verification tasks concerning
data from those concerning durations.
\item At the verification level, we exploit modularity in
theorem proving for rich data structures and use this for
invariant checking.
\item At the structural level, we analyze possibilities
for modular verification of systems consisting of various
components which interact.
\end{itemize}
We illustrate these ideas by automatically verifying safety
properties of a case study from the European Train Control
System standard, which extends previous examples by comprising a
complex track topology with lists of track segments and trains
with different routes.
%B AVACS Technical Report
%N 66
%@ false
YAGO2: A Spatially and Temporally Enhanced Knowledge Base from Wikipedia
J. Hoffart, F. M. Suchanek, K. Berberich and G. Weikum
Technical Report, 2010
J. Hoffart, F. M. Suchanek, K. Berberich and G. Weikum
Technical Report, 2010
Abstract
We present YAGO2, an extension of the YAGO knowledge base, in which entities,
facts, and events are anchored in both time and space. YAGO2 is built
automatically from Wikipedia, GeoNames, and WordNet. It contains 80 million
facts about 9.8 million entities. Human evaluation confirmed an accuracy of
95\% of the facts in YAGO2. In this paper, we present the extraction
methodology, the integration of the spatio-temporal dimension, and our
knowledge representation SPOTL, an extension of the original SPO-triple model
to time and space.
Export
BibTeX
@techreport{Hoffart2010,
TITLE = {{YAGO}2: A Spatially and Temporally Enhanced Knowledge Base from {Wikipedia}},
AUTHOR = {Hoffart, Johannes and Suchanek, Fabian M. and Berberich, Klaus and Weikum, Gerhard},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2010-5-007},
NUMBER = {MPI-I-2010-5-007},
LOCALID = {Local-ID: C1256DBF005F876D-37A86CDFCE56B71DC125784800386E6A-Hoffart2010},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2010},
DATE = {2010},
ABSTRACT = {We present YAGO2, an extension of the YAGO knowledge base, in which entities, facts, and events are anchored in both time and space. YAGO2 is built automatically from Wikipedia, GeoNames, and WordNet. It contains 80 million facts about 9.8 million entities. Human evaluation confirmed an accuracy of 95\% of the facts in YAGO2. In this paper, we present the extraction methodology, the integration of the spatio-temporal dimension, and our knowledge representation SPOTL, an extension of the original SPO-triple model to time and space.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Hoffart, Johannes
%A Suchanek, Fabian M.
%A Berberich, Klaus
%A Weikum, Gerhard
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
%T YAGO2: A Spatially and Temporally Enhanced Knowledge Base from Wikipedia :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-155B-A
%F EDOC: 536412
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2010-5-007
%F OTHER: Local-ID: C1256DBF005F876D-37A86CDFCE56B71DC125784800386E6A-Hoffart2010
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2010
%P 55 p.
%X We present YAGO2, an extension of the YAGO knowledge base, in which entities,
facts, and events are anchored in both time and space. YAGO2 is built
automatically from Wikipedia, GeoNames, and WordNet. It contains 80 million
facts about 9.8 million entities. Human evaluation confirmed an accuracy of
95\% of the facts in YAGO2. In this paper, we present the extraction
methodology, the integration of the spatio-temporal dimension, and our
knowledge representation SPOTL, an extension of the original SPO-triple model
to time and space.
%B Research Report
Maximum Cardinality Popular Matchings in Strict Two-sided Preference Lists
C.-C. Huang and T. Kavitha
Technical Report, 2010
C.-C. Huang and T. Kavitha
Technical Report, 2010
Abstract
We consider the problem of computing a maximum cardinality {\em popular}
matching in a bipartite
graph $G = (\A\cup\B, E)$ where each vertex $u \in \A\cup\B$ ranks its
neighbors in a
strict order of preference. This is the same as an instance of the {\em
stable marriage}
problem with incomplete lists.
A matching $M^*$ is said to be popular if there is no matching $M$ such
that more vertices are better off in $M$ than in $M^*$.
\smallskip
Popular matchings have been extensively studied in the case of one-sided
preference lists, i.e.,
only vertices of $\A$ have preferences over their neighbors while
vertices in $\B$ have no
preferences; polynomial time algorithms
have been shown here to determine if a given instance admits a popular
matching
or not and if so, to compute one with maximum cardinality. It has very
recently
been shown that for two-sided preference lists, the problem of
determining if a given instance
admits a popular matching or not is NP-complete. However this hardness
result
assumes that preference lists have {\em ties}.
When preference lists are {\em strict}, it is easy to
show that popular matchings always exist since stable matchings always
exist and they are popular.
But the
complexity of computing a maximum cardinality popular matching was
unknown. In this paper
we show an $O(mn)$ algorithm for this problem, where $n = |\A| + |\B|$ and
$m = |E|$.
Export
BibTeX
@techreport{HuangKavitha2010,
TITLE = {Maximum Cardinality Popular Matchings in Strict Two-sided Preference Lists},
AUTHOR = {Huang, Chien-Chung and Kavitha, Telikepalli},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2010-1-001},
NUMBER = {MPI-I-2010-1-001},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2010},
DATE = {2010},
ABSTRACT = {We consider the problem of computing a maximum cardinality {\em popular} matching in a bipartite graph $G = (\A\cup\B, E)$ where each vertex $u \in \A\cup\B$ ranks its neighbors in a strict order of preference. This is the same as an instance of the {\em stable marriage} problem with incomplete lists. A matching $M^*$ is said to be popular if there is no matching $M$ such that more vertices are better off in $M$ than in $M^*$. \smallskip Popular matchings have been extensively studied in the case of one-sided preference lists, i.e., only vertices of $\A$ have preferences over their neighbors while vertices in $\B$ have no preferences; polynomial time algorithms have been shown here to determine if a given instance admits a popular matching or not and if so, to compute one with maximum cardinality. It has very recently been shown that for two-sided preference lists, the problem of determining if a given instance admits a popular matching or not is NP-complete. However this hardness result assumes that preference lists have {\em ties}. When preference lists are {\em strict}, it is easy to show that popular matchings always exist since stable matchings always exist and they are popular. But the complexity of computing a maximum cardinality popular matching was unknown. In this paper we show an $O(mn)$ algorithm for this problem, where $n = |\A| + |\B|$ and $m = |E|$.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Huang, Chien-Chung
%A Kavitha, Telikepalli
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
External Organizations
%T Maximum Cardinality Popular Matchings in Strict Two-sided Preference Lists :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6668-9
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2010-1-001
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2010
%P 17 p.
%X We consider the problem of computing a maximum cardinality {\em popular}
matching in a bipartite
graph $G = (\A\cup\B, E)$ where each vertex $u \in \A\cup\B$ ranks its
neighbors in a
strict order of preference. This is the same as an instance of the {\em
stable marriage}
problem with incomplete lists.
A matching $M^*$ is said to be popular if there is no matching $M$ such
that more vertices are better off in $M$ than in $M^*$.
\smallskip
Popular matchings have been extensively studied in the case of one-sided
preference lists, i.e.,
only vertices of $\A$ have preferences over their neighbors while
vertices in $\B$ have no
preferences; polynomial time algorithms
have been shown here to determine if a given instance admits a popular
matching
or not and if so, to compute one with maximum cardinality. It has very
recently
been shown that for two-sided preference lists, the problem of
determining if a given instance
admits a popular matching or not is NP-complete. However this hardness
result
assumes that preference lists have {\em ties}.
When preference lists are {\em strict}, it is easy to
show that popular matchings always exist since stable matchings always
exist and they are popular.
But the
complexity of computing a maximum cardinality popular matching was
unknown. In this paper
we show an $O(mn)$ algorithm for this problem, where $n = |\A| + |\B|$ and
$m = |E|$.
%B Research Report
On Hierarchical Reasoning in Combinations of Theories
C. Ihlemann and V. Sofronie-Stokkermans
Technical Report, 2010a
C. Ihlemann and V. Sofronie-Stokkermans
Technical Report, 2010a
Abstract
In this paper we study theory combinations over non-disjoint
signatures in which hierarchical and modular reasoning is
possible. We use a notion of locality of a theory extension
parameterized by a closure operator on ground terms.
We give criteria for recognizing these types of theory
extensions. We then show that combinations of extensions of
theories which are local in this extended sense have also a
locality property and hence allow modular and hierarchical
reasoning. We thus obtain parameterized decidability and
complexity results for many (combinations of) theories
important in verification.
Export
BibTeX
@techreport{Ihlemann-Sofronie-Stokkermans-atr60-2010,
TITLE = {On Hierarchical Reasoning in Combinations of Theories},
AUTHOR = {Ihlemann, Carsten and Sofronie-Stokkermans, Viorica},
LANGUAGE = {eng},
ISSN = {1860-9821},
NUMBER = {ATR60},
LOCALID = {Local-ID: C125716C0050FB51-8E77AFE123C76116C1257782003FEBDA-Ihlemann-Sofronie-Stokkermans-atr60-2010},
INSTITUTION = {SFB/TR 14 AVACS},
YEAR = {2010},
DATE = {2010},
ABSTRACT = {In this paper we study theory combinations over non-disjoint signatures in which hierarchical and modular reasoning is possible. We use a notion of locality of a theory extension parameterized by a closure operator on ground terms. We give criteria for recognizing these types of theory extensions. We then show that combinations of extensions of theories which are local in this extended sense have also a locality property and hence allow modular and hierarchical reasoning. We thus obtain parameterized decidability and complexity results for many (combinations of) theories important in verification.},
TYPE = {AVACS Technical Report},
VOLUME = {60},
}
Endnote
%0 Report
%A Ihlemann, Carsten
%A Sofronie-Stokkermans, Viorica
%+ Automation of Logic, MPI for Informatics, Max Planck Society
Automation of Logic, MPI for Informatics, Max Planck Society
%T On Hierarchical Reasoning in Combinations of Theories :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-14B7-2
%F EDOC: 536339
%F OTHER: Local-ID: C125716C0050FB51-8E77AFE123C76116C1257782003FEBDA-Ihlemann-Sofronie-Stokkermans-atr60-2010
%Y SFB/TR 14 AVACS
%D 2010
%P 26 p.
%X In this paper we study theory combinations over non-disjoint
signatures in which hierarchical and modular reasoning is
possible. We use a notion of locality of a theory extension
parameterized by a closure operator on ground terms.
We give criteria for recognizing these types of theory
extensions. We then show that combinations of extensions of
theories which are local in this extended sense have also a
locality property and hence allow modular and hierarchical
reasoning. We thus obtain parameterized decidability and
complexity results for many (combinations of) theories
important in verification.
%B AVACS Technical Report
%N 60
%@ false
%U http://www.avacs.org/Publikationen/Open/avacs_technical_report_060.pdf
System Description: H-PILoT (Version 1.9)
C. Ihlemann and V. Sofronie-Stokkermans
Technical Report, 2010b
C. Ihlemann and V. Sofronie-Stokkermans
Technical Report, 2010b
Abstract
This system description provides an overview of H-PILoT
(Hierarchical Proving by Instantiation in Local Theory
extensions), a program for hierarchical reasoning in
extensions of logical theories.
H-PILoT reduces deduction problems in the theory extension
to deduction problems in the base theory.
Specialized provers and standard SMT solvers can be used
for testing the satisfiability of the formulae obtained
after the reduction. For a certain type of theory extension
(namely for {\em local theory extensions}) this
hierarchical reduction is sound and complete and --
if the formulae obtained this way belong to a fragment
decidable in the base theory -- H-PILoT provides a decision
procedure for testing satisfiability of ground formulae,
and can also be used for model generation.
Export
BibTeX
@techreport{Ihlemann-Sofronie-Stokkermans-atr61-2010,
TITLE = {System Description: H-{PILoT} (Version 1.9)},
AUTHOR = {Ihlemann, Carsten and Sofronie-Stokkermans, Viorica},
LANGUAGE = {eng},
ISSN = {1860-9821},
NUMBER = {ATR61},
LOCALID = {Local-ID: C125716C0050FB51-5F53450808E13ED9C125778C00501AE6-Ihlemann-Sofronie-Stokkermans-atr61-2010},
INSTITUTION = {SFB/TR 14 AVACS},
YEAR = {2010},
DATE = {2010},
ABSTRACT = {This system description provides an overview of H-PILoT (Hierarchical Proving by Instantiation in Local Theory extensions), a program for hierarchical reasoning in extensions of logical theories. H-PILoT reduces deduction problems in the theory extension to deduction problems in the base theory. Specialized provers and standard SMT solvers can be used for testing the satisfiability of the formulae obtained after the reduction. For a certain type of theory extension (namely for {\em local theory extensions}) this hierarchical reduction is sound and complete and -- if the formulae obtained this way belong to a fragment decidable in the base theory -- H-PILoT provides a decision procedure for testing satisfiability of ground formulae, and can also be used for model generation.},
TYPE = {AVACS Technical Report},
VOLUME = {61},
}
Endnote
%0 Report
%A Ihlemann, Carsten
%A Sofronie-Stokkermans, Viorica
%+ Automation of Logic, MPI for Informatics, Max Planck Society
Automation of Logic, MPI for Informatics, Max Planck Society
%T System Description: H-PILoT (Version 1.9) :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-14C5-2
%F EDOC: 536340
%F OTHER: Local-ID: C125716C0050FB51-5F53450808E13ED9C125778C00501AE6-Ihlemann-Sofronie-Stokkermans-atr61-2010
%Y SFB/TR 14 AVACS
%D 2010
%P 45 p.
%X This system description provides an overview of H-PILoT
(Hierarchical Proving by Instantiation in Local Theory
extensions), a program for hierarchical reasoning in
extensions of logical theories.
H-PILoT reduces deduction problems in the theory extension
to deduction problems in the base theory.
Specialized provers and standard SMT solvers can be used
for testing the satisfiability of the formulae obtained
after the reduction. For a certain type of theory extension
(namely for {\em local theory extensions}) this
hierarchical reduction is sound and complete and --
if the formulae obtained this way belong to a fragment
decidable in the base theory -- H-PILoT provides a decision
procedure for testing satisfiability of ground formulae,
and can also be used for model generation.
%B AVACS Technical Report
%N 61
%@ false
Query Evaluation with Asymmetric Web Services
N. Preda, F. Suchanek, W. Yuan and G. Weikum
Technical Report, 2010
N. Preda, F. Suchanek, W. Yuan and G. Weikum
Technical Report, 2010
Export
BibTeX
@techreport{PredaSuchanekYuanWeikum2011,
TITLE = {Query Evaluation with Asymmetric Web Services},
AUTHOR = {Preda, Nicoleta and Suchanek, F. and Yuan, Wenjun and Weikum, Gerhard},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2010-5-004},
NUMBER = {MPI-I-2010-5-004},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2010},
DATE = {2010},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Preda, Nicoleta
%A Suchanek, F.
%A Yuan, Wenjun
%A Weikum, Gerhard
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
%T Query Evaluation with Asymmetric Web Services :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-659D-C
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2010-5-004
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2010
%P 31 p.
%B Research Report
Bonsai: Growing Interesting Small Trees
S. Seufert, S. Bedathur, J. Mestre and G. Weikum
Technical Report, 2010
S. Seufert, S. Bedathur, J. Mestre and G. Weikum
Technical Report, 2010
Abstract
Graphs are increasingly used to model a variety of loosely structured data such
as biological or social networks and entity-relationships. Given this profusion
of large-scale graph data, efficiently discovering interesting substructures
buried
within is essential. These substructures are typically used in determining
subsequent actions, such as conducting visual analytics by humans or designing
expensive biomedical experiments. In such settings, it is often desirable to
constrain the size of the discovered results in order to directly control the
associated costs. In this report, we address the problem of finding
cardinality-constrained connected
subtrees from large node-weighted graphs that maximize the sum of weights of
selected nodes. We provide an efficient constant-factor approximation algorithm
for this strongly NP-hard problem. Our techniques can be applied in a wide
variety
of application settings, for example in differential analysis of graphs, a
problem that frequently arises in bioinformatics but also has applications on
the web.
Export
BibTeX
@techreport{Seufert2010a,
TITLE = {Bonsai: Growing Interesting Small Trees},
AUTHOR = {Seufert, Stephan and Bedathur, Srikanta and Mestre, Julian and Weikum, Gerhard},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2010-5-005},
NUMBER = {MPI-I-2010-5-005},
LOCALID = {Local-ID: C1256DBF005F876D-BC73995718B48415C12577E600538833-Seufert2010a},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2010},
DATE = {2010},
ABSTRACT = {Graphs are increasingly used to model a variety of loosely structured data such as biological or social networks and entity-relationships. Given this profusion of large-scale graph data, efficiently discovering interesting substructures buried within is essential. These substructures are typically used in determining subsequent actions, such as conducting visual analytics by humans or designing expensive biomedical experiments. In such settings, it is often desirable to constrain the size of the discovered results in order to directly control the associated costs. In this report, we address the problem of finding cardinality-constrained connected subtrees from large node-weighted graphs that maximize the sum of weights of selected nodes. We provide an efficient constant-factor approximation algorithm for this strongly NP-hard problem. Our techniques can be applied in a wide variety of application settings, for example in differential analysis of graphs, a problem that frequently arises in bioinformatics but also has applications on the web.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Seufert, Stephan
%A Bedathur, Srikanta
%A Mestre, Julian
%A Weikum, Gerhard
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
%T Bonsai: Growing Interesting Small Trees :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-14D8-7
%F EDOC: 536383
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2010-5-005
%F OTHER: Local-ID: C1256DBF005F876D-BC73995718B48415C12577E600538833-Seufert2010a
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2010
%P 32 p.
%X Graphs are increasingly used to model a variety of loosely structured data such
as biological or social networks and entity-relationships. Given this profusion
of large-scale graph data, efficiently discovering interesting substructures
buried
within is essential. These substructures are typically used in determining
subsequent actions, such as conducting visual analytics by humans or designing
expensive biomedical experiments. In such settings, it is often desirable to
constrain the size of the discovered results in order to directly control the
associated costs. In this report, we address the problem of finding
cardinality-constrained connected
subtrees from large node-weighted graphs that maximize the sum of weights of
selected nodes. We provide an efficient constant-factor approximation algorithm
for this strongly NP-hard problem. Our techniques can be applied in a wide
variety
of application settings, for example in differential analysis of graphs, a
problem that frequently arises in bioinformatics but also has applications on
the web.
%B Research Report
On the saturation of YAGO
M. Suda, C. Weidenbach and P. Wischnewski
Technical Report, 2010
M. Suda, C. Weidenbach and P. Wischnewski
Technical Report, 2010
Export
BibTeX
@techreport{SudaWischnewski2010,
TITLE = {On the saturation of {YAGO}},
AUTHOR = {Suda, Martin and Weidenbach, Christoph and Wischnewski, Patrick},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2010-RG1-001},
NUMBER = {MPI-I-2010-RG1-001},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2010},
DATE = {2010},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Suda, Martin
%A Weidenbach, Christoph
%A Wischnewski, Patrick
%+ Automation of Logic, MPI for Informatics, Max Planck Society
Automation of Logic, MPI for Informatics, Max Planck Society
Automation of Logic, MPI for Informatics, Max Planck Society
%T On the saturation of YAGO :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6584-2
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2010-RG1-001
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2010
%P 50 p.
%B Research Report
A Bayesian Approach to Manifold Topology Reconstruction
A. Tevs, M. Wand, I. Ihrke and H.-P. Seidel
Technical Report, 2010
A. Tevs, M. Wand, I. Ihrke and H.-P. Seidel
Technical Report, 2010
Abstract
In this paper, we investigate the problem of statistical reconstruction of
piecewise linear manifold topology. Given a noisy, probably undersampled point
cloud from a one- or two-manifold, the algorithm reconstructs an approximated
most likely mesh in a Bayesian sense from which the sample might have been
taken. We incorporate statistical priors on the object geometry to improve the
reconstruction quality if additional knowledge about the class of original
shapes is available. The priors can be formulated analytically or learned from
example geometry with known manifold tessellation. The statistical objective
function is approximated by a linear programming / integer programming problem,
for which a globally optimal solution is found. We apply the algorithm to a set
of 2D and 3D reconstruction examples, demonstrating that a statistics-based
manifold reconstruction is feasible, and still yields plausible results in
situations where sampling conditions are violated.
Export
BibTeX
@techreport{TevsTechReport2009,
TITLE = {A Bayesian Approach to Manifold Topology Reconstruction},
AUTHOR = {Tevs, Art and Wand, Michael and Ihrke, Ivo and Seidel, Hans-Peter},
LANGUAGE = {eng},
ISSN = {0946-011X},
NUMBER = {MPI-I-2009-4-002},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2010},
DATE = {2010},
ABSTRACT = {In this paper, we investigate the problem of statistical reconstruction of piecewise linear manifold topology. Given a noisy, probably undersampled point cloud from a one- or two-manifold, the algorithm reconstructs an approximated most likely mesh in a Bayesian sense from which the sample might have been taken. We incorporate statistical priors on the object geometry to improve the reconstruction quality if additional knowledge about the class of original shapes is available. The priors can be formulated analytically or learned from example geometry with known manifold tessellation. The statistical objective function is approximated by a linear programming / integer programming problem, for which a globally optimal solution is found. We apply the algorithm to a set of 2D and 3D reconstruction examples, demonstrating that a statistics-based manifold reconstruction is feasible, and still yields plausible results in situations where sampling conditions are violated.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Tevs, Art
%A Wand, Michael
%A Ihrke, Ivo
%A Seidel, Hans-Peter
%+ Computer Graphics, MPI for Informatics, Max Planck Society
International Max Planck Research School, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Graphics - Optics - Vision, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T A Bayesian Approach to Manifold Topology Reconstruction :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-1722-7
%F EDOC: 537282
%@ 0946-011X
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2010
%P 23 p.
%X In this paper, we investigate the problem of statistical reconstruction of
piecewise linear manifold topology. Given a noisy, probably undersampled point
cloud from a one- or two-manifold, the algorithm reconstructs an approximated
most likely mesh in a Bayesian sense from which the sample might have been
taken. We incorporate statistical priors on the object geometry to improve the
reconstruction quality if additional knowledge about the class of original
shapes is available. The priors can be formulated analytically or learned from
example geometry with known manifold tessellation. The statistical objective
function is approximated by a linear programming / integer programming problem,
for which a globally optimal solution is found. We apply the algorithm to a set
of 2D and 3D reconstruction examples, demonstrating that a statistics-based
manifold reconstruction is feasible, and still yields plausible results in
situations where sampling conditions are violated.
%B Research Report
URDF: Efficient Reasoning in Uncertain RDF Knowledge Bases with Soft and Hard Rules
M. Theobald, M. Sozio, F. Suchanek and N. Nakashole
Technical Report, 2010
M. Theobald, M. Sozio, F. Suchanek and N. Nakashole
Technical Report, 2010
Abstract
We present URDF, an efficient reasoning framework for graph-based, nonschematic
RDF knowledge bases and SPARQL-like queries. URDF augments first-order reasoning
by a combination of soft rules, with Datalog-style recursive implications, and
hard rules, in the shape of mutually exclusive sets of facts. It incorporates
the common possible worlds semantics with independent base facts, as is
prevalent in most probabilistic database approaches, but also supports
semantically more expressive, probabilistic first-order representations such as
Markov Logic Networks. As knowledge extraction on the Web is often an iterative
(and inherently noisy) process, URDF explicitly targets the resolution of
inconsistencies between the underlying RDF base facts and the inference rules.
The core of our approach is a novel and efficient approximation algorithm for a
generalized version of the Weighted MAX-SAT problem, allowing us to dynamically
resolve such inconsistencies directly at query processing time. Our MAX-SAT
algorithm has a worst-case running time of O(|C| · |S|), where |C| and |S|
denote the number of facts in grounded soft and hard rules, respectively, and
it comes with tight approximation guarantees with respect to the shape of the
rules and the distribution of confidences of the facts they contain.
Experiments over various benchmark settings confirm the high robustness and
significantly improved runtime of our reasoning framework in comparison to
state-of-the-art techniques for MCMC sampling such as MAP inference and MC-SAT.
Export
BibTeX
@techreport{urdf-tr-2010,
TITLE = {{URDF}: Efficient Reasoning in Uncertain {RDF} Knowledge Bases with Soft and Hard Rules},
AUTHOR = {Theobald, Martin and Sozio, Mauro and Suchanek, Fabian and Nakashole, Ndapandula},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2010-5-002},
NUMBER = {MPI-I-2010-5-002},
LOCALID = {Local-ID: C1256DBF005F876D-4F6C2407136ECAA6C125770E003634BE-urdf-tr-2010},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2010},
DATE = {2010},
ABSTRACT = {We present URDF, an efficient reasoning framework for graph-based, nonschematic RDF knowledge bases and SPARQL-like queries. URDF augments first-order reasoning by a combination of soft rules, with Datalog-style recursive implications, and hard rules, in the shape of mutually exclusive sets of facts. It incorporates the common possible worlds semantics with independent base facts, as is prevalent in most probabilistic database approaches, but also supports semantically more expressive, probabilistic first-order representations such as Markov Logic Networks. As knowledge extraction on the Web is often an iterative (and inherently noisy) process, URDF explicitly targets the resolution of inconsistencies between the underlying RDF base facts and the inference rules. The core of our approach is a novel and efficient approximation algorithm for a generalized version of the Weighted MAX-SAT problem, allowing us to dynamically resolve such inconsistencies directly at query processing time. Our MAX-SAT algorithm has a worst-case running time of O(|C| · |S|), where |C| and |S| denote the number of facts in grounded soft and hard rules, respectively, and it comes with tight approximation guarantees with respect to the shape of the rules and the distribution of confidences of the facts they contain. Experiments over various benchmark settings confirm the high robustness and significantly improved runtime of our reasoning framework in comparison to state-of-the-art techniques for MCMC sampling such as MAP inference and MC-SAT.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Theobald, Martin
%A Sozio, Mauro
%A Suchanek, Fabian
%A Nakashole, Ndapandula
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
%T URDF: Efficient Reasoning in Uncertain RDF Knowledge Bases with Soft and Hard Rules :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-1556-3
%F EDOC: 536366
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2010-5-002
%F OTHER: Local-ID: C1256DBF005F876D-4F6C2407136ECAA6C125770E003634BE-urdf-tr-2010
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2010
%P 48 p.
%X We present URDF, an efficient reasoning framework for graph-based,
nonschematic RDF knowledge bases and SPARQL-like queries. URDF augments
first-order reasoning by a combination of soft rules, with Datalog-style
recursive implications, and hard rules, in the shape of mutually exclusive sets
of facts. It incorporates the common possible worlds semantics with independent
base facts, as is prevalent in most probabilistic database approaches, but also
supports semantically more expressive, probabilistic first-order
representations such as Markov Logic Networks. As knowledge extraction on the
Web is often an iterative (and inherently noisy) process, URDF explicitly
targets the resolution of inconsistencies between the underlying RDF base facts
and the inference rules. The core of our approach is a novel and efficient
approximation algorithm for a generalized version of the Weighted MAX-SAT
problem, allowing us to dynamically resolve such inconsistencies directly at
query processing time. Our MAX-SAT algorithm has a worst-case running time of
O(|C| · |S|), where |C| and |S| denote the number of facts in grounded soft and
hard rules, respectively, and it comes with tight approximation guarantees with
respect to the shape of the rules and the distribution of confidences of the
facts they contain. Experiments over various benchmark settings confirm the
high robustness and significantly improved runtime of our reasoning framework
in comparison to state-of-the-art techniques for MCMC sampling such as MAP
inference and MC-SAT.
%B Research Report
2009
Scalable Phrase Mining for Ad-hoc Text Analytics
S. Bedathur, K. Berberich, J. Dittrich, N. Mamoulis and G. Weikum
Technical Report, 2009
S. Bedathur, K. Berberich, J. Dittrich, N. Mamoulis and G. Weikum
Technical Report, 2009
Export
BibTeX
@techreport{BedathurBerberichDittrichMamoulisWeikum2009,
TITLE = {Scalable Phrase Mining for Ad-hoc Text Analytics},
AUTHOR = {Bedathur, Srikanta and Berberich, Klaus and Dittrich, Jens and Mamoulis, Nikos and Weikum, Gerhard},
LANGUAGE = {eng},
ISSN = {0946-011X},
NUMBER = {MPI-I-2009-5-006},
LOCALID = {Local-ID: C1256DBF005F876D-4E35301DBC58B9F7C12575A00044A942-TechReport-BBDMW2009},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2009},
DATE = {2009},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Bedathur, Srikanta
%A Berberich, Klaus
%A Dittrich, Jens
%A Mamoulis, Nikos
%A Weikum, Gerhard
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
%T Scalable Phrase Mining for Ad-hoc Text Analytics :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-194A-0
%F EDOC: 520425
%@ 0946-011X
%F OTHER: Local-ID: C1256DBF005F876D-4E35301DBC58B9F7C12575A00044A942-TechReport-BBDMW2009
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2009
%P 41 p.
%B Research Report
Generalized intrinsic symmetry detection
A. Berner, M. Bokeloh, M. Wand, A. Schilling and H.-P. Seidel
Technical Report, 2009
A. Berner, M. Bokeloh, M. Wand, A. Schilling and H.-P. Seidel
Technical Report, 2009
Abstract
In this paper, we address the problem of detecting partial symmetries in
3D objects. In contrast to previous work, our algorithm is able to match
deformed symmetric parts: We first develop an algorithm for the case of
approximately isometric deformations, based on matching graphs of
surface feature lines that are annotated with intrinsic geometric
properties. The sensitivity to non-isometry is controlled by tolerance
parameters for each such annotation. Using large tolerance values for
some of these annotations and a robust matching of the graph topology
yields a more general symmetry detection algorithm that can detect
similarities in structures that have undergone strong deformations. This
approach for the first time allows for detecting partial intrinsic as
well as more general, non-isometric symmetries. We evaluate the
recognition performance of our technique for a number of synthetic and
real-world scanner data sets.
Export
BibTeX
@techreport{BernerBokelohWandSchillingSeidel2009,
TITLE = {Generalized intrinsic symmetry detection},
AUTHOR = {Berner, Alexander and Bokeloh, Martin and Wand, Michael and Schilling, Andreas and Seidel, Hans-Peter},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2009-4-005},
NUMBER = {MPI-I-2009-4-005},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2009},
DATE = {2009},
ABSTRACT = {In this paper, we address the problem of detecting partial symmetries in 3D objects. In contrast to previous work, our algorithm is able to match deformed symmetric parts: We first develop an algorithm for the case of approximately isometric deformations, based on matching graphs of surface feature lines that are annotated with intrinsic geometric properties. The sensitivity to non-isometry is controlled by tolerance parameters for each such annotation. Using large tolerance values for some of these annotations and a robust matching of the graph topology yields a more general symmetry detection algorithm that can detect similarities in structures that have undergone strong deformations. This approach for the first time allows for detecting partial intrinsic as well as more general, non-isometric symmetries. We evaluate the recognition performance of our technique for a number of synthetic and real-world scanner data sets.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Berner, Alexander
%A Bokeloh, Martin
%A Wand, Michael
%A Schilling, Andreas
%A Seidel, Hans-Peter
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
External Organizations
External Organizations
Computer Graphics, MPI for Informatics, Max Planck Society
%T Generalized intrinsic symmetry detection :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-666B-3
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2009-4-005
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2009
%P 33 p.
%X In this paper, we address the problem of detecting partial symmetries in
3D objects. In contrast to previous work, our algorithm is able to match
deformed symmetric parts: We first develop an algorithm for the case of
approximately isometric deformations, based on matching graphs of
surface feature lines that are annotated with intrinsic geometric
properties. The sensitivity to non-isometry is controlled by tolerance
parameters for each such annotation. Using large tolerance values for
some of these annotations and a robust matching of the graph topology
yields a more general symmetry detection algorithm that can detect
similarities in structures that have undergone strong deformations. This
approach for the first time allows for detecting partial intrinsic as
well as more general, non-isometric symmetries. We evaluate the
recognition performance of our technique for a number of synthetic and
real-world scanner data sets.
%B Research Report / Max-Planck-Institut für Informatik
Towards a Universal Wordnet by Learning from Combined Evidence
G. de Melo and G. Weikum
Technical Report, 2009
G. de Melo and G. Weikum
Technical Report, 2009
Abstract
Lexical databases are invaluable sources of knowledge about words and their
meanings, with numerous applications in areas like NLP, IR, and AI.
We propose a methodology for the automatic construction of a large-scale
multilingual lexical database where words of many languages are hierarchically
organized in terms of their meanings and their semantic relations to other
words. This resource is bootstrapped from WordNet, a well-known
English-language resource. Our approach extends WordNet with around 1.5 million
meaning links for 800,000 words in over 200 languages, drawing on evidence
extracted from a variety of resources including existing (monolingual)
wordnets, (mostly bilingual) translation dictionaries, and parallel corpora.
Graph-based scoring functions and statistical learning techniques are used to
iteratively integrate this information and build an output graph. Experiments
show that this wordnet has a high level of precision and coverage, and that it
can be useful in applied tasks such as cross-lingual text classification.
Export
BibTeX
@techreport{deMeloWeikum2009,
TITLE = {Towards a Universal Wordnet by Learning from Combined Evidence},
AUTHOR = {de Melo, Gerard and Weikum, Gerhard},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2009-5-005},
NUMBER = {MPI-I-2009-5-005},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2009},
DATE = {2009},
ABSTRACT = {Lexical databases are invaluable sources of knowledge about words and their meanings, with numerous applications in areas like NLP, IR, and AI. We propose a methodology for the automatic construction of a large-scale multilingual lexical database where words of many languages are hierarchically organized in terms of their meanings and their semantic relations to other words. This resource is bootstrapped from WordNet, a well-known English-language resource. Our approach extends WordNet with around 1.5 million meaning links for 800,000 words in over 200 languages, drawing on evidence extracted from a variety of resources including existing (monolingual) wordnets, (mostly bilingual) translation dictionaries, and parallel corpora. Graph-based scoring functions and statistical learning techniques are used to iteratively integrate this information and build an output graph. Experiments show that this wordnet has a high level of precision and coverage, and that it can be useful in applied tasks such as cross-lingual text classification.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A de Melo, Gerard
%A Weikum, Gerhard
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
%T Towards a Universal Wordnet by Learning from Combined Evidence :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-665C-5
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2009-5-005
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2009
%P 32 p.
%X Lexical databases are invaluable sources of knowledge about words and their
meanings, with numerous applications in areas like NLP, IR, and AI.
We propose a methodology for the automatic construction of a large-scale
multilingual lexical database where words of many languages are hierarchically
organized in terms of their meanings and their semantic relations to other
words. This resource is bootstrapped from WordNet, a well-known
English-language resource. Our approach extends WordNet with around 1.5 million
meaning links for 800,000 words in over 200 languages, drawing on evidence
extracted from a variety of resources including existing (monolingual)
wordnets, (mostly bilingual) translation dictionaries, and parallel corpora.
Graph-based scoring functions and statistical learning techniques are used to
iteratively integrate this information and build an output graph. Experiments
show that this wordnet has a high level of precision and coverage, and that it
can be useful in applied tasks such as cross-lingual text classification.
%B Research Report
A shaped temporal filter camera
M. Fuchs, T. Chen, O. Wang, R. Raskar, H. P. A. Lensch and H.-P. Seidel
Technical Report, 2009
M. Fuchs, T. Chen, O. Wang, R. Raskar, H. P. A. Lensch and H.-P. Seidel
Technical Report, 2009
Export
BibTeX
@techreport{FuchsChenWangRaskarLenschSeidel2009,
TITLE = {A shaped temporal filter camera},
AUTHOR = {Fuchs, Martin and Chen, Tongbo and Wang, Oliver and Raskar, Ramesh and Lensch, Hendrik P. A. and Seidel, Hans-Peter},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2009-4-003},
NUMBER = {MPI-I-2009-4-003},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2009},
DATE = {2009},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Fuchs, Martin
%A Chen, Tongbo
%A Wang, Oliver
%A Raskar, Ramesh
%A Lensch, Hendrik P. A.
%A Seidel, Hans-Peter
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
External Organizations
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T A shaped temporal filter camera :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-666E-E
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2009-4-003
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2009
%P 25 p.
%B Research Report / Max-Planck-Institut für Informatik
MPI Informatics building model as data for your research
V. Havran, J. Zajac, J. Drahokoupil and H.-P. Seidel
Technical Report, 2009
V. Havran, J. Zajac, J. Drahokoupil and H.-P. Seidel
Technical Report, 2009
Abstract
In this report we describe the MPI Informatics building
model that provides the data of the Max-Planck-Institut
f\"{u}r Informatik (MPII) building. We present our
motivation for this work and its relationship to
reproducibility of a scientific research. We describe the
dataset acquisition and creation including geometry,
luminaires, surface reflectances, reference photographs etc.
needed to use this model in testing of algorithms. The
created dataset can be used in computer graphics and beyond,
in particular in global illumination algorithms with focus
on realistic and predictive image synthesis. Outside of
computer graphics, it can be used as general source of real
world geometry with an existing counterpart and hence also
suitable for computer vision.
Export
BibTeX
@techreport{HavranZajacDrahokoupilSeidel2009,
TITLE = {{MPI} Informatics building model as data for your research},
AUTHOR = {Havran, Vlastimil and Zajac, Jozef and Drahokoupil, Jiri and Seidel, Hans-Peter},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2009-4-004},
NUMBER = {MPI-I-2009-4-004},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2009},
DATE = {2009},
ABSTRACT = {In this report we describe the MPI Informatics building model, which provides the data of the Max-Planck-Institut f\"{u}r Informatik (MPII) building. We present our motivation for this work and its relationship to the reproducibility of scientific research. We describe the dataset acquisition and creation, including geometry, luminaires, surface reflectances, reference photographs, etc., needed to use this model in the testing of algorithms. The created dataset can be used in computer graphics and beyond, in particular in global illumination algorithms with a focus on realistic and predictive image synthesis. Outside of computer graphics, it can be used as a general source of real-world geometry with an existing counterpart and is hence also suitable for computer vision.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Havran, Vlastimil
%A Zajac, Jozef
%A Drahokoupil, Jiri
%A Seidel, Hans-Peter
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
External Organizations
Computer Graphics, MPI for Informatics, Max Planck Society
%T MPI Informatics building model as data for your research :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6665-F
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2009-4-004
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2009
%P 113 p.
%X In this report we describe the MPI Informatics building model, which
provides the data of the Max-Planck-Institut für Informatik (MPII) building.
We present our motivation for this work and its relationship to the
reproducibility of scientific research. We describe the dataset acquisition
and creation, including geometry, luminaires, surface reflectances, reference
photographs, etc., needed to use this model in the testing of algorithms. The
created dataset can be used in computer graphics and beyond, in particular in
global illumination algorithms with a focus on realistic and predictive image
synthesis. Outside of computer graphics, it can be used as a general source of
real-world geometry with an existing counterpart and is hence also suitable
for computer vision.
%B Research Report / Max-Planck-Institut für Informatik
Deciding the Inductive Validity of Forall Exists* Queries
M. Horbach and C. Weidenbach
Technical Report, 2009a
M. Horbach and C. Weidenbach
Technical Report, 2009a
Abstract
We present a new saturation-based decidability result for inductive validity.
Let $\Sigma$ be a finite signature in which all function symbols are at most
unary and let $N$ be a satisfiable Horn clause set without equality in which
all positive literals are linear.
If $N\cup\{A_1,\ldots,A_n\rightarrow\}$ belongs to a finitely saturating clause
class, then it is decidable whether a sentence of the form $\forall\exists^*
(A_1\wedge\ldots\wedge A_n)$ is valid in the minimal model of $N$.
Export
BibTeX
@techreport{HorbachWeidenbach2009,
TITLE = {Deciding the Inductive Validity of Forall Exists* Queries},
AUTHOR = {Horbach, Matthias and Weidenbach, Christoph},
LANGUAGE = {eng},
NUMBER = {MPI-I-2009-RG1-001},
LOCALID = {Local-ID: C125716C0050FB51-F9BA0666A42B8463C12576AF002882D7-Horbach2009TR1},
YEAR = {2009},
DATE = {2009},
ABSTRACT = {We present a new saturation-based decidability result for inductive validity. Let $\Sigma$ be a finite signature in which all function symbols are at most unary and let $N$ be a satisfiable Horn clause set without equality in which all positive literals are linear. If $N\cup\{A_1,\ldots,A_n\rightarrow\}$ belongs to a finitely saturating clause class, then it is decidable whether a sentence of the form $\forall\exists^* (A_1\wedge\ldots\wedge A_n)$ is valid in the minimal model of $N$.},
}
Endnote
%0 Report
%A Horbach, Matthias
%A Weidenbach, Christoph
%+ Automation of Logic, MPI for Informatics, Max Planck Society
Automation of Logic, MPI for Informatics, Max Planck Society
%T Deciding the Inductive Validity of Forall Exists* Queries :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-1A51-3
%F EDOC: 521099
%F OTHER: Local-ID: C125716C0050FB51-F9BA0666A42B8463C12576AF002882D7-Horbach2009TR1
%D 2009
%X We present a new saturation-based decidability result for inductive validity.
Let $\Sigma$ be a finite signature in which all function symbols are at most
unary and let $N$ be a satisfiable Horn clause set without equality in which
all positive literals are linear.
If $N\cup\{A_1,\ldots,A_n\rightarrow\}$ belongs to a finitely saturating clause
class, then it is decidable whether a sentence of the form $\forall\exists^*
(A_1\wedge\ldots\wedge A_n)$ is valid in the minimal model of $N$.
Superposition for Fixed Domains
M. Horbach and C. Weidenbach
Technical Report, 2009b
M. Horbach and C. Weidenbach
Technical Report, 2009b
Abstract
Superposition is an established decision procedure for a variety of first-order
logic theories represented by sets of clauses. A satisfiable theory, saturated
by superposition, implicitly defines a minimal term-generated model for the
theory.
Proving universal properties with respect to a saturated theory directly leads
to a modification of the minimal model's term-generated domain, as new Skolem
functions are introduced. For many applications, this is not desired.
Therefore, we propose the first superposition calculus that can explicitly
represent existentially quantified variables and can thus compute with respect
to a given domain. This calculus is sound and refutationally complete in the
limit for a first-order fixed domain semantics.
For saturated Horn theories and classes of positive formulas, we can even
employ the calculus to prove properties of the minimal model itself, going
beyond the scope of known superposition-based approaches.
Export
BibTeX
@techreport{Horbach2009TR2,
TITLE = {Superposition for Fixed Domains},
AUTHOR = {Horbach, Matthias and Weidenbach, Christoph},
LANGUAGE = {eng},
ISSN = {0946-011X},
NUMBER = {MPI-I-2009-RG1-005},
LOCALID = {Local-ID: C125716C0050FB51-5DDBBB1B134360CFC12576AF0028D299-Horbach2009TR2},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2009},
DATE = {2009},
ABSTRACT = {Superposition is an established decision procedure for a variety of first-order logic theories represented by sets of clauses. A satisfiable theory, saturated by superposition, implicitly defines a minimal term-generated model for the theory. Proving universal properties with respect to a saturated theory directly leads to a modification of the minimal model's term-generated domain, as new Skolem functions are introduced. For many applications, this is not desired. Therefore, we propose the first superposition calculus that can explicitly represent existentially quantified variables and can thus compute with respect to a given domain. This calculus is sound and refutationally complete in the limit for a first-order fixed domain semantics. For saturated Horn theories and classes of positive formulas, we can even employ the calculus to prove properties of the minimal model itself, going beyond the scope of known superposition-based approaches.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Horbach, Matthias
%A Weidenbach, Christoph
%+ Automation of Logic, MPI for Informatics, Max Planck Society
Automation of Logic, MPI for Informatics, Max Planck Society
%T Superposition for Fixed Domains :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-1A71-C
%F EDOC: 521100
%F OTHER: Local-ID: C125716C0050FB51-5DDBBB1B134360CFC12576AF0028D299-Horbach2009TR2
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2009
%P 49 p.
%X Superposition is an established decision procedure for a variety of first-order
logic theories represented by sets of clauses. A satisfiable theory, saturated
by superposition, implicitly defines a minimal term-generated model for the
theory.
Proving universal properties with respect to a saturated theory directly leads
to a modification of the minimal model's term-generated domain, as new Skolem
functions are introduced. For many applications, this is not desired.
Therefore, we propose the first superposition calculus that can explicitly
represent existentially quantified variables and can thus compute with respect
to a given domain. This calculus is sound and refutationally complete in the
limit for a first-order fixed domain semantics.
For saturated Horn theories and classes of positive formulas, we can even
employ the calculus to prove properties of the minimal model itself, going
beyond the scope of known superposition-based approaches.
%B Research Report
%@ false
Decidability Results for Saturation-based Model Building
M. Horbach and C. Weidenbach
Technical Report, 2009c
M. Horbach and C. Weidenbach
Technical Report, 2009c
Abstract
Saturation-based calculi such as superposition can be
successfully instantiated to decision procedures for many decidable
fragments of first-order logic. In case of termination without
generating an empty clause, a saturated clause set implicitly represents
a minimal model for all clauses, based on the underlying term ordering
of the superposition calculus. In general, it is not decidable whether a
ground atom, a clause or even a formula holds in this minimal model of a
satisfiable saturated clause set.
Based on an extension of our superposition calculus for fixed domains
with syntactic disequality constraints in a non-equational setting, we
describe models given by ARM (Atomic Representations of term Models) or
DIG (Disjunctions of Implicit Generalizations) representations as
minimal models of finite saturated clause sets. This allows us to
present several new decidability results for validity in such models.
These results extend in particular the known decidability results for
ARM and DIG representations.
Export
BibTeX
@techreport{HorbachWeidenbach2010,
TITLE = {Decidability Results for Saturation-based Model Building},
AUTHOR = {Horbach, Matthias and Weidenbach, Christoph},
LANGUAGE = {eng},
ISSN = {0946-011X},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2009-RG1-004},
NUMBER = {MPI-I-2009-RG1-004},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2009},
DATE = {2009},
ABSTRACT = {Saturation-based calculi such as superposition can be successfully instantiated to decision procedures for many decidable fragments of first-order logic. In case of termination without generating an empty clause, a saturated clause set implicitly represents a minimal model for all clauses, based on the underlying term ordering of the superposition calculus. In general, it is not decidable whether a ground atom, a clause or even a formula holds in this minimal model of a satisfiable saturated clause set. Based on an extension of our superposition calculus for fixed domains with syntactic disequality constraints in a non-equational setting, we describe models given by ARM (Atomic Representations of term Models) or DIG (Disjunctions of Implicit Generalizations) representations as minimal models of finite saturated clause sets. This allows us to present several new decidability results for validity in such models. These results extend in particular the known decidability results for ARM and DIG representations.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Horbach, Matthias
%A Weidenbach, Christoph
%+ Automation of Logic, MPI for Informatics, Max Planck Society
Automation of Logic, MPI for Informatics, Max Planck Society
%T Decidability Results for Saturation-based Model Building :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6659-B
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2009-RG1-004
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2009
%P 38 p.
%X Saturation-based calculi such as superposition can be
successfully instantiated to decision procedures for many decidable
fragments of first-order logic. In case of termination without
generating an empty clause, a saturated clause set implicitly represents
a minimal model for all clauses, based on the underlying term ordering
of the superposition calculus. In general, it is not decidable whether a
ground atom, a clause or even a formula holds in this minimal model of a
satisfiable saturated clause set.
Based on an extension of our superposition calculus for fixed domains
with syntactic disequality constraints in a non-equational setting, we
describe models given by ARM (Atomic Representations of term Models) or
DIG (Disjunctions of Implicit Generalizations) representations as
minimal models of finite saturated clause sets. This allows us to
present several new decidability results for validity in such models.
These results extend in particular the known decidability results for
ARM and DIG representations.
%B Research Report
%@ false
Acquisition and analysis of bispectral bidirectional reflectance distribution functions
M. B. Hullin, B. Ajdin, J. Hanika, H.-P. Seidel, J. Kautz and H. P. A. Lensch
Technical Report, 2009
Abstract
In fluorescent materials, energy from a certain band of incident wavelengths is
reflected or reradiated at larger wavelengths, i.e. with lower energy per
photon. While fluorescent materials are common in everyday life, they have
received little attention in computer graphics. In particular, no bidirectional
reflectance measurements of fluorescent materials have been available so far. In
this paper, we develop the concept of a bispectral BRDF, which extends the
well-known concept of the bidirectional reflectance distribution function (BRDF)
to account for energy transfer between wavelengths. Using a bidirectional and
bispectral measurement setup, we acquire reflectance data of a variety of
fluorescent materials, including vehicle paints, paper and fabric. We show
bispectral renderings of the measured data and compare them with reduced
versions of the bispectral BRDF, including the traditional RGB vector valued
BRDF. Principal component analysis of the measured data reveals that for some
materials the fluorescent reradiation spectrum changes considerably over the
range of directions. We further show that bispectral BRDFs can be efficiently
acquired using an acquisition strategy based on principal components.
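For one fixed pair of directions and a coarse wavelength discretization, the bispectral BRDF described above can be pictured as a reradiation matrix, with the ordinary (non-fluorescent) BRDF on the diagonal. The following sketch uses made-up numbers purely for illustration; it is not data or code from the report:

```python
import numpy as np

# Illustrative sketch: a bispectral BRDF, discretized over wavelength bands,
# becomes a reradiation matrix M for each (incoming, outgoing) direction pair.
# M[j, i] is the fraction of radiance in incident band i reradiated into
# outgoing band j; a non-fluorescent BRDF would be purely diagonal.
n_bands = 4  # hypothetical coarse bands, short to long wavelengths

# Made-up matrix: diagonal (ordinary) reflection plus energy transferred
# from shorter to longer wavelengths (fluorescence).
M = np.diag([0.2, 0.3, 0.3, 0.2]).astype(float)
M[2, 0] = 0.10  # band 0 (short wavelength) reradiated into band 2 (longer)
M[3, 1] = 0.05

incident = np.array([1.0, 0.5, 0.2, 0.1])  # incident spectrum per band
outgoing = M @ incident                     # reflected + reradiated spectrum

# Fluorescence only moves energy toward longer wavelengths (lower energy
# per photon), so M stays lower triangular here.
assert np.allclose(M, np.tril(M))
print(outgoing)
```

Reduced representations such as an RGB vector-valued BRDF correspond to keeping only the diagonal of this matrix, which is exactly the comparison the abstract mentions.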
Export
BibTeX
@techreport{HullinAjdinHanikaSeidelKautzLensch2009,
TITLE = {Acquisition and analysis of bispectral bidirectional reflectance distribution functions},
AUTHOR = {Hullin, Matthias B. and Ajdin, Boris and Hanika, Johannes and Seidel, Hans-Peter and Kautz, Jan and Lensch, Hendrik P. A.},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2009-4-001},
NUMBER = {MPI-I-2009-4-001},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2009},
DATE = {2009},
ABSTRACT = {In fluorescent materials, energy from a certain band of incident wavelengths is reflected or reradiated at larger wavelengths, i.e. with lower energy per photon. While fluorescent materials are common in everyday life, they have received little attention in computer graphics. In particular, no bidirectional reflectance measurements of fluorescent materials have been available so far. In this paper, we develop the concept of a bispectral BRDF, which extends the well-known concept of the bidirectional reflectance distribution function (BRDF) to account for energy transfer between wavelengths. Using a bidirectional and bispectral measurement setup, we acquire reflectance data of a variety of fluorescent materials, including vehicle paints, paper and fabric. We show bispectral renderings of the measured data and compare them with reduced versions of the bispectral BRDF, including the traditional RGB vector valued BRDF. Principal component analysis of the measured data reveals that for some materials the fluorescent reradiation spectrum changes considerably over the range of directions. We further show that bispectral BRDFs can be efficiently acquired using an acquisition strategy based on principal components.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Hullin, Matthias B.
%A Ajdin, Boris
%A Hanika, Johannes
%A Seidel, Hans-Peter
%A Kautz, Jan
%A Lensch, Hendrik P. A.
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T Acquisition and analysis of bispectral bidirectional reflectance distribution functions :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6671-4
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2009-4-001
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2009
%P 25 p.
%X In fluorescent materials, energy from a certain band of incident wavelengths is
reflected or reradiated at larger wavelengths, i.e. with lower energy per
photon. While fluorescent materials are common in everyday life, they have
received little attention in computer graphics. In particular, no bidirectional
reflectance measurements of fluorescent materials have been available so far. In
this paper, we develop the concept of a bispectral BRDF, which extends the
well-known concept of the bidirectional reflectance distribution function (BRDF)
to account for energy transfer between wavelengths. Using a bidirectional and
bispectral measurement setup, we acquire reflectance data of a variety of
fluorescent materials, including vehicle paints, paper and fabric. We show
bispectral renderings of the measured data and compare them with reduced
versions of the bispectral BRDF, including the traditional RGB vector valued
BRDF. Principal component analysis of the measured data reveals that for some
materials the fluorescent reradiation spectrum changes considerably over the
range of directions. We further show that bispectral BRDFs can be efficiently
acquired using an acquisition strategy based on principal components.
%B Research Report / Max-Planck-Institut für Informatik
MING: Mining Informative Entity-relationship Subgraphs
G. Kasneci, S. Elbassuoni and G. Weikum
Technical Report, 2009
Export
BibTeX
@techreport{KasneciWeikumElbassuoni2009,
TITLE = {{MING}: Mining Informative Entity-relationship Subgraphs},
AUTHOR = {Kasneci, Gjergji and Elbassuoni, Shady and Weikum, Gerhard},
LANGUAGE = {eng},
NUMBER = {MPI-I-2009-5-007},
LOCALID = {Local-ID: C1256DBF005F876D-E977DDB8EDAABEE6C12576320036DBD9-KasneciMING2009},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2009},
DATE = {2009},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Kasneci, Gjergji
%A Elbassuoni, Shady
%A Weikum, Gerhard
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
%T MING: Mining Informative Entity-relationship Subgraphs :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-1932-4
%F EDOC: 520416
%F OTHER: Local-ID: C1256DBF005F876D-E977DDB8EDAABEE6C12576320036DBD9-KasneciMING2009
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2009
%P 32 p.
%B Research Report
The RDF-3X Engine for Scalable Management of RDF Data
T. Neumann and G. Weikum
Technical Report, 2009
Abstract
RDF is a data model for schema-free structured information that is gaining
momentum in the context of Semantic-Web data, life sciences, and also Web 2.0
platforms. The ``pay-as-you-go'' nature of RDF and the flexible
pattern-matching capabilities of its query language SPARQL entail efficiency
and scalability challenges for complex queries including long join paths. This
paper presents the RDF-3X engine, an implementation of SPARQL that achieves
excellent performance by pursuing a RISC-style architecture with streamlined
indexing and query processing.
The physical design is identical for all RDF-3X databases regardless of their
workloads, and completely eliminates the need for index tuning by exhaustive
indexes for all permutations of subject-property-object triples and their
binary and unary projections. These indexes are highly compressed, and the
query processor can aggressively leverage fast merge joins with excellent
performance of processor caches. The query optimizer is able to choose optimal
join orders even for complex queries, with a cost model that includes
statistical synopses for entire join paths. Although RDF-3X is optimized for
queries, it also provides good support for efficient online updates by means of
a staging architecture: direct updates to the main database indexes are
deferred, and instead applied to compact differential indexes which are later
merged into the main indexes in a batched manner.
Experimental studies with several large-scale datasets with more than 50
million RDF triples and benchmark queries that include pattern matching,
many-way star-joins, and long path-joins demonstrate that RDF-3X can outperform
the previously best alternatives by one or two orders of magnitude.
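The exhaustive-indexing idea from the abstract can be sketched in a few lines: keep one sorted index per permutation of (subject, predicate, object), and answer any triple pattern by a prefix scan on the index whose sort order puts the bound components first. This toy sketch (with a linear scan standing in for a binary range search, and none of RDF-3X's compression or join machinery) is only an illustration, not the engine's code:

```python
from itertools import permutations

# Toy triple store with one sorted index per component permutation:
# SPO, SOP, PSO, POS, OSP, OPS.
triples = {
    ("alice", "knows", "bob"),
    ("alice", "likes", "rdf"),
    ("bob", "knows", "carol"),
}

ORDERS = list(permutations((0, 1, 2)))
indexes = {
    order: sorted(tuple(t[i] for i in order) for t in triples)
    for order in ORDERS
}

def scan(pattern):
    """pattern: (s, p, o) with None as wildcard; returns matching triples."""
    bound = [i for i in range(3) if pattern[i] is not None]
    free = [i for i in range(3) if pattern[i] is None]
    order = tuple(bound + free)   # index whose sort key starts with the
    prefix = tuple(pattern[i] for i in bound)  # bound components
    out = []
    for key in indexes[order]:    # a real engine range-scans this index
        if key[: len(prefix)] == prefix:
            t = [None] * 3
            for pos, i in enumerate(order):
                t[i] = key[pos]
            out.append(tuple(t))
    return sorted(out)

print(scan(("alice", None, None)))  # all triples with subject "alice"
print(scan((None, "knows", None)))  # all "knows" edges
```

Because every pattern maps to some fully sorted index, results come back in sorted order, which is what makes the merge joins mentioned in the abstract applicable.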
Export
BibTeX
@techreport{Neumann2009report1,
TITLE = {The {RDF}-3X Engine for Scalable Management of {RDF} Data},
AUTHOR = {Neumann, Thomas and Weikum, Gerhard},
LANGUAGE = {eng},
ISSN = {0946-011X},
NUMBER = {MPI-I-2009-5-003},
LOCALID = {Local-ID: C1256DBF005F876D-AD3DBAFA6FB90DD2C1257593002FF3DF-Neumann2009report1},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2009},
DATE = {2009},
ABSTRACT = {RDF is a data model for schema-free structured information that is gaining momentum in the context of Semantic-Web data, life sciences, and also Web 2.0 platforms. The ``pay-as-you-go'' nature of RDF and the flexible pattern-matching capabilities of its query language SPARQL entail efficiency and scalability challenges for complex queries including long join paths. This paper presents the RDF-3X engine, an implementation of SPARQL that achieves excellent performance by pursuing a RISC-style architecture with streamlined indexing and query processing. The physical design is identical for all RDF-3X databases regardless of their workloads, and completely eliminates the need for index tuning by exhaustive indexes for all permutations of subject-property-object triples and their binary and unary projections. These indexes are highly compressed, and the query processor can aggressively leverage fast merge joins with excellent performance of processor caches. The query optimizer is able to choose optimal join orders even for complex queries, with a cost model that includes statistical synopses for entire join paths. Although RDF-3X is optimized for queries, it also provides good support for efficient online updates by means of a staging architecture: direct updates to the main database indexes are deferred, and instead applied to compact differential indexes which are later merged into the main indexes in a batched manner. Experimental studies with several large-scale datasets with more than 50 million RDF triples and benchmark queries that include pattern matching, many-way star-joins, and long path-joins demonstrate that RDF-3X can outperform the previously best alternatives by one or two orders of magnitude.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Neumann, Thomas
%A Weikum, Gerhard
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
%T The RDF-3X Engine for Scalable Management of RDF Data :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-195A-A
%F EDOC: 520381
%@ 0946-011X
%F OTHER: Local-ID: C1256DBF005F876D-AD3DBAFA6FB90DD2C1257593002FF3DF-Neumann2009report1
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2009
%X RDF is a data model for schema-free structured information that is gaining
momentum in the context of Semantic-Web data, life sciences, and also Web 2.0
platforms. The ``pay-as-you-go'' nature of RDF and the flexible
pattern-matching capabilities of its query language SPARQL entail efficiency
and scalability challenges for complex queries including long join paths. This
paper presents the RDF-3X engine, an implementation of SPARQL that achieves
excellent performance by pursuing a RISC-style architecture with streamlined
indexing and query processing.
The physical design is identical for all RDF-3X databases regardless of their
workloads, and completely eliminates the need for index tuning by exhaustive
indexes for all permutations of subject-property-object triples and their
binary and unary projections. These indexes are highly compressed, and the
query processor can aggressively leverage fast merge joins with excellent
performance of processor caches. The query optimizer is able to choose optimal
join orders even for complex queries, with a cost model that includes
statistical synopses for entire join paths. Although RDF-3X is optimized for
queries, it also provides good support for efficient online updates by means of
a staging architecture: direct updates to the main database indexes are
deferred, and instead applied to compact differential indexes which are later
merged into the main indexes in a batched manner.
Experimental studies with several large-scale datasets with more than 50
million RDF triples and benchmark queries that include pattern matching,
many-way star-joins, and long path-joins demonstrate that RDF-3X can outperform
the previously best alternatives by one or two orders of magnitude.
%B Research Report
Coupling Knowledge Bases and Web Services for Active Knowledge
N. Preda, F. Suchanek, G. Kasneci, T. Neumann and G. Weikum
Technical Report, 2009
Abstract
We present ANGIE, a system that can answer user queries by combining knowledge
from a local database with knowledge retrieved from Web services. If a user
poses a query that cannot be answered by the local database alone, ANGIE calls
the appropriate Web services to retrieve the missing information. In ANGIE,
Web services act as dynamic components of the knowledge base that deliver
knowledge on demand. To the user, this is fully transparent; the dynamically
acquired knowledge is presented as if it were stored in the local knowledge
base.
We have developed an RDF-based model for the declarative definition of
functions embedded in the local knowledge base. The results of available Web
services are cast into RDF subgraphs. Parameter bindings are automatically
constructed by ANGIE, services are invoked, and the semi-structured
information returned by the services is dynamically integrated into the
knowledge base.
We have developed a query rewriting algorithm that determines one or more
function compositions that need to be executed in order to evaluate a
SPARQL-style user query. The key idea is that the local knowledge base can be
used to guide the selection of values used as input parameters of function
calls. This is in contrast to the conventional approaches in the literature,
which would exhaustively materialize all values that can be used as binding
values for the input parameters.
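The on-demand integration described in the abstract can be illustrated with a toy sketch. The entities, the service function, and the dictionary-based knowledge base below are hypothetical stand-ins, not the ANGIE implementation or its RDF model:

```python
# Toy "active knowledge" sketch: a query first consults the local knowledge
# base; on a miss, a registered service function is invoked and its result is
# integrated, so later queries see it as if it were stored locally.

local_kb = {("Mozart", "bornIn"): "Salzburg"}

def fake_music_service(entity):
    # Stand-in for a real Web service returning semi-structured data.
    return {("Mozart", "composed"): "Don Giovanni"}.get((entity, "composed"))

# Declaratively registered functions: which predicate each service supplies.
services = {"composed": fake_music_service}

def query(subject, predicate):
    key = (subject, predicate)
    if key in local_kb:                  # answered from the local KB
        return local_kb[key]
    fn = services.get(predicate)         # otherwise find a capable service
    if fn is not None:
        value = fn(subject)
        if value is not None:
            local_kb[key] = value        # integrate the result on demand
            return value
    return None

print(query("Mozart", "bornIn"))    # from the local KB
print(query("Mozart", "composed"))  # fetched via the service, then cached
```

In the real system the "registered function" is described declaratively in RDF and the service output is cast into RDF subgraphs; the sketch only shows the transparency property the abstract emphasizes.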
Export
BibTeX
@techreport{PredaSuchanekKasneciNeumannWeikum2009,
TITLE = {Coupling Knowledge Bases and Web Services for Active Knowledge},
AUTHOR = {Preda, Nicoleta and Suchanek, Fabian and Kasneci, Gjergji and Neumann, Thomas and Weikum, Gerhard},
LANGUAGE = {eng},
NUMBER = {MPI-I-2009-5-004},
LOCALID = {Local-ID: C1256DBF005F876D-BF2AB4A39F925BC8C125759800444744-PredaSuchanekKasneciNeumannWeikum2009},
YEAR = {2009},
DATE = {2009},
ABSTRACT = {We present ANGIE, a system that can answer user queries by combining knowledge from a local database with knowledge retrieved from Web services. If a user poses a query that cannot be answered by the local database alone, ANGIE calls the appropriate Web services to retrieve the missing information. In ANGIE, Web services act as dynamic components of the knowledge base that deliver knowledge on demand. To the user, this is fully transparent; the dynamically acquired knowledge is presented as if it were stored in the local knowledge base. We have developed an RDF-based model for the declarative definition of functions embedded in the local knowledge base. The results of available Web services are cast into RDF subgraphs. Parameter bindings are automatically constructed by ANGIE, services are invoked, and the semi-structured information returned by the services is dynamically integrated into the knowledge base. We have developed a query rewriting algorithm that determines one or more function compositions that need to be executed in order to evaluate a SPARQL-style user query. The key idea is that the local knowledge base can be used to guide the selection of values used as input parameters of function calls. This is in contrast to the conventional approaches in the literature, which would exhaustively materialize all values that can be used as binding values for the input parameters.},
TYPE = {Research Reports},
}
Endnote
%0 Report
%A Preda, Nicoleta
%A Suchanek, Fabian
%A Kasneci, Gjergji
%A Neumann, Thomas
%A Weikum, Gerhard
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
%T Coupling Knowledge Bases and Web Services for Active Knowledge :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-1901-1
%F EDOC: 520423
%F OTHER: Local-ID: C1256DBF005F876D-BF2AB4A39F925BC8C125759800444744-PredaSuchanekKasneciNeumannWeikum2009
%D 2009
%X We present ANGIE, a system that can answer user queries by combining
knowledge from a local database with knowledge retrieved from Web services.
If a user poses a query that cannot be answered by the local database alone,
ANGIE calls the appropriate Web services to retrieve the missing information.
In ANGIE, Web services act as dynamic components of the knowledge base that
deliver knowledge on demand. To the user, this is fully transparent; the
dynamically acquired knowledge is presented as if it were stored in the local
knowledge base.
We have developed an RDF-based model for the declarative definition of
functions embedded in the local knowledge base. The results of available Web
services are cast into RDF subgraphs. Parameter bindings are automatically
constructed by ANGIE, services are invoked, and the semi-structured
information returned by the services is dynamically integrated into the
knowledge base.
We have developed a query rewriting algorithm that determines one or more
function compositions that need to be executed in order to evaluate a
SPARQL-style user query. The key idea is that the local knowledge base can be
used to guide the selection of values used as input parameters of function
calls. This is in contrast to the conventional approaches in the literature,
which would exhaustively materialize all values that can be used as binding
values for the input parameters.
%B Research Reports
Generating Concise and Readable Summaries of XML documents
M. Ramanath, K. Sarath Kumar and G. Ifrim
Technical Report, 2009
Export
BibTeX
@techreport{Ramanath2008a,
TITLE = {Generating Concise and Readable Summaries of {XML} documents},
AUTHOR = {Ramanath, Maya and Sarath Kumar, Kondreddi and Ifrim, Georgiana},
LANGUAGE = {eng},
NUMBER = {MPI-I-2009-5-002},
LOCALID = {Local-ID: C1256DBF005F876D-EA355A84178BB514C12575BA002A90E0-Ramanath2008},
YEAR = {2009},
DATE = {2009},
TYPE = {Research Reports},
}
Endnote
%0 Report
%A Ramanath, Maya
%A Sarath Kumar, Kondreddi
%A Ifrim, Georgiana
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
%T Generating Concise and Readable Summaries of XML documents :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-1915-6
%F EDOC: 520419
%F OTHER: Local-ID: C1256DBF005F876D-EA355A84178BB514C12575BA002A90E0-Ramanath2008
%D 2009
%B Research Reports
Constraint Solving for Interpolation
A. Rybalchenko and V. Sofronie-Stokkermans
Technical Report, 2009
Export
BibTeX
@techreport{Rybalchenko-Sofronie-Stokkermans-2009,
TITLE = {Constraint Solving for Interpolation},
AUTHOR = {Rybalchenko, Andrey and Sofronie-Stokkermans, Viorica},
LANGUAGE = {eng},
LOCALID = {Local-ID: C125716C0050FB51-7BE33255DCBCF2AAC1257650004B7C65-Rybalchenko-Sofronie-Stokkermans-2009},
YEAR = {2009},
DATE = {2009},
}
Endnote
%0 Report
%A Rybalchenko, Andrey
%A Sofronie-Stokkermans, Viorica
%+ Automation of Logic, MPI for Informatics, Max Planck Society
%T Constraint Solving for Interpolation :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-1A4A-6
%F EDOC: 521091
%F OTHER: Local-ID: C125716C0050FB51-7BE33255DCBCF2AAC1257650004B7C65-Rybalchenko-Sofronie-Stokkermans-2009
%D 2009
A Higher-order Structure Tensor
T. Schultz, J. Weickert and H.-P. Seidel
Technical Report, 2009
Abstract
Structure tensors are a common tool for orientation estimation in
image processing and computer vision. We present a generalization of
the traditional second-order model to a higher-order structure
tensor (HOST), which is able to model more than one significant
orientation, as found in corners, junctions, and multi-channel images. We
provide a theoretical analysis and a number of mathematical tools
that facilitate practical use of the HOST, visualize it using a
novel glyph for higher-order tensors, and demonstrate how it can be
applied in an improved integrated edge, corner, and junction detection.
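A minimal numerical sketch of the generalization, assuming the common formulation of the structure tensor as averaged outer products of image gradients (the report's exact construction, weighting, and glyphs are not reproduced here):

```python
import numpy as np

# Sketch: the classical second-order structure tensor averages outer products
# of image gradients; a higher-order analogue averages higher tensor powers
# of the gradient, which can retain more than one dominant orientation.
rng = np.random.default_rng(0)
img = rng.standard_normal((32, 32))  # stand-in image

gy, gx = np.gradient(img)            # gradients along rows and columns
g = np.stack([gx, gy], axis=-1)      # per-pixel gradient vectors

n = g[..., 0].size  # number of pixels

# Second-order structure tensor: average of g g^T over a neighborhood
# (here the whole image, for brevity).
T2 = np.einsum("hwi,hwj->ij", g, g) / n

# Fourth-order analogue: average of the 4-fold outer power of g.
T4 = np.einsum("hwi,hwj,hwk,hwl->ijkl", g, g, g, g) / n

# T2 is symmetric positive semi-definite by construction.
assert np.allclose(T2, T2.T)
assert np.all(np.linalg.eigvalsh(T2) >= -1e-12)
print(T2.shape, T4.shape)
```

The second-order tensor can encode at most one dominant orientation per neighborhood (its leading eigenvector); the fourth-order average is what allows corner- and junction-like multi-orientation structure to survive, per the abstract.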
Export
BibTeX
@techreport{SchultzlWeickertSeidel2007,
TITLE = {A Higher-order Structure Tensor},
AUTHOR = {Schultz, Thomas and Weickert, Joachim and Seidel, Hans-Peter},
LANGUAGE = {eng},
NUMBER = {MPI-I-2007-4-005},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2009},
DATE = {2009},
ABSTRACT = {Structure tensors are a common tool for orientation estimation in image processing and computer vision. We present a generalization of the traditional second-order model to a higher-order structure tensor (HOST), which is able to model more than one significant orientation, as found in corners, junctions, and multi-channel images. We provide a theoretical analysis and a number of mathematical tools that facilitate practical use of the HOST, visualize it using a novel glyph for higher-order tensors, and demonstrate how it can be applied in an improved integrated edge, corner, and junction detection.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Schultz, Thomas
%A Weickert, Joachim
%A Seidel, Hans-Peter
%+ Computer Graphics, MPI for Informatics, Max Planck Society
External Organizations
Computer Graphics, MPI for Informatics, Max Planck Society
%T A Higher-order Structure Tensor :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0027-13BC-7
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2009
%X Structure tensors are a common tool for orientation estimation in
image processing and computer vision. We present a generalization of
the traditional second-order model to a higher-order structure
tensor (HOST), which is able to model more than one significant
orientation, as found in corners, junctions, and multi-channel images. We
provide a theoretical analysis and a number of mathematical tools
that facilitate practical use of the HOST, visualize it using a
novel glyph for higher-order tensors, and demonstrate how it can be
applied in an improved integrated edge, corner, and junction detection.
%B Research Report
Optical reconstruction of detailed animatable human body models
C. Stoll
Technical Report, 2009
Export
BibTeX
@techreport{Stoll2009,
TITLE = {Optical reconstruction of detailed animatable human body models},
AUTHOR = {Stoll, Carsten},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2009-4-006},
NUMBER = {MPI-I-2009-4-006},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2009},
DATE = {2009},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Stoll, Carsten
%+ Computer Graphics, MPI for Informatics, Max Planck Society
%T Optical reconstruction of detailed animatable human body models :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-665F-0
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2009-4-006
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2009
%P 37 p.
%B Research Report / Max-Planck-Institut für Informatik
Contextual Rewriting
C. Weidenbach and P. Wischnewski
Technical Report, 2009
Export
BibTeX
@techreport{WischnewskiWeidenbach2009,
TITLE = {Contextual Rewriting},
AUTHOR = {Weidenbach, Christoph and Wischnewski, Patrick},
LANGUAGE = {eng},
NUMBER = {MPI-I-2009-RG1-002},
LOCALID = {Local-ID: C125716C0050FB51-DD89BAB0441DE797C125757F0034B8CB-WeidenbachWischnewskiReport2009},
YEAR = {2009},
DATE = {2009},
}
Endnote
%0 Report
%A Weidenbach, Christoph
%A Wischnewski, Patrick
%+ Automation of Logic, MPI for Informatics, Max Planck Society
Automation of Logic, MPI for Informatics, Max Planck Society
%T Contextual Rewriting :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-1A4C-2
%F EDOC: 521106
%F OTHER: Local-ID: C125716C0050FB51-DD89BAB0441DE797C125757F0034B8CB-WeidenbachWischnewskiReport2009
%D 2009
2008
Characterizing the performance of Flash memory storage devices and its impact on algorithm design
D. Ajwani, I. Malinger, U. Meyer and S. Toledo
Technical Report, 2008
Abstract
Initially used in digital audio players, digital cameras, mobile
phones, and USB memory sticks, flash memory may become the dominant
form of end-user storage in mobile computing, either completely
replacing the magnetic hard disks or being an additional secondary
storage. We study the design of algorithms and data structures that
can exploit the flash memory devices better. For this, we characterize
the performance of NAND flash based storage devices, including many
solid state disks. We show that these devices have better random read
performance than hard disks, but much worse random write performance.
We also analyze the effect of misalignments, aging and past I/O
patterns etc. on the performance obtained on these devices. We show
that despite the similarities between flash memory and RAM (fast
random reads) and between flash disk and hard disk (both are block
based devices), the algorithms designed in the RAM model or the
external memory model do not realize the full potential of the flash
memory devices. We later give some broad guidelines for designing
algorithms which can exploit the comparative advantages of both a
flash memory device and a hard disk, when used together.
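A crude way to probe the read/write asymmetry the report measures is a micro-benchmark along these lines. The file name and block size are arbitrary choices, and results on a cached or RAM-backed filesystem will not reflect raw flash behavior, so this is only an illustration of the methodology, not the authors' benchmark:

```python
import os
import random
import time

BLOCK = 4096       # one 4 KiB block per I/O, a typical flash page size
N_BLOCKS = 256     # 1 MiB probe file (hypothetical sizes)
PATH = "flash_probe.bin"

with open(PATH, "wb") as f:          # pre-allocate the probe file
    f.write(b"\0" * BLOCK * N_BLOCKS)

def random_io(write):
    """Time N_BLOCKS block-sized I/Os at randomly ordered offsets."""
    offsets = random.sample(range(N_BLOCKS), N_BLOCKS)
    start = time.perf_counter()
    with open(PATH, "r+b" if write else "rb") as f:
        for i in offsets:
            f.seek(i * BLOCK)
            if write:
                f.write(os.urandom(BLOCK))
            else:
                f.read(BLOCK)
        if write:
            f.flush()
            os.fsync(f.fileno())     # force the writes to the device
    return time.perf_counter() - start

read_s = random_io(write=False)
write_s = random_io(write=True)
print(f"random reads:  {read_s:.4f} s")
print(f"random writes: {write_s:.4f} s")
os.remove(PATH)
```

On NAND-flash devices the random-write side is typically far slower because of erase-block management, which is the asymmetry that motivates the report's algorithm-design guidelines.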
Export
BibTeX
@techreport{AjwaniMalingerMeyerToledo2008,
TITLE = {Characterizing the performance of Flash memory storage devices and its impact on algorithm design},
AUTHOR = {Ajwani, Deepak and Malinger, Itay and Meyer, Ulrich and Toledo, Sivan},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2008-1-001},
NUMBER = {MPI-I-2008-1-001},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2008},
DATE = {2008},
ABSTRACT = {Initially used in digital audio players, digital cameras, mobile phones, and USB memory sticks, flash memory may become the dominant form of end-user storage in mobile computing, either completely replacing the magnetic hard disks or being an additional secondary storage. We study the design of algorithms and data structures that can exploit the flash memory devices better. For this, we characterize the performance of NAND flash based storage devices, including many solid state disks. We show that these devices have better random read performance than hard disks, but much worse random write performance. We also analyze the effect of misalignments, aging and past I/O patterns etc. on the performance obtained on these devices. We show that despite the similarities between flash memory and RAM (fast random reads) and between flash disk and hard disk (both are block based devices), the algorithms designed in the RAM model or the external memory model do not realize the full potential of the flash memory devices. We later give some broad guidelines for designing algorithms which can exploit the comparative advantages of both a flash memory device and a hard disk, when used together.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Ajwani, Deepak
%A Malinger, Itay
%A Meyer, Ulrich
%A Toledo, Sivan
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
External Organizations
Algorithms and Complexity, MPI for Informatics, Max Planck Society
External Organizations
%T Characterizing the performance of Flash memory storage devices and its impact on algorithm design :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-66C7-6
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2008-1-001
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2008
%P 36 p.
%X Initially used in digital audio players, digital cameras, mobile
phones, and USB memory sticks, flash memory may become the dominant
form of end-user storage in mobile computing, either completely
replacing the magnetic hard disks or being an additional secondary
storage. We study the design of algorithms and data structures that
can exploit the flash memory devices better. For this, we characterize
the performance of NAND flash based storage devices, including many
solid state disks. We show that these devices have better random read
performance than hard disks, but much worse random write performance.
We also analyze the effect of misalignments, aging and past I/O
patterns etc. on the performance obtained on these devices. We show
that despite the similarities between flash memory and RAM (fast
random reads) and between flash disk and hard disk (both are block
based devices), the algorithms designed in the RAM model or the
external memory model do not realize the full potential of the flash
memory devices. We later give some broad guidelines for designing
algorithms which can exploit the comparative advantages of both a
flash memory device and a hard disk, when used together.
%B Research Report
Prototype Implementation of the Algebraic Kernel
E. Berberich, M. Hemmer, M. Karavelas, S. Pion, M. Teillaud and E. Tsigaridas
Technical Report, 2008
Abstract
In this report we describe the current progress with respect to prototype
implementations of algebraic kernels within the ACS project. More specifically,
we report on: (1) the Cgal package Algebraic_kernel_for_circles_2_2 aimed at
providing the necessary algebraic functionality required for treating circular
arcs; (2) an interface between Cgal and SYNAPS for accessing the algebraic
functionality in the SYNAPS library; (3) the NumeriX library (part of the
EXACUS project) which is a prototype implementation of a set of algebraic tools
on univariate polynomials, needed to build an algebraic kernel and (4) a rough
CGAL-like prototype implementation of a set of algebraic tools on univariate
polynomials.
Export
BibTeX
@techreport{ACS-TR-121202-01,
TITLE = {Prototype Implementation of the Algebraic Kernel},
AUTHOR = {Berberich, Eric and Hemmer, Michael and Karavelas, Menelaos and Pion, Sylvain and Teillaud, Monique and Tsigaridas, Elias},
LANGUAGE = {eng},
NUMBER = {ACS-TR-121202-01},
INSTITUTION = {University of Groningen},
ADDRESS = {Groningen},
YEAR = {2008},
DATE = {2008},
ABSTRACT = {In this report we describe the current progress with respect to prototype implementations of algebraic kernels within the ACS project. More specifically, we report on: (1) the Cgal package Algebraic_kernel_for_circles_2_2 aimed at providing the necessary algebraic functionality required for treating circular arcs; (2) an interface between Cgal and SYNAPS for accessing the algebraic functionality in the SYNAPS library; (3) the NumeriX library (part of the EXACUS project) which is a prototype implementation of a set of algebraic tools on univariate polynomials, needed to build an algebraic kernel and (4) a rough CGAL-like prototype implementation of a set of algebraic tools on univariate polynomials.},
}
Endnote
%0 Report
%A Berberich, Eric
%A Hemmer, Michael
%A Karavelas, Menelaos
%A Pion, Sylvain
%A Teillaud, Monique
%A Tsigaridas, Elias
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
External Organizations
Algorithms and Complexity, MPI for Informatics, Max Planck Society
External Organizations
External Organizations
%T Prototype Implementation of the Algebraic Kernel :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0019-E387-2
%Y University of Groningen
%C Groningen
%D 2008
%X In this report we describe the current progress with respect to prototype
implementations of algebraic kernels within the ACS project. More specifically,
we report on: (1) the Cgal package Algebraic_kernel_for_circles_2_2 aimed at
providing the necessary algebraic functionality required for treating circular
arcs; (2) an interface between Cgal and SYNAPS for accessing the algebraic
functionality in the SYNAPS library; (3) the NumeriX library (part of the
EXACUS project) which is a prototype implementation of a set of algebraic tools
on univariate polynomials, needed to build an algebraic kernel and (4) a rough
CGAL-like prototype implementation of a set of algebraic tools on univariate
polynomials.
%U http://www.researchgate.net/publication/254300442_Prototype_implementation_of_the_algebraic_kernel
Slippage Features
M. Bokeloh, A. Berner, M. Wand, H.-P. Seidel and A. Schilling
Technical Report, 2008
Export
BibTeX
@techreport{Bokeloh2008,
TITLE = {Slippage Features},
AUTHOR = {Bokeloh, Martin and Berner, Alexander and Wand, Michael and Seidel, Hans-Peter and Schilling, Andreas},
LANGUAGE = {eng},
ISSN = {0946-3852},
URL = {urn:nbn:de:bsz:21-opus-33880},
NUMBER = {WSI-2008-03},
INSTITUTION = {Wilhelm-Schickard-Institut / Universit{\"a}t T{\"u}bingen},
ADDRESS = {T{\"u}bingen},
YEAR = {2008},
DATE = {2008},
TYPE = {WSI},
VOLUME = {2008-03},
}
Endnote
%0 Report
%A Bokeloh, Martin
%A Berner, Alexander
%A Wand, Michael
%A Seidel, Hans-Peter
%A Schilling, Andreas
%+ External Organizations
External Organizations
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
External Organizations
%T Slippage Features :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0023-D3FC-F
%U urn:nbn:de:bsz:21-opus-33880
%Y Wilhelm-Schickard-Institut / Universität Tübingen
%C Tübingen
%D 2008
%P 17 p.
%B WSI
%N 2008-03
%@ false
%U http://nbn-resolving.de/urn:nbn:de:bsz:21-opus-33880
Data Modifications and Versioning in Trio
A. Das Sarma, M. Theobald and J. Widom
Technical Report, 2008
Export
BibTeX
@techreport{ilpubs-849,
TITLE = {Data Modifications and Versioning in Trio},
AUTHOR = {Das Sarma, Anish and Theobald, Martin and Widom, Jennifer},
LANGUAGE = {eng},
URL = {http://ilpubs.stanford.edu:8090/849/},
NUMBER = {ILPUBS-849},
INSTITUTION = {Stanford University InfoLab},
ADDRESS = {Stanford, CA},
YEAR = {2008},
TYPE = {Technical Report},
}
Endnote
%0 Report
%A Das Sarma, Anish
%A Theobald, Martin
%A Widom, Jennifer
%+ External Organizations
Databases and Information Systems, MPI for Informatics, Max Planck Society
External Organizations
%T Data Modifications and Versioning in Trio :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0027-AED6-D
%U http://ilpubs.stanford.edu:8090/849/
%Y Stanford University InfoLab
%C Stanford, CA
%D 2008
%B Technical Report
Integrating Yago into the suggested upper merged ontology
G. de Melo, F. Suchanek and A. Pease
Technical Report, 2008
Abstract
Ontologies are becoming more and more popular as background knowledge
for intelligent applications.
Up to now, there has been a schism between manually assembled, highly
axiomatic ontologies
and large, automatically constructed knowledge bases.
This report discusses how the two worlds can be brought together by
combining the high-level axiomatizations from
the Suggested Upper Merged Ontology (SUMO) with the extensive world
knowledge of the YAGO ontology.
On the theoretical side, it analyses the differences between the
knowledge representation in YAGO and SUMO.
On the practical side, this report explains how the two resources can
be merged. This yields a new
large-scale formal ontology, which provides information about millions
of entities such as people, cities,
organizations, and companies. This report is the detailed version of
our paper at ICTAI 2008.
Export
BibTeX
@techreport{deMeloSuchanekPease2008,
TITLE = {Integrating Yago into the suggested upper merged ontology},
AUTHOR = {de Melo, Gerard and Suchanek, Fabian and Pease, Adam},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2008-5-003},
NUMBER = {MPI-I-2008-5-003},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2008},
DATE = {2008},
ABSTRACT = {Ontologies are becoming more and more popular as background knowledge for intelligent applications. Up to now, there has been a schism between manually assembled, highly axiomatic ontologies and large, automatically constructed knowledge bases. This report discusses how the two worlds can be brought together by combining the high-level axiomatizations from the Suggested Upper Merged Ontology (SUMO) with the extensive world knowledge of the YAGO ontology. On the theoretical side, it analyses the differences between the knowledge representation in YAGO and SUMO. On the practical side, this report explains how the two resources can be merged. This yields a new large-scale formal ontology, which provides information about millions of entities such as people, cities, organizations, and companies. This report is the detailed version of our paper at ICTAI 2008.},
TYPE = {Research Report / Max-Planck-Institut f{\"u}r Informatik},
}
Endnote
%0 Report
%A de Melo, Gerard
%A Suchanek, Fabian
%A Pease, Adam
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
External Organizations
%T Integrating Yago into the suggested upper merged ontology :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-66AB-6
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2008-5-003
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2008
%P 31 p.
%X Ontologies are becoming more and more popular as background knowledge
for intelligent applications.
Up to now, there has been a schism between manually assembled, highly
axiomatic ontologies
and large, automatically constructed knowledge bases.
This report discusses how the two worlds can be brought together by
combining the high-level axiomatizations from
the Suggested Upper Merged Ontology (SUMO) with the extensive world
knowledge of the YAGO ontology.
On the theoretical side, it analyses the differences between the
knowledge representation in YAGO and SUMO.
On the practical side, this report explains how the two resources can
be merged. This yields a new
large-scale formal ontology, which provides information about millions
of entities such as people, cities,
organizations, and companies. This report is the detailed version of
our paper at ICTAI 2008.
%B Research Report / Max-Planck-Institut für Informatik
Labelled splitting
A. L. Fietzke and C. Weidenbach
Technical Report, 2008
Abstract
We define a superposition calculus with explicit splitting and
an explicit, new backtracking rule on the basis of labelled clauses.
For the first time, we show a superposition calculus with an explicit
backtracking rule to be sound and complete. The new backtracking rule advances
backtracking with branch condensing, known from SPASS.
An experimental evaluation of an implementation of the new rule
shows that it considerably improves the
previous SPASS splitting implementation.
Finally, we discuss the relationship between labelled first-order
splitting and DPLL style splitting with intelligent backtracking
and clause learning.
Export
BibTeX
@techreport{FietzkeWeidenbach2008,
TITLE = {Labelled splitting},
AUTHOR = {Fietzke, Arnaud Luc and Weidenbach, Christoph},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2008-RG1-001},
NUMBER = {MPI-I-2008-RG1-001},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2008},
DATE = {2008},
ABSTRACT = {We define a superposition calculus with explicit splitting and an explicit, new backtracking rule on the basis of labelled clauses. For the first time, we show a superposition calculus with an explicit backtracking rule to be sound and complete. The new backtracking rule advances backtracking with branch condensing, known from SPASS. An experimental evaluation of an implementation of the new rule shows that it considerably improves the previous SPASS splitting implementation. Finally, we discuss the relationship between labelled first-order splitting and DPLL style splitting with intelligent backtracking and clause learning.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Fietzke, Arnaud Luc
%A Weidenbach, Christoph
%+ Automation of Logic, MPI for Informatics, Max Planck Society
Automation of Logic, MPI for Informatics, Max Planck Society
%T Labelled splitting :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6674-D
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2008-RG1-001
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2008
%P 45 p.
%X We define a superposition calculus with explicit splitting and
an explicit, new backtracking rule on the basis of labelled clauses.
For the first time, we show a superposition calculus with an explicit
backtracking rule to be sound and complete. The new backtracking rule advances
backtracking with branch condensing, known from SPASS.
An experimental evaluation of an implementation of the new rule
shows that it considerably improves the
previous SPASS splitting implementation.
Finally, we discuss the relationship between labelled first-order
splitting and DPLL style splitting with intelligent backtracking
and clause learning.
%B Research Report
STAR: Steiner tree approximation in relationship-graphs
G. Kasneci, M. Ramanath, M. Sozio, F. Suchanek and G. Weikum
Technical Report, 2008
Abstract
Large-scale graphs and networks are abundant in modern information systems:
entity-relationship graphs over relational data or Web-extracted entities,
biological networks, social online communities, knowledge bases, and
many more. Often such data comes with expressive node and edge labels that
allow an interpretation as a semantic graph, and edge weights that reflect
the strengths of semantic relations between entities. Finding close
relationships between a given set of two, three, or more entities is an
important building block for many search, ranking, and analysis tasks.
From an algorithmic point of view, this translates into computing the best
Steiner trees between the given nodes, a classical NP-hard problem. In
this paper, we present a new approximation algorithm, coined STAR, for
relationship queries over large graphs that do not fit into memory. We
prove that for n query entities, STAR yields an O(log(n))-approximation of
the optimal Steiner tree, and show that in practical cases the results
returned by STAR are qualitatively better than the results returned by a
classical 2-approximation algorithm. We then describe an extension to our
algorithm to return the top-k Steiner trees. Finally, we evaluate our
algorithm over both main-memory as well as completely disk-resident graphs
containing millions of nodes. Our experiments show that STAR outperforms
the best state-of-the-art approaches and returns qualitatively better results.
Export
BibTeX
@techreport{KasneciRamanathSozioSuchanekWeikum2008,
TITLE = {{STAR}: Steiner tree approximation in relationship-graphs},
AUTHOR = {Kasneci, Gjergji and Ramanath, Maya and Sozio, Mauro and Suchanek, Fabian and Weikum, Gerhard},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2008-5-001},
NUMBER = {MPI-I-2008-5-001},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2008},
DATE = {2008},
ABSTRACT = {Large-scale graphs and networks are abundant in modern information systems: entity-relationship graphs over relational data or Web-extracted entities, biological networks, social online communities, knowledge bases, and many more. Often such data comes with expressive node and edge labels that allow an interpretation as a semantic graph, and edge weights that reflect the strengths of semantic relations between entities. Finding close relationships between a given set of two, three, or more entities is an important building block for many search, ranking, and analysis tasks. From an algorithmic point of view, this translates into computing the best Steiner trees between the given nodes, a classical NP-hard problem. In this paper, we present a new approximation algorithm, coined STAR, for relationship queries over large graphs that do not fit into memory. We prove that for n query entities, STAR yields an O(log(n))-approximation of the optimal Steiner tree, and show that in practical cases the results returned by STAR are qualitatively better than the results returned by a classical 2-approximation algorithm. We then describe an extension to our algorithm to return the top-k Steiner trees. Finally, we evaluate our algorithm over both main-memory as well as completely disk-resident graphs containing millions of nodes. Our experiments show that STAR outperforms the best state-of-the-art approaches and returns qualitatively better results.},
TYPE = {Research Report / Max-Planck-Institut f{\"u}r Informatik},
}
Endnote
%0 Report
%A Kasneci, Gjergji
%A Ramanath, Maya
%A Sozio, Mauro
%A Suchanek, Fabian
%A Weikum, Gerhard
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
%T STAR: Steiner tree approximation in relationship-graphs :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-66B3-1
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2008-5-001
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2008
%P 37 p.
%X Large-scale graphs and networks are abundant in modern information systems:
entity-relationship graphs over relational data or Web-extracted entities,
biological networks, social online communities, knowledge bases, and
many more. Often such data comes with expressive node and edge labels that
allow an interpretation as a semantic graph, and edge weights that reflect
the strengths of semantic relations between entities. Finding close
relationships between a given set of two, three, or more entities is an
important building block for many search, ranking, and analysis tasks.
From an algorithmic point of view, this translates into computing the best
Steiner trees between the given nodes, a classical NP-hard problem. In
this paper, we present a new approximation algorithm, coined STAR, for
relationship queries over large graphs that do not fit into memory. We
prove that for n query entities, STAR yields an O(log(n))-approximation of
the optimal Steiner tree, and show that in practical cases the results
returned by STAR are qualitatively better than the results returned by a
classical 2-approximation algorithm. We then describe an extension to our
algorithm to return the top-k Steiner trees. Finally, we evaluate our
algorithm over both main-memory as well as completely disk-resident graphs
containing millions of nodes. Our experiments show that STAR outperforms
the best state-of-the-art approaches and returns qualitatively better results.
%B Research Report / Max-Planck-Institut für Informatik
Single phase construction of optimal DAG-structured QEPs
T. Neumann and G. Moerkotte
Technical Report, 2008
Abstract
Traditionally, database management systems use tree-structured query
evaluation plans. They are easy to implement but not expressive enough
for some optimizations like eliminating common algebraic subexpressions
or magic sets. These require directed acyclic graphs (DAGs), i.e.
shared subplans.
Existing approaches consider DAGs merely for special cases
and not in full generality.
We introduce a novel framework to reason about sharing of subplans
and, thus, DAG-structured query evaluation plans.
Then, we present the first plan generator capable
of generating optimal DAG-structured query evaluation plans.
The experimental results show that with no or only a modest
increase of plan generation time, a major reduction
of query execution time can be
achieved for common queries.
Export
BibTeX
@techreport{NeumannMoerkotte2008,
TITLE = {Single phase construction of optimal {DAG}-structured {QEPs}},
AUTHOR = {Neumann, Thomas and Moerkotte, Guido},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2008-5-002},
NUMBER = {MPI-I-2008-5-002},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2008},
DATE = {2008},
ABSTRACT = {Traditionally, database management systems use tree-structured query evaluation plans. They are easy to implement but not expressive enough for some optimizations like eliminating common algebraic subexpressions or magic sets. These require directed acyclic graphs (DAGs), i.e. shared subplans. Existing approaches consider DAGs merely for special cases and not in full generality. We introduce a novel framework to reason about sharing of subplans and, thus, DAG-structured query evaluation plans. Then, we present the first plan generator capable of generating optimal DAG-structured query evaluation plans. The experimental results show that with no or only a modest increase of plan generation time, a major reduction of query execution time can be achieved for common queries.},
TYPE = {Research Report / Max-Planck-Institut f{\"u}r Informatik},
}
Endnote
%0 Report
%A Neumann, Thomas
%A Moerkotte, Guido
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society
External Organizations
%T Single phase construction of optimal DAG-structured QEPs :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-66B0-7
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2008-5-002
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2008
%P 73 p.
%X Traditionally, database management systems use tree-structured query
evaluation plans. They are easy to implement but not expressive enough
for some optimizations like eliminating common algebraic subexpressions
or magic sets. These require directed acyclic graphs (DAGs), i.e.
shared subplans.
Existing approaches consider DAGs merely for special cases
and not in full generality.
We introduce a novel framework to reason about sharing of subplans
and, thus, DAG-structured query evaluation plans.
Then, we present the first plan generator capable
of generating optimal DAG-structured query evaluation plans.
The experimental results show that with no or only a modest
increase of plan generation time, a major reduction
of query execution time can be
achieved for common queries.
%B Research Report / Max-Planck-Institut für Informatik
Crease surfaces: from theory to extraction and application to diffusion tensor MRI
T. Schultz, H. Theisel and H.-P. Seidel
Technical Report, 2008
Abstract
Crease surfaces are two-dimensional manifolds along which a scalar
field assumes a local maximum (ridge) or a local minimum (valley) in
a constrained space. Unlike isosurfaces, they are able to capture
extremal structures in the data. Creases have a long tradition in
image processing and computer vision, and have recently become a
popular tool for visualization. When extracting crease surfaces,
degeneracies of the Hessian (i.e., lines along which two eigenvalues
are equal) have so far been ignored. We show that these loci,
however, have two important consequences for the topology of crease
surfaces: First, creases are bounded not only by a side constraint
on eigenvalue sign, but also by Hessian degeneracies. Second, crease
surfaces are not in general orientable. We describe an efficient
algorithm for the extraction of crease surfaces which takes these
insights into account and demonstrate that it produces more accurate
results than previous approaches. Finally, we show that DT-MRI
streamsurfaces, which were previously used for the analysis of
planar regions in diffusion tensor MRI data, are mathematically
ill-defined. As an example application of our method, creases in a
measure of planarity are presented as a viable substitute.
Export
BibTeX
@techreport{SchultzTheiselSeidel2008,
TITLE = {Crease surfaces: from theory to extraction and application to diffusion tensor {MRI}},
AUTHOR = {Schultz, Thomas and Theisel, Holger and Seidel, Hans-Peter},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2008-4-003},
NUMBER = {MPI-I-2008-4-003},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2008},
DATE = {2008},
ABSTRACT = {Crease surfaces are two-dimensional manifolds along which a scalar field assumes a local maximum (ridge) or a local minimum (valley) in a constrained space. Unlike isosurfaces, they are able to capture extremal structures in the data. Creases have a long tradition in image processing and computer vision, and have recently become a popular tool for visualization. When extracting crease surfaces, degeneracies of the Hessian (i.e., lines along which two eigenvalues are equal) have so far been ignored. We show that these loci, however, have two important consequences for the topology of crease surfaces: First, creases are bounded not only by a side constraint on eigenvalue sign, but also by Hessian degeneracies. Second, crease surfaces are not in general orientable. We describe an efficient algorithm for the extraction of crease surfaces which takes these insights into account and demonstrate that it produces more accurate results than previous approaches. Finally, we show that DT-MRI streamsurfaces, which were previously used for the analysis of planar regions in diffusion tensor MRI data, are mathematically ill-defined. As an example application of our method, creases in a measure of planarity are presented as a viable substitute.},
TYPE = {Research Report / Max-Planck-Institut f{\"u}r Informatik},
}
Endnote
%0 Report
%A Schultz, Thomas
%A Theisel, Holger
%A Seidel, Hans-Peter
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T Crease surfaces: from theory to extraction and application to diffusion tensor MRI :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-66B6-C
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2008-4-003
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2008
%P 33 p.
%X Crease surfaces are two-dimensional manifolds along which a scalar
field assumes a local maximum (ridge) or a local minimum (valley) in
a constrained space. Unlike isosurfaces, they are able to capture
extremal structures in the data. Creases have a long tradition in
image processing and computer vision, and have recently become a
popular tool for visualization. When extracting crease surfaces,
degeneracies of the Hessian (i.e., lines along which two eigenvalues
are equal) have so far been ignored. We show that these loci,
however, have two important consequences for the topology of crease
surfaces: First, creases are bounded not only by a side constraint
on eigenvalue sign, but also by Hessian degeneracies. Second, crease
surfaces are not in general orientable. We describe an efficient
algorithm for the extraction of crease surfaces which takes these
insights into account and demonstrate that it produces more accurate
results than previous approaches. Finally, we show that DT-MRI
streamsurfaces, which were previously used for the analysis of
planar regions in diffusion tensor MRI data, are mathematically
ill-defined. As an example application of our method, creases in a
measure of planarity are presented as a viable substitute.
%B Research Report / Max-Planck-Institut für Informatik
Efficient Hierarchical Reasoning about Functions over Numerical Domains
V. Sofronie-Stokkermans
Technical Report, 2008a
Abstract
We show that many properties studied in mathematical
analysis (monotonicity, boundedness, inverse, Lipschitz
properties, possibly combined with continuity or derivability)
are expressible by formulae in a class for which sound and
complete hierarchical proof methods for testing satisfiability of
sets of ground clauses exist.
The results are useful for automated reasoning in mathematical
analysis and for the verification of hybrid systems.
Export
BibTeX
@techreport{Sofronie-Stokkermans-atr45-2008,
TITLE = {Efficient Hierarchical Reasoning about Functions over Numerical Domains},
AUTHOR = {Sofronie-Stokkermans, Viorica},
LANGUAGE = {eng},
ISSN = {1860-9821},
NUMBER = {ATR45},
INSTITUTION = {SFB/TR 14 AVACS},
YEAR = {2008},
DATE = {2008},
ABSTRACT = {We show that many properties studied in mathematical analysis (monotonicity, boundedness, inverse, Lipschitz properties, possibly combined with continuity or derivability) are expressible by formulae in a class for which sound and complete hierarchical proof methods for testing satisfiability of sets of ground clauses exist. The results are useful for automated reasoning in mathematical analysis and for the verification of hybrid systems.},
TYPE = {AVACS Technical Report},
VOLUME = {45},
}
Endnote
%0 Report
%A Sofronie-Stokkermans, Viorica
%+ Automation of Logic, MPI for Informatics, Max Planck Society
%T Efficient Hierarchical Reasoning about Functions over Numerical Domains :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0027-A46C-B
%Y SFB/TR 14 AVACS
%D 2008
%P 17 p.
%X We show that many properties studied in mathematical
analysis (monotonicity, boundedness, inverse, Lipschitz
properties, possibly combined with continuity or derivability)
are expressible by formulae in a class for which sound and
complete hierarchical proof methods for testing satisfiability of
sets of ground clauses exist.
The results are useful for automated reasoning in mathematical
analysis and for the verification of hybrid systems.
%B AVACS Technical Report
%N 45
%@ false
%U http://www.avacs.org/fileadmin/Publikationen/Open/avacs_technical_report_045.pdf
Sheaves and Geometric Logic and Applications to Modular Verification of Complex Systems
V. Sofronie-Stokkermans
Technical Report, 2008b
Abstract
In this paper we show that states, transitions and behavior of
concurrent systems can often be modeled as sheaves over a
suitable topological space (where the topology expresses how the
interacting systems share the information). This allows us to use
results from categorical logic (and in particular geometric
logic) to describe which type of properties are transferred, if
valid locally in all component systems, also at a global level,
to the system obtained by interconnecting the individual systems.
The main area of application is to modular verification of
complex systems.
We illustrate the ideas by means of an example involving
a family of interacting controllers for trains on a rail track.
Export
BibTeX
@techreport{Sofronie-Stokkermans-atr46-2008,
TITLE = {Sheaves and Geometric Logic and Applications to Modular Verification of Complex Systems},
AUTHOR = {Sofronie-Stokkermans, Viorica},
LANGUAGE = {eng},
ISSN = {1860-9821},
NUMBER = {ATR46},
INSTITUTION = {SFB/TR 14 AVACS},
YEAR = {2008},
DATE = {2008},
ABSTRACT = {In this paper we show that states, transitions and behavior of concurrent systems can often be modeled as sheaves over a suitable topological space (where the topology expresses how the interacting systems share the information). This allows us to use results from categorical logic (and in particular geometric logic) to describe which type of properties are transferred, if valid locally in all component systems, also at a global level, to the system obtained by interconnecting the individual systems. The main area of application is to modular verification of complex systems. We illustrate the ideas by means of an example involving a family of interacting controllers for trains on a rail track.},
TYPE = {AVACS Technical Report},
VOLUME = {46},
}
Endnote
%0 Report
%A Sofronie-Stokkermans, Viorica
%+ Automation of Logic, MPI for Informatics, Max Planck Society
%T Sheaves and Geometric Logic and Applications to Modular Verification of Complex Systems :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0027-A579-5
%Y SFB/TR 14 AVACS
%D 2008
%X In this paper we show that states, transitions and behavior of
concurrent systems can often be modeled as sheaves over a
suitable topological space (where the topology expresses how the
interacting systems share the information). This allows us to use
results from categorical logic (and in particular geometric
logic) to describe which type of properties are transferred, if
valid locally in all component systems, also at a global level,
to the system obtained by interconnecting the individual systems.
The main area of application is to modular verification of
complex systems.
We illustrate the ideas by means of an example involving
a family of interacting controllers for trains on a rail track.
%B AVACS Technical Report
%N 46
%@ false
%U http://www.avacs.org/fileadmin/Publikationen/Open/avacs_technical_report_046.pdf
SOFIE: A Self-Organizing Framework for Information Extraction
F. Suchanek, M. Sozio and G. Weikum
Technical Report, 2008
Abstract
This paper presents SOFIE, a system for automated ontology extension.
SOFIE can parse natural language documents, extract ontological facts
from them and link the facts into an ontology. SOFIE uses logical
reasoning on the existing knowledge and on the new knowledge in order
to disambiguate words to their most probable meaning, to reason on the
meaning of text patterns and to take into account world knowledge
axioms. This allows SOFIE to check the plausibility of hypotheses and
to avoid inconsistencies with the ontology. The framework of SOFIE
unites the paradigms of pattern matching, word sense disambiguation
and ontological reasoning in one unified model. Our experiments show
that SOFIE delivers near-perfect output, even from unstructured
Internet documents.
Export
BibTeX
@techreport{SuchanekMauroWeikum2008,
TITLE = {{SOFIE}: A Self-Organizing Framework for Information Extraction},
AUTHOR = {Suchanek, Fabian and Sozio, Mauro and Weikum, Gerhard},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2008-5-004},
NUMBER = {MPI-I-2008-5-004},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2008},
DATE = {2008},
ABSTRACT = {This paper presents SOFIE, a system for automated ontology extension. SOFIE can parse natural language documents, extract ontological facts from them and link the facts into an ontology. SOFIE uses logical reasoning on the existing knowledge and on the new knowledge in order to disambiguate words to their most probable meaning, to reason on the meaning of text patterns and to take into account world knowledge axioms. This allows SOFIE to check the plausibility of hypotheses and to avoid inconsistencies with the ontology. The framework of SOFIE unites the paradigms of pattern matching, word sense disambiguation and ontological reasoning in one unified model. Our experiments show that SOFIE delivers near-perfect output, even from unstructured Internet documents.},
TYPE = {Research Report / Max-Planck-Institut f{\"u}r Informatik},
}
Endnote
%0 Report
%A Suchanek, Fabian
%A Sozio, Mauro
%A Weikum, Gerhard
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
%T SOFIE: A Self-Organizing Framework for Information Extraction :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-668E-6
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2008-5-004
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2008
%P 49 p.
%X This paper presents SOFIE, a system for automated ontology extension.
SOFIE can parse natural language documents, extract ontological facts
from them and link the facts into an ontology. SOFIE uses logical
reasoning on the existing knowledge and on the new knowledge in order
to disambiguate words to their most probable meaning, to reason on the
meaning of text patterns and to take into account world knowledge
axioms. This allows SOFIE to check the plausibility of hypotheses and
to avoid inconsistencies with the ontology. The framework of SOFIE
unites the paradigms of pattern matching, word sense disambiguation
and ontological reasoning in one unified model. Our experiments show
that SOFIE delivers near-perfect output, even from unstructured
Internet documents.
%B Research Report / Max-Planck-Institut für Informatik
Shape Complexity from Image Similarity
D. Wang, A. Belyaev, W. Saleem and H.-P. Seidel
Technical Report, 2008
D. Wang, A. Belyaev, W. Saleem and H.-P. Seidel
Technical Report, 2008
Abstract
We present an approach to automatically compute the complexity of a
given 3D shape. Previous approaches have made use of geometric
and/or topological properties of the 3D shape to compute
complexity. Our approach is based on shape appearance and estimates
the complexity of a given 3D shape according to how 2D views of the
shape diverge from each other. We use similarity among views of the
3D shape as the basis for our complexity computation. Hence our
approach uses claims from psychology that humans mentally represent
3D shapes as organizations of 2D views and, therefore, mimics how
humans gauge shape complexity. Experimental results show that our
approach produces results that are more in agreement with the human
notion of shape complexity than those obtained using previous
approaches.
Export
BibTeX
@techreport{WangBelyaevSaleemSeidel2008,
TITLE = {Shape Complexity from Image Similarity},
AUTHOR = {Wang, Danyi and Belyaev, Alexander and Saleem, Waqar and Seidel, Hans-Peter},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2008-4-002},
NUMBER = {MPI-I-2008-4-002},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2008},
DATE = {2008},
ABSTRACT = {We present an approach to automatically compute the complexity of a given 3D shape. Previous approaches have made use of geometric and/or topological properties of the 3D shape to compute complexity. Our approach is based on shape appearance and estimates the complexity of a given 3D shape according to how 2D views of the shape diverge from each other. We use similarity among views of the 3D shape as the basis for our complexity computation. Hence our approach uses claims from psychology that humans mentally represent 3D shapes as organizations of 2D views and, therefore, mimics how humans gauge shape complexity. Experimental results show that our approach produces results that are more in agreement with the human notion of shape complexity than those obtained using previous approaches.},
TYPE = {Research Report / Max-Planck-Institut f{\"u}r Informatik},
}
Endnote
%0 Report
%A Wang, Danyi
%A Belyaev, Alexander
%A Saleem, Waqar
%A Seidel, Hans-Peter
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T Shape Complexity from Image Similarity :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-66B9-6
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2008-4-002
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2008
%P 28 p.
%X We present an approach to automatically compute the complexity of a
given 3D shape. Previous approaches have made use of geometric
and/or topological properties of the 3D shape to compute
complexity. Our approach is based on shape appearance and estimates
the complexity of a given 3D shape according to how 2D views of the
shape diverge from each other. We use similarity among views of the
3D shape as the basis for our complexity computation. Hence our
approach uses claims from psychology that humans mentally represent
3D shapes as organizations of 2D views and, therefore, mimics how
humans gauge shape complexity. Experimental results show that our
approach produces results that are more in agreement with the human
notion of shape complexity than those obtained using previous
approaches.
%B Research Report / Max-Planck-Institut für Informatik
2007
A Lagrangian relaxation approach for the multiple sequence alignment problem
E. Althaus and S. Canzar
Technical Report, 2007
E. Althaus and S. Canzar
Technical Report, 2007
Abstract
We present a branch-and-bound (bb) algorithm for the multiple
sequence alignment problem (MSA), one of the most important
problems in computational biology. The upper bound at each bb node
is based on a Lagrangian relaxation of an integer linear
programming formulation for MSA. Dualizing certain inequalities,
the Lagrangian subproblem becomes a pairwise alignment problem,
which can be solved efficiently by a dynamic programming approach.
Due to a reformulation w.r.t. additionally introduced variables
prior to relaxation we improve the convergence rate dramatically
while at the same time being able to solve the Lagrangian problem
efficiently. Our experiments show that our implementation,
although preliminary, outperforms all exact algorithms for the
multiple sequence alignment problem.
Export
BibTeX
@techreport{AlthausCanzar2007,
TITLE = {A Lagrangian relaxation approach for the multiple sequence alignment problem},
AUTHOR = {Althaus, Ernst and Canzar, Stefan},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2007-1-002},
NUMBER = {MPI-I-2007-1-002},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2007},
DATE = {2007},
ABSTRACT = {We present a branch-and-bound (bb) algorithm for the multiple sequence alignment problem (MSA), one of the most important problems in computational biology. The upper bound at each bb node is based on a Lagrangian relaxation of an integer linear programming formulation for MSA. Dualizing certain inequalities, the Lagrangian subproblem becomes a pairwise alignment problem, which can be solved efficiently by a dynamic programming approach. Due to a reformulation w.r.t. additionally introduced variables prior to relaxation we improve the convergence rate dramatically while at the same time being able to solve the Lagrangian problem efficiently. Our experiments show that our implementation, although preliminary, outperforms all exact algorithms for the multiple sequence alignment problem.},
TYPE = {Research Report / Max-Planck-Institut f{\"u}r Informatik},
}
Endnote
%0 Report
%A Althaus, Ernst
%A Canzar, Stefan
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T A Lagrangian relaxation approach for the multiple sequence alignment problem :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6707-0
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2007-1-002
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2007
%P 41 p.
%X We present a branch-and-bound (bb) algorithm for the multiple
sequence alignment problem (MSA), one of the most important
problems in computational biology. The upper bound at each bb node
is based on a Lagrangian relaxation of an integer linear
programming formulation for MSA. Dualizing certain inequalities,
the Lagrangian subproblem becomes a pairwise alignment problem,
which can be solved efficiently by a dynamic programming approach.
Due to a reformulation w.r.t. additionally introduced variables
prior to relaxation we improve the convergence rate dramatically
while at the same time being able to solve the Lagrangian problem
efficiently. Our experiments show that our implementation,
although preliminary, outperforms all exact algorithms for the
multiple sequence alignment problem.
%B Research Report / Max-Planck-Institut für Informatik
A nonlinear viseme model for triphone-based speech synthesis
R. Bargmann, V. Blanz and H.-P. Seidel
Technical Report, 2007
R. Bargmann, V. Blanz and H.-P. Seidel
Technical Report, 2007
Abstract
This paper presents a representation of visemes that defines a measure
of similarity between different visemes, and a system of viseme
categories. The representation is derived from a statistical data
analysis of feature points on 3D scans, using Locally Linear
Embedding (LLE). The similarity measure determines which available
viseme and triphones to use to synthesize 3D face animation for a
novel audio file. From a corpus of dynamic recorded 3D mouth
articulation data, our system is able to find the best suited sequence
of triphones over which to interpolate while reusing the
coarticulation information to obtain correct mouth movements over
time. Due to the similarity measure, the system can deal with
relatively small triphone databases and find the most appropriate
candidates. With the selected sequence of database triphones, we can
finally morph along the successive triphones to produce the final
articulation animation.
In an entirely data-driven approach, our automated procedure for
defining viseme categories reproduces the groups of related visemes
that are defined in the phonetics literature.
Export
BibTeX
@techreport{BargmannBlanzSeidel2007,
TITLE = {A nonlinear viseme model for triphone-based speech synthesis},
AUTHOR = {Bargmann, Robert and Blanz, Volker and Seidel, Hans-Peter},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2007-4-003},
NUMBER = {MPI-I-2007-4-003},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2007},
DATE = {2007},
ABSTRACT = {This paper presents a representation of visemes that defines a measure of similarity between different visemes, and a system of viseme categories. The representation is derived from a statistical data analysis of feature points on 3D scans, using Locally Linear Embedding (LLE). The similarity measure determines which available viseme and triphones to use to synthesize 3D face animation for a novel audio file. From a corpus of dynamic recorded 3D mouth articulation data, our system is able to find the best suited sequence of triphones over which to interpolate while reusing the coarticulation information to obtain correct mouth movements over time. Due to the similarity measure, the system can deal with relatively small triphone databases and find the most appropriate candidates. With the selected sequence of database triphones, we can finally morph along the successive triphones to produce the final articulation animation. In an entirely data-driven approach, our automated procedure for defining viseme categories reproduces the groups of related visemes that are defined in the phonetics literature.},
TYPE = {Research Report / Max-Planck-Institut f{\"u}r Informatik},
}
Endnote
%0 Report
%A Bargmann, Robert
%A Blanz, Volker
%A Seidel, Hans-Peter
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T A nonlinear viseme model for triphone-based speech synthesis :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-66DC-7
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2007-4-003
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2007
%P 28 p.
%X This paper presents a representation of visemes that defines a measure
of similarity between different visemes, and a system of viseme
categories. The representation is derived from a statistical data
analysis of feature points on 3D scans, using Locally Linear
Embedding (LLE). The similarity measure determines which available
viseme and triphones to use to synthesize 3D face animation for a
novel audio file. From a corpus of dynamic recorded 3D mouth
articulation data, our system is able to find the best suited sequence
of triphones over which to interpolate while reusing the
coarticulation information to obtain correct mouth movements over
time. Due to the similarity measure, the system can deal with
relatively small triphone databases and find the most appropriate
candidates. With the selected sequence of database triphones, we can
finally morph along the successive triphones to produce the final
articulation animation.
In an entirely data-driven approach, our automated procedure for
defining viseme categories reproduces the groups of related visemes
that are defined in the phonetics literature.
%B Research Report / Max-Planck-Institut für Informatik
Computing Envelopes of Quadrics
E. Berberich and M. Meyerovitch
Technical Report, 2007
E. Berberich and M. Meyerovitch
Technical Report, 2007
Export
BibTeX
@techreport{acs:bm-ceq-07,
TITLE = {Computing Envelopes of Quadrics},
AUTHOR = {Berberich, Eric and Meyerovitch, Michal},
LANGUAGE = {eng},
NUMBER = {ACS-TR-241402-03},
LOCALID = {Local-ID: C12573CC004A8E26-12A6DC64E5449DC9C12573D1004DA0BC-acs:bm-ceq-07},
INSTITUTION = {University of Groningen},
ADDRESS = {Groningen, The Netherlands},
YEAR = {2007},
DATE = {2007},
TYPE = {ACS Technical Reports},
}
Endnote
%0 Report
%A Berberich, Eric
%A Meyerovitch, Michal
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Computing Envelopes of Quadrics :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-1EA4-F
%F EDOC: 356718
%F OTHER: Local-ID: C12573CC004A8E26-12A6DC64E5449DC9C12573D1004DA0BC-acs:bm-ceq-07
%Y University of Groningen
%C Groningen, The Netherlands
%D 2007
%P 5 p.
%B ACS Technical Reports
Linear-Time Reordering in a Sweep-line Algorithm for Algebraic Curves Intersecting in a Common Point
E. Berberich and L. Kettner
Technical Report, 2007
E. Berberich and L. Kettner
Technical Report, 2007
Export
BibTeX
@techreport{bk-reorder-07,
TITLE = {Linear-Time Reordering in a Sweep-line Algorithm for Algebraic Curves Intersecting in a Common Point},
AUTHOR = {Berberich, Eric and Kettner, Lutz},
LANGUAGE = {eng},
ISSN = {0946-011X},
NUMBER = {MPI-I-2007-1-001},
LOCALID = {Local-ID: C12573CC004A8E26-D3347FB7A037EE5CC12573D1004C6833-bk-reorder-07},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2007},
DATE = {2007},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Berberich, Eric
%A Kettner, Lutz
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Linear-Time Reordering in a Sweep-line Algorithm for Algebraic Curves Intersecting in a Common Point :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-1FB9-8
%F EDOC: 356668
%@ 0946-011X
%F OTHER: Local-ID: C12573CC004A8E26-D3347FB7A037EE5CC12573D1004C6833-bk-reorder-07
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2007
%P 20 p.
%B Research Report
Revision of interface specification of algebraic kernel
E. Berberich, M. Hemmer, M. I. Karavelas and M. Teillaud
Technical Report, 2007
E. Berberich, M. Hemmer, M. I. Karavelas and M. Teillaud
Technical Report, 2007
Export
BibTeX
@techreport{acs:bhkt-risak-06,
TITLE = {Revision of interface specification of algebraic kernel},
AUTHOR = {Berberich, Eric and Hemmer, Michael and Karavelas, Menelaos I. and Teillaud, Monique},
LANGUAGE = {eng},
LOCALID = {Local-ID: C12573CC004A8E26-1F31C7FA352D83DDC12573D1004F257E-acs:bhkt-risak-06},
INSTITUTION = {University of Groningen},
ADDRESS = {Groningen, The Netherlands},
YEAR = {2007},
DATE = {2007},
TYPE = {ACS Technical Reports},
}
Endnote
%0 Report
%A Berberich, Eric
%A Hemmer, Michael
%A Karavelas, Menelaos I.
%A Teillaud, Monique
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
External Organizations
External Organizations
%T Revision of interface specification of algebraic kernel :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-208F-0
%F EDOC: 356661
%F OTHER: Local-ID: C12573CC004A8E26-1F31C7FA352D83DDC12573D1004F257E-acs:bhkt-risak-06
%F OTHER: ACS-TR-243301-01
%Y University of Groningen
%C Groningen, The Netherlands
%D 2007
%P 100 p.
%B ACS Technical Reports
Sweeping and maintaining two-dimensional arrangements on quadrics
E. Berberich, E. Fogel, D. Halperin, K. Mehlhorn and R. Wein
Technical Report, 2007
E. Berberich, E. Fogel, D. Halperin, K. Mehlhorn and R. Wein
Technical Report, 2007
Export
BibTeX
@techreport{acs:bfhmw-smtaoq-07,
TITLE = {Sweeping and maintaining two-dimensional arrangements on quadrics},
AUTHOR = {Berberich, Eric and Fogel, Efi and Halperin, Dan and Mehlhorn, Kurt and Wein, Ron},
LANGUAGE = {eng},
NUMBER = {ACS-TR-241402-02},
LOCALID = {Local-ID: C12573CC004A8E26-A2D9FC191F294C4BC12573D1004D4FA3-acs:bfhmw-smtaoq-07},
INSTITUTION = {University of Groningen},
ADDRESS = {Groningen, The Netherlands},
YEAR = {2007},
DATE = {2007},
TYPE = {ACS Technical Reports},
}
Endnote
%0 Report
%A Berberich, Eric
%A Fogel, Efi
%A Halperin, Dan
%A Mehlhorn, Kurt
%A Wein, Ron
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Sweeping and maintaining two-dimensional arrangements on quadrics :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-20E3-1
%F EDOC: 356692
%F OTHER: Local-ID: C12573CC004A8E26-A2D9FC191F294C4BC12573D1004D4FA3-acs:bfhmw-smtaoq-07
%Y University of Groningen
%C Groningen, The Netherlands
%D 2007
%P 10 p.
%B ACS Technical Reports
Definition of the 3D Quadrical Kernel Content
E. Berberich and M. Hemmer
Technical Report, 2007
E. Berberich and M. Hemmer
Technical Report, 2007
Export
BibTeX
@techreport{acs:bh-dtqkc-07,
TITLE = {Definition of the {3D} Quadrical Kernel Content},
AUTHOR = {Berberich, Eric and Hemmer, Michael},
LANGUAGE = {eng},
NUMBER = {ACS-TR-243302-02},
LOCALID = {Local-ID: C12573CC004A8E26-2FF567066FB82A5FC12573D1004DDD73-acs:bh-dtqkc-07},
INSTITUTION = {University of Groningen},
ADDRESS = {Groningen, The Netherlands},
YEAR = {2007},
DATE = {2007},
TYPE = {ACS Technical Reports},
}
Endnote
%0 Report
%A Berberich, Eric
%A Hemmer, Michael
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Definition of the 3D Quadrical Kernel Content :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-1ED4-1
%F EDOC: 356735
%F OTHER: Local-ID: C12573CC004A8E26-2FF567066FB82A5FC12573D1004DDD73-acs:bh-dtqkc-07
%Y University of Groningen
%C Groningen, The Netherlands
%D 2007
%P 25 p.
%B ACS Technical Reports
Exact Computation of Arrangements of Rotated Conics
E. Berberich, M. Caroli and N. Wolpert
Technical Report, 2007
E. Berberich, M. Caroli and N. Wolpert
Technical Report, 2007
Export
BibTeX
@techreport{acs:bcw-carc-07,
TITLE = {Exact Computation of Arrangements of Rotated Conics},
AUTHOR = {Berberich, Eric and Caroli, Manuel and Wolpert, Nicola},
LANGUAGE = {eng},
NUMBER = {ACS-TR-123104-03},
LOCALID = {Local-ID: C12573CC004A8E26-1EB177EFAA801139C12573D1004D0246-acs:bcw-carc-07},
INSTITUTION = {University of Groningen},
ADDRESS = {Groningen, The Netherlands},
YEAR = {2007},
DATE = {2007},
TYPE = {ACS Technical Reports},
}
Endnote
%0 Report
%A Berberich, Eric
%A Caroli, Manuel
%A Wolpert, Nicola
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Exact Computation of Arrangements of Rotated Conics :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-1F20-F
%F EDOC: 356666
%F OTHER: Local-ID: C12573CC004A8E26-1EB177EFAA801139C12573D1004D0246-acs:bcw-carc-07
%Y University of Groningen
%C Groningen, The Netherlands
%D 2007
%P 5 p.
%B ACS Technical Reports
Updated Website to include Benchmark Instances for Arrangements of Quadrics and Planar Algebraic Curves
E. Berberich, E. Fogel and A. Meyer
Technical Report, 2007
E. Berberich, E. Fogel and A. Meyer
Technical Report, 2007
Export
BibTeX
@techreport{acs:bfm-uwibaqpac-07,
TITLE = {Updated Website to include Benchmark Instances for Arrangements of Quadrics and Planar Algebraic Curves},
AUTHOR = {Berberich, Eric and Fogel, Efi and Meyer, Andreas},
LANGUAGE = {eng},
NUMBER = {ACS-TR-243305-01},
LOCALID = {Local-ID: C12573CC004A8E26-DEDF6F20E463424CC12573D1004E1823-acs:bfm-uwibaqpac-07},
INSTITUTION = {University of Groningen},
ADDRESS = {Groningen, The Netherlands},
YEAR = {2007},
DATE = {2007},
TYPE = {ACS Technical Reports},
}
Endnote
%0 Report
%A Berberich, Eric
%A Fogel, Efi
%A Meyer, Andreas
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Updated Website to include Benchmark Instances for Arrangements of Quadrics and Planar Algebraic Curves :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-2128-E
%F EDOC: 356664
%F OTHER: Local-ID: C12573CC004A8E26-DEDF6F20E463424CC12573D1004E1823-acs:bfm-uwibaqpac-07
%Y University of Groningen
%C Groningen, The Netherlands
%D 2007
%P 5 p.
%B ACS Technical Reports
A Time Machine for Text Search
K. Berberich, S. Bedathur, T. Neumann and G. Weikum
Technical Report, 2007
K. Berberich, S. Bedathur, T. Neumann and G. Weikum
Technical Report, 2007
Export
BibTeX
@techreport{TechReportBBNW-2007,
TITLE = {A Time Machine for Text Search},
AUTHOR = {Berberich, Klaus and Bedathur, Srikanta and Neumann, Thomas and Weikum, Gerhard},
LANGUAGE = {eng},
ISSN = {0946-011X},
NUMBER = {MPI-I-2007-5-002},
LOCALID = {Local-ID: C12573CC004A8E26-D444201EBAA5F95BC125731E00458A41-TechReportBBNW-2007},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2007},
DATE = {2007},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Berberich, Klaus
%A Bedathur, Srikanta
%A Neumann, Thomas
%A Weikum, Gerhard
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
%T A Time Machine for Text Search :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-1E49-E
%F EDOC: 356443
%@ 0946-011X
%F OTHER: Local-ID: C12573CC004A8E26-D444201EBAA5F95BC125731E00458A41-TechReportBBNW-2007
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2007
%P 39 p.
%B Research Report
HistoPyramids in Iso-Surface Extraction
C. Dyken, G. Ziegler, C. Theobalt and H.-P. Seidel
Technical Report, 2007
C. Dyken, G. Ziegler, C. Theobalt and H.-P. Seidel
Technical Report, 2007
Abstract
We present an implementation approach to high-speed Marching
Cubes, running entirely on the Graphics Processing Unit of Shader Model
3.0 and 4.0 graphics hardware. Our approach is based on the interpretation
of Marching Cubes as a stream compaction and expansion process, and is
implemented using the HistoPyramid, a hierarchical data structure
previously only used in GPU data compaction. We extend the HistoPyramid
structure to allow for stream expansion, which provides an efficient
method for generating geometry directly on the GPU, even on Shader Model
3.0 hardware. Currently, our algorithm outperforms all other known
GPU-based iso-surface extraction algorithms. We describe our
implementation and present a performance analysis on several generations
of graphics hardware.
Export
BibTeX
@techreport{DykenZieglerTheobaltSeidel2007,
TITLE = {Histo{P}yramids in Iso-Surface Extraction},
AUTHOR = {Dyken, Christopher and Ziegler, Gernot and Theobalt, Christian and Seidel, Hans-Peter},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2007-4-006},
NUMBER = {MPI-I-2007-4-006},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2007},
DATE = {2007},
ABSTRACT = {We present an implementation approach to high-speed Marching Cubes, running entirely on the Graphics Processing Unit of Shader Model 3.0 and 4.0 graphics hardware. Our approach is based on the interpretation of Marching Cubes as a stream compaction and expansion process, and is implemented using the HistoPyramid, a hierarchical data structure previously only used in GPU data compaction. We extend the HistoPyramid structure to allow for stream expansion, which provides an efficient method for generating geometry directly on the GPU, even on Shader Model 3.0 hardware. Currently, our algorithm outperforms all other known GPU-based iso-surface extraction algorithms. We describe our implementation and present a performance analysis on several generations of graphics hardware.},
TYPE = {Research Report / Max-Planck-Institut f{\"u}r Informatik},
}
Endnote
%0 Report
%A Dyken, Christopher
%A Ziegler, Gernot
%A Theobalt, Christian
%A Seidel, Hans-Peter
%+ External Organizations
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T HistoPyramids in Iso-Surface Extraction :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-66D3-A
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2007-4-006
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2007
%P 16 p.
%X We present an implementation approach to high-speed Marching
Cubes, running entirely on the Graphics Processing Unit of Shader Model
3.0 and 4.0 graphics hardware. Our approach is based on the interpretation
of Marching Cubes as a stream compaction and expansion process, and is
implemented using the HistoPyramid, a hierarchical data structure
previously only used in GPU data compaction. We extend the HistoPyramid
structure to allow for stream expansion, which provides an efficient
method for generating geometry directly on the GPU, even on Shader Model
3.0 hardware. Currently, our algorithm outperforms all other known
GPU-based iso-surface extraction algorithms. We describe our
implementation and present a performance analysis on several generations
of graphics hardware.
%B Research Report / Max-Planck-Institut für Informatik
Snap Rounding of Bézier Curves
A. Eigenwillig, L. Kettner and N. Wolpert
Technical Report, 2007
A. Eigenwillig, L. Kettner and N. Wolpert
Technical Report, 2007
Export
BibTeX
@techreport{ACS-TR-121108-01,
TITLE = {Snap Rounding of B{\'e}zier Curves},
AUTHOR = {Eigenwillig, Arno and Kettner, Lutz and Wolpert, Nicola},
LANGUAGE = {eng},
NUMBER = {MPI-I-2006-1-005},
LOCALID = {Local-ID: C12573CC004A8E26-13E19171EEC8D5E0C12572A0005C02F6-ACS-TR-121108-01},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken, Germany},
YEAR = {2007},
DATE = {2007},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Eigenwillig, Arno
%A Kettner, Lutz
%A Wolpert, Nicola
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Snap Rounding of Bézier Curves :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-20B9-0
%F EDOC: 356760
%F OTHER: Local-ID: C12573CC004A8E26-13E19171EEC8D5E0C12572A0005C02F6-ACS-TR-121108-01
%F OTHER: ACS-TR-121108-01
%Y Max-Planck-Institut für Informatik
%C Saarbrücken, Germany
%D 2007
%P 19 p.
%B Research Report
Global stochastic optimization for robust and accurate human motion capture
J. Gall, T. Brox, B. Rosenhahn and H.-P. Seidel
Technical Report, 2007
J. Gall, T. Brox, B. Rosenhahn and H.-P. Seidel
Technical Report, 2007
Abstract
Tracking of human motion in video is usually tackled either
by local optimization or filtering approaches. While
local optimization offers accurate estimates but often loses
track due to local optima, particle filtering can recover from
errors at the expense of a poor accuracy due to overestimation
of noise. In this paper, we propose to embed global
stochastic optimization in a tracking framework. This new
optimization technique exhibits both the robustness of filtering
strategies and a remarkable accuracy. We apply the
optimization to an energy function that relies on silhouettes
and color, as well as some prior information on physical
constraints. This framework provides a general solution to
markerless human motion capture since neither excessive
preprocessing nor strong assumptions except for a 3D model
are required. The optimization provides initialization and
accurate tracking even in case of low contrast and challenging
illumination. Our experimental evaluation demonstrates
the large improvements obtained with this technique.
It comprises a quantitative error analysis comparing the
approach with local optimization, particle filtering, and a
heuristic based on particle filtering.
Export
BibTeX
@techreport{GallBroxRosenhahnSeidel2008,
TITLE = {Global stochastic optimization for robust and accurate human motion capture},
AUTHOR = {Gall, J{\"u}rgen and Brox, Thomas and Rosenhahn, Bodo and Seidel, Hans-Peter},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2007-4-008},
NUMBER = {MPI-I-2007-4-008},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2007},
DATE = {2007},
ABSTRACT = {Tracking of human motion in video is usually tackled either by local optimization or filtering approaches. While local optimization offers accurate estimates but often loses track due to local optima, particle filtering can recover from errors at the expense of a poor accuracy due to overestimation of noise. In this paper, we propose to embed global stochastic optimization in a tracking framework. This new optimization technique exhibits both the robustness of filtering strategies and a remarkable accuracy. We apply the optimization to an energy function that relies on silhouettes and color, as well as some prior information on physical constraints. This framework provides a general solution to markerless human motion capture since neither excessive preprocessing nor strong assumptions except for a 3D model are required. The optimization provides initialization and accurate tracking even in case of low contrast and challenging illumination. Our experimental evaluation demonstrates the large improvements obtained with this technique. It comprises a quantitative error analysis comparing the approach with local optimization, particle filtering, and a heuristic based on particle filtering.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Gall, Jürgen
%A Brox, Thomas
%A Rosenhahn, Bodo
%A Seidel, Hans-Peter
%+ Computer Graphics, MPI for Informatics, Max Planck Society
External Organizations
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T Global stochastic optimization for robust and accurate human motion capture :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-66CE-7
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2007-4-008
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2007
%P 28 p.
%X Tracking of human motion in video is usually tackled either
by local optimization or filtering approaches. While
local optimization offers accurate estimates, it often loses
track due to local optima; particle filtering can recover from
errors at the expense of poor accuracy due to overestimation
of noise. In this paper, we propose to embed global
stochastic optimization in a tracking framework. This new
optimization technique exhibits both the robustness of filtering
strategies and a remarkable accuracy. We apply the
optimization to an energy function that relies on silhouettes
and color, as well as some prior information on physical
constraints. This framework provides a general solution to
markerless human motion capture since neither excessive
preprocessing nor strong assumptions except for a 3D model
are required. The optimization provides initialization and
accurate tracking even in case of low contrast and challenging
illumination. Our experimental evaluation demonstrates
the large improvements obtained with this technique.
It comprises a quantitative error analysis comparing the
approach with local optimization, particle filtering, and a
heuristic based on particle filtering.
%B Research Report / Max-Planck-Institut für Informatik
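The global stochastic optimization embedded in the tracker can be sketched as a particle-based annealing search: particles are weighted by an increasingly peaked Boltzmann factor of the energy, resampled, and diffused with shrinking noise. This is a hedged toy in Python — the particle counts, schedules, and the quadratic stand-in energy are illustrative assumptions, not the report's actual silhouette-and-color energy or its exact algorithm:

```python
import math
import random

def anneal_optimize(energy, init, n_particles=150, n_layers=25, sigma0=1.0, seed=0):
    """Toy particle-based annealing search: weight particles by
    exp(-beta * energy) with beta growing per layer, resample, and
    diffuse with shrinking noise."""
    rng = random.Random(seed)
    particles = [[x + sigma0 * rng.gauss(0.0, 1.0) for x in init]
                 for _ in range(n_particles)]
    for layer in range(n_layers):
        beta = 0.5 * (layer + 1)                       # annealing schedule
        energies = [energy(p) for p in particles]
        e_min = min(energies)
        weights = [math.exp(-beta * (e - e_min)) for e in energies]
        chosen = rng.choices(particles, weights=weights, k=n_particles)
        sigma = sigma0 * 0.85 ** layer                 # shrinking diffusion
        particles = [[x + sigma * rng.gauss(0.0, 1.0) for x in p]
                     for p in chosen]
    return min(particles, key=energy)

# Per-frame use in a tracker: seed the search with the previous frame's pose
# estimate (here a made-up 2D quadratic energy with minimum at (2, -1)).
pose = anneal_optimize(lambda p: (p[0] - 2.0) ** 2 + (p[1] + 1.0) ** 2,
                       init=[0.0, 0.0])
```

Because each frame restarts the global search from the previous estimate, the tracker can escape the local optima that defeat pure local optimization.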
Clustered stochastic optimization for object recognition and pose estimation
J. Gall, B. Rosenhahn and H.-P. Seidel
Technical Report, 2007
Abstract
We present an approach for estimating the 3D position and, in the case of
articulated objects, also the joint configuration from segmented 2D
images. The pose estimation without initial information is a challenging
optimization problem in a high dimensional space and is essential for
texture acquisition and initialization of model-based tracking
algorithms. Our method is able to recognize the correct object in the
case of multiple objects and estimates its pose with high accuracy.
The key component is a particle-based global optimization method that
converges to the global minimum similar to simulated annealing. After
detecting potential bounded subsets of the search space, the particles
are divided into clusters and migrate to the most attractive cluster as
the time increases. The performance of our approach is verified by means
of real scenes and a quantitative error analysis for image distortions.
Our experiments include rigid bodies and full human bodies.
Export
BibTeX
@techreport{GallRosenhahnSeidel2007,
TITLE = {Clustered stochastic optimization for object recognition and pose estimation},
AUTHOR = {Gall, J{\"u}rgen and Rosenhahn, Bodo and Seidel, Hans-Peter},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2007-4-001},
NUMBER = {MPI-I-2007-4-001},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2007},
DATE = {2007},
ABSTRACT = {We present an approach for estimating the 3D position and, in the case of articulated objects, also the joint configuration from segmented 2D images. The pose estimation without initial information is a challenging optimization problem in a high dimensional space and is essential for texture acquisition and initialization of model-based tracking algorithms. Our method is able to recognize the correct object in the case of multiple objects and estimates its pose with high accuracy. The key component is a particle-based global optimization method that converges to the global minimum similar to simulated annealing. After detecting potential bounded subsets of the search space, the particles are divided into clusters and migrate to the most attractive cluster as the time increases. The performance of our approach is verified by means of real scenes and a quantitative error analysis for image distortions. Our experiments include rigid bodies and full human bodies.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Gall, Jürgen
%A Rosenhahn, Bodo
%A Seidel, Hans-Peter
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T Clustered stochastic optimization for object recognition and pose estimation :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-66E5-2
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2007-4-001
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2007
%P 23 p.
%X We present an approach for estimating the 3D position and, in the case of
articulated objects, also the joint configuration from segmented 2D
images. The pose estimation without initial information is a challenging
optimization problem in a high dimensional space and is essential for
texture acquisition and initialization of model-based tracking
algorithms. Our method is able to recognize the correct object in the
case of multiple objects and estimates its pose with high accuracy.
The key component is a particle-based global optimization method that
converges to the global minimum similar to simulated annealing. After
detecting potential bounded subsets of the search space, the particles
are divided into clusters and migrate to the most attractive cluster as
the time increases. The performance of our approach is verified by means
of real scenes and a quantitative error analysis for image distortions.
Our experiments include rigid bodies and full human bodies.
%B Research Report / Max-Planck-Institut für Informatik
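The cluster-and-migrate idea from the abstract — particles divided into clusters around candidate regions, with particle mass migrating to the most attractive cluster over time — can be sketched in one dimension. Everything concrete here (the 1D energy, the seeds, the schedules) is a hypothetical stand-in, not the report's algorithm:

```python
import math
import random

def clustered_search(energy, seeds, n_particles=120, n_rounds=20, sigma0=0.8, seed=1):
    """Toy clustered stochastic optimization: each detected candidate region
    seeds a cluster; particles resample within clusters, and the share of
    particles per cluster grows for the cluster with the lowest energy."""
    rng = random.Random(seed)
    clusters = [[s + sigma0 * rng.gauss(0.0, 1.0)
                 for _ in range(n_particles // len(seeds))] for s in seeds]
    for r in range(n_rounds):
        beta = 0.6 * (r + 1)                            # annealing schedule
        best = [min(energy(x) for x in c) for c in clusters]
        b_min = min(best)
        attract = [math.exp(-beta * (b - b_min)) for b in best]
        total = sum(attract)
        sigma = sigma0 * 0.85 ** r
        new_clusters = []
        for c, a in zip(clusters, attract):
            n = max(2, round(n_particles * a / total))  # migration step
            elite = sorted(c, key=energy)[:max(2, len(c) // 3)]
            new_clusters.append([rng.choice(elite) + sigma * rng.gauss(0.0, 1.0)
                                 for _ in range(n)])
        clusters = new_clusters
    return min((x for c in clusters for x in c), key=energy)

# Two pose hypotheses; the global optimum of this made-up energy sits at x = 3.
found = clustered_search(lambda x: min((x - 3.0) ** 2, (x + 1.0) ** 2 + 0.5),
                         seeds=[-1.0, 3.0])
```

The wrong hypothesis (near x = -1) starves of particles as the annealing exponent grows, which is the recognition behavior the abstract describes for multiple candidate objects.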
Interacting and Annealing Particle Filters: Mathematics and a Recipe for Applications
J. Gall, J. Potthoff, C. Schnörr, B. Rosenhahn and H.-P. Seidel
Technical Report, 2007
Abstract
Interacting and annealing are two powerful strategies that are applied
in different areas of stochastic modelling and data analysis.
Interacting particle systems approximate a distribution of interest by a
finite number of particles where the particles interact between the time
steps. In computer vision, they are commonly known as particle filters.
Simulated annealing, on the other hand, is a global optimization method
derived from statistical mechanics. A recent heuristic approach to fuse
these two techniques for motion capturing has become known as annealed
particle filter. In order to analyze these techniques, we rigorously
derive in this paper two algorithms with annealing properties based on
the mathematical theory of interacting particle systems. Convergence
results and sufficient parameter restrictions enable us to point out
limitations of the annealed particle filter. Moreover, we evaluate the
impact of the parameters on the performance in various experiments,
including the tracking of articulated bodies from noisy measurements.
Our results provide general guidance on suitable parameter choices for
different applications.
Export
BibTeX
@techreport{GallPotthoffRosenhahnSchnoerrSeidel2006,
TITLE = {Interacting and Annealing Particle Filters: Mathematics and a Recipe for Applications},
AUTHOR = {Gall, J{\"u}rgen and Potthoff, J{\"u}rgen and Schn{\"o}rr, Christoph and Rosenhahn, Bodo and Seidel, Hans-Peter},
LANGUAGE = {eng},
NUMBER = {MPI-I-2006-4-009},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2007},
DATE = {2007},
ABSTRACT = {Interacting and annealing are two powerful strategies that are applied in different areas of stochastic modelling and data analysis. Interacting particle systems approximate a distribution of interest by a finite number of particles where the particles interact between the time steps. In computer vision, they are commonly known as particle filters. Simulated annealing, on the other hand, is a global optimization method derived from statistical mechanics. A recent heuristic approach to fuse these two techniques for motion capturing has become known as annealed particle filter. In order to analyze these techniques, we rigorously derive in this paper two algorithms with annealing properties based on the mathematical theory of interacting particle systems. Convergence results and sufficient parameter restrictions enable us to point out limitations of the annealed particle filter. Moreover, we evaluate the impact of the parameters on the performance in various experiments, including the tracking of articulated bodies from noisy measurements. Our results provide a general guidance on suitable parameter choices for different applications.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Gall, Jürgen
%A Potthoff, Jürgen
%A Schnörr, Christoph
%A Rosenhahn, Bodo
%A Seidel, Hans-Peter
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T Interacting and Annealing Particle Filters: Mathematics and a Recipe for Applications :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0027-13C7-D
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2007
%Z Review method: peer-reviewed
%X Interacting and annealing are two powerful strategies that are applied
in different areas of stochastic modelling and data analysis.
Interacting particle systems approximate a distribution of interest by a
finite number of particles where the particles interact between the time
steps. In computer vision, they are commonly known as particle filters.
Simulated annealing, on the other hand, is a global optimization method
derived from statistical mechanics. A recent heuristic approach to fuse
these two techniques for motion capturing has become known as annealed
particle filter. In order to analyze these techniques, we rigorously
derive in this paper two algorithms with annealing properties based on
the mathematical theory of interacting particle systems. Convergence
results and sufficient parameter restrictions enable us to point out
limitations of the annealed particle filter. Moreover, we evaluate the
impact of the parameters on the performance in various experiments,
including the tracking of articulated bodies from noisy measurements.
Our results provide general guidance on suitable parameter choices for
different applications.
%B Research Report
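One parameter family the report analyzes is the annealing weighting. A standard diagnostic used to tune annealed particle filters is the particle survival rate (the effective-sample-size fraction after weighting); a minimal sketch, with made-up exponential energies as input:

```python
import math
import random

def survival_rate(energies, beta):
    """Fraction of effectively surviving particles after annealed weighting
    w_i = exp(-beta * e_i): alpha = (sum w)^2 / (n * sum w^2).
    Values near 1 mean weak selection; values near 1/n mean degeneracy."""
    w = [math.exp(-beta * e) for e in energies]
    s = sum(w)
    s2 = sum(x * x for x in w)
    return s * s / (len(w) * s2)

rng = random.Random(0)
energies = [rng.expovariate(1.0) for _ in range(500)]
mild = survival_rate(energies, 0.1)    # close to 1: little selection pressure
harsh = survival_rate(energies, 10.0)  # much smaller: heavy selection
```

Monitoring this quantity per annealing layer is one practical way to act on the report's point that parameter choices strongly affect tracking performance.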
LFthreads: a lock-free thread library
A. Gidenstam and M. Papatriantafilou
Technical Report, 2007
Abstract
This paper presents the synchronization in LFthreads, a thread library
entirely based on lock-free methods, i.e. no
spin-locks or similar synchronization mechanisms are employed in the
implementation of the multithreading.
Since lock-freedom is highly desirable in multiprocessors/multicores
due to its advantages in parallelism, fault-tolerance,
convoy-avoidance and more, there is an increased demand in lock-free
methods in parallel applications, hence also in multiprocessor/multicore
system services. This is why a lock-free
multithreading library is important. To the best of our knowledge
LFthreads is the first thread library that provides a lock-free
implementation
of blocking synchronization primitives for application threads.
Lock-free implementation of objects with blocking semantics may sound like
a contradictory goal. However, such objects have benefits:
e.g. library operations that block and unblock threads on the same
synchronization object can make progress in parallel while maintaining
the desired thread-level semantics
and without having to wait for any "slow" operations among them.
Besides, as no spin-locks or similar synchronization mechanisms are employed,
processors are always able to do useful work. As a consequence,
applications, too, can enjoy enhanced parallelism and fault-tolerance.
The synchronization in LFthreads is achieved by a new method, which
we call responsibility hand-off (RHO), that does not need any
special kernel support.
Export
BibTeX
@techreport{GidenstamPapatriantafilou2007,
TITLE = {{LFthreads}: a lock-free thread library},
AUTHOR = {Gidenstam, Anders and Papatriantafilou, Marina},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2007-1-003},
NUMBER = {MPI-I-2007-1-003},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2007},
DATE = {2007},
ABSTRACT = {This paper presents the synchronization in LFthreads, a thread library entirely based on lock-free methods, i.e. no spin-locks or similar synchronization mechanisms are employed in the implementation of the multithreading. Since lock-freedom is highly desirable in multiprocessors/multicores due to its advantages in parallelism, fault-tolerance, convoy-avoidance and more, there is an increased demand in lock-free methods in parallel applications, hence also in multiprocessor/multicore system services. This is why a lock-free multithreading library is important. To the best of our knowledge LFthreads is the first thread library that provides a lock-free implementation of blocking synchronization primitives for application threads. Lock-free implementation of objects with blocking semantics may sound like a contradicting goal. However, such objects have benefits: e.g. library operations that block and unblock threads on the same synchronization object can make progress in parallel while maintaining the desired thread-level semantics and without having to wait for any ``slow'' operations among them. Besides, as no spin-locks or similar synchronization mechanisms are employed, processors are always able to do useful work. As a consequence, applications, too, can enjoy enhanced parallelism and fault-tolerance. The synchronization in LFthreads is achieved by a new method, which we call responsibility hand-off (RHO), that does not need any special kernel support.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Gidenstam, Anders
%A Papatriantafilou, Marina
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T LFthreads: a lock-free thread library :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-66F8-7
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2007-1-003
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2007
%P 36 p.
%X This paper presents the synchronization in LFthreads, a thread library
entirely based on lock-free methods, i.e. no
spin-locks or similar synchronization mechanisms are employed in the
implementation of the multithreading.
Since lock-freedom is highly desirable in multiprocessors/multicores
due to its advantages in parallelism, fault-tolerance,
convoy-avoidance and more, there is an increased demand in lock-free
methods in parallel applications, hence also in multiprocessor/multicore
system services. This is why a lock-free
multithreading library is important. To the best of our knowledge
LFthreads is the first thread library that provides a lock-free
implementation
of blocking synchronization primitives for application threads.
Lock-free implementation of objects with blocking semantics may sound like
a contradictory goal. However, such objects have benefits:
e.g. library operations that block and unblock threads on the same
synchronization object can make progress in parallel while maintaining
the desired thread-level semantics
and without having to wait for any "slow" operations among them.
Besides, as no spin-locks or similar synchronization mechanisms are employed,
processors are always able to do useful work. As a consequence,
applications, too, can enjoy enhanced parallelism and fault-tolerance.
The synchronization in LFthreads is achieved by a new method, which
we call responsibility hand-off (RHO), that does not need any
special kernel support.
%B Research Report / Max-Planck-Institut für Informatik
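The responsibility hand-off (RHO) flavor — an unblocking operation passes ownership directly to a blocked waiter instead of freeing the object and letting threads re-race — can be illustrated in Python. Caveat, loudly: this is NOT the report's lock-free algorithm. CPython exposes no user-level compare-and-swap, so a short internal lock stands in for the atomic CAS on the mutex word; only the hand-off semantics are illustrated:

```python
import threading
from collections import deque

class HandOffMutex:
    """Toy mutex with blocking semantics and direct ownership hand-off.
    A threading.Lock stands in for a hardware CAS word (an assumption for
    illustration); the real LFthreads construction is lock-free."""

    def __init__(self):
        self._cas = threading.Lock()     # stand-in for an atomic CAS word
        self._held = False
        self._waiters = deque()

    def acquire(self):
        with self._cas:
            if not self._held:
                self._held = True        # uncontended fast path
                return
            wakeup = threading.Event()
            self._waiters.append(wakeup)
        wakeup.wait()                    # woken only once ownership is handed off

    def release(self):
        with self._cas:
            if self._waiters:
                # Responsibility hand-off: _held stays True; the woken
                # waiter now owns the mutex without re-contending.
                self._waiters.popleft().set()
            else:
                self._held = False

# Usage: four threads incrementing a shared counter under the mutex.
mutex, counter = HandOffMutex(), [0]

def worker():
    for _ in range(1000):
        mutex.acquire()
        counter[0] += 1
        mutex.release()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The hand-off in release() is the part that mirrors RHO: responsibility for the blocked thread's progress transfers to whichever operation is in a position to complete it.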
Global Illumination using Photon Ray Splatting
R. Herzog, V. Havran, S. Kinuwaki, K. Myszkowski and H.-P. Seidel
Technical Report, 2007
Abstract
We present a novel framework for efficiently computing the indirect
illumination in diffuse and moderately glossy scenes using density estimation
techniques.
A vast majority of existing global illumination approaches either quickly
computes an approximate solution, which may not be adequate for previews, or
performs a much more time-consuming computation to obtain high-quality results
for the indirect illumination. Our method improves photon density estimation,
which is an approximate solution, and leads to significantly better visual
quality in particular for complex geometry, while only slightly increasing the
computation time. We perform direct splatting of photon rays, which allows us
to use simpler search data structures. Our novel lighting computation is
derived from basic radiometric theory and requires only small changes to
existing photon splatting approaches.
Since our density estimation is carried out in ray space rather than on
surfaces, as in the commonly used photon mapping algorithm, the results are
more robust against geometrically incurred sources of bias. This holds also in
combination with final gathering where photon mapping often overestimates the
illumination near concave geometric features. In addition, we show that our
splatting technique can be extended to handle moderately glossy surfaces and
can be combined with traditional irradiance caching for sparse sampling and
filtering in image space.
Export
BibTeX
@techreport{HerzogReport2007,
TITLE = {Global Illumination using Photon Ray Splatting},
AUTHOR = {Herzog, Robert and Havran, Vlastimil and Kinuwaki, Shinichi and Myszkowski, Karol and Seidel, Hans-Peter},
LANGUAGE = {eng},
NUMBER = {MPI-I-2007-4-007},
LOCALID = {Local-ID: C12573CC004A8E26-88919E23BF524D6AC12573C4005B8D41-HerzogReport2007},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken, Germany},
YEAR = {2007},
DATE = {2007},
ABSTRACT = {We present a novel framework for efficiently computing the indirect illumination in diffuse and moderately glossy scenes using density estimation techniques. A vast majority of existing global illumination approaches either quickly computes an approximate solution, which may not be adequate for previews, or performs a much more time-consuming computation to obtain high-quality results for the indirect illumination. Our method improves photon density estimation, which is an approximate solution, and leads to significantly better visual quality in particular for complex geometry, while only slightly increasing the computation time. We perform direct splatting of photon rays, which allows us to use simpler search data structures. Our novel lighting computation is derived from basic radiometric theory and requires only small changes to existing photon splatting approaches. Since our density estimation is carried out in ray space rather than on surfaces, as in the commonly used photon mapping algorithm, the results are more robust against geometrically incurred sources of bias. This holds also in combination with final gathering where photon mapping often overestimates the illumination near concave geometric features. In addition, we show that our splatting technique can be extended to handle moderately glossy surfaces and can be combined with traditional irradiance caching for sparse sampling and filtering in image space.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Herzog, Robert
%A Havran, Vlastimil
%A Kinuwaki, Shinichi
%A Myszkowski, Karol
%A Seidel, Hans-Peter
%+ Computer Graphics, MPI for Informatics, Max Planck Society
International Max Planck Research School, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T Global Illumination using Photon Ray Splatting :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-1F57-6
%F EDOC: 356502
%F OTHER: Local-ID: C12573CC004A8E26-88919E23BF524D6AC12573C4005B8D41-HerzogReport2007
%Y Max-Planck-Institut für Informatik
%C Saarbrücken, Germany
%D 2007
%P 66 p.
%X We present a novel framework for efficiently computing the indirect
illumination in diffuse and moderately glossy scenes using density estimation
techniques.
A vast majority of existing global illumination approaches either quickly
computes an approximate solution, which may not be adequate for previews, or
performs a much more time-consuming computation to obtain high-quality results
for the indirect illumination. Our method improves photon density estimation,
which is an approximate solution, and leads to significantly better visual
quality in particular for complex geometry, while only slightly increasing the
computation time. We perform direct splatting of photon rays, which allows us
to use simpler search data structures. Our novel lighting computation is
derived from basic radiometric theory and requires only small changes to
existing photon splatting approaches.
Since our density estimation is carried out in ray space rather than on
surfaces, as in the commonly used photon mapping algorithm, the results are
more robust against geometrically incurred sources of bias. This holds also in
combination with final gathering where photon mapping often overestimates the
illumination near concave geometric features. In addition, we show that our
splatting technique can be extended to handle moderately glossy surfaces and
can be combined with traditional irradiance caching for sparse sampling and
filtering in image space.
%B Research Report
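The density-estimation step behind splatting can be sketched as kernel splats into an image grid. This is a deliberate simplification under stated assumptions: it splats photon hit points with an Epanechnikov kernel, whereas the report splats photon rays in ray space with radiometrically derived weights:

```python
import math

def splat_photons(photons, width, height, radius=2.0):
    """Toy density-estimation splat: each photon's power is spread over the
    grid with a 2D Epanechnikov kernel of bandwidth `radius`, normalized so
    the kernel integrates to 1 over its support disk (energy conservation)."""
    img = [[0.0] * width for _ in range(height)]
    r2 = radius * radius
    norm = 2.0 / (math.pi * r2)
    for px, py, power in photons:
        for y in range(max(0, int(py - radius)), min(height, int(py + radius) + 1)):
            for x in range(max(0, int(px - radius)), min(width, int(px + radius) + 1)):
                d2 = (x - px) ** 2 + (y - py) ** 2
                if d2 < r2:
                    img[y][x] += power * norm * (1.0 - d2 / r2)
    return img

# One unit-power photon splatted into a 16x16 grid.
image = splat_photons([(8.0, 8.0, 1.0)], 16, 16)
```

Because the kernel is normalized over its disk, the discrete sums approximately conserve each photon's power, which is the basic property any splatting-based estimator needs before the ray-space refinements the report develops.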
Superposition for Finite Domains
T. Hillenbrand and C. Weidenbach
Technical Report, 2007
Export
BibTeX
@techreport{HillenbrandWeidenbach2007,
TITLE = {Superposition for Finite Domains},
AUTHOR = {Hillenbrand, Thomas and Weidenbach, Christoph},
LANGUAGE = {eng},
NUMBER = {MPI-I-2007-RG1-002},
LOCALID = {Local-ID: C12573CC004A8E26-1CF84BA6556F8748C12572C1002F229B-HillenbrandWeidenbach2007Report},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken, Germany},
YEAR = {2007},
DATE = {2007},
TYPE = {Max-Planck-Institut für Informatik / Research Report},
}
Endnote
%0 Report
%A Hillenbrand, Thomas
%A Weidenbach, Christoph
%+ Automation of Logic, MPI for Informatics, Max Planck Society
Automation of Logic, MPI for Informatics, Max Planck Society
%T Superposition for Finite Domains :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-20DA-8
%F EDOC: 356455
%F OTHER: Local-ID: C12573CC004A8E26-1CF84BA6556F8748C12572C1002F229B-HillenbrandWeidenbach2007Report
%Y Max-Planck-Institut für Informatik
%C Saarbrücken, Germany
%D 2007
%P 25 p.
%B Max-Planck-Institut für Informatik / Research Report
Efficient Surface Reconstruction for Piecewise Smooth Objects
P. Jenke, M. Wand and W. Strasser
Technical Report, 2007
Export
BibTeX
@techreport{Jenke2007,
TITLE = {Efficient Surface Reconstruction for Piecewise Smooth Objects},
AUTHOR = {Jenke, Philipp and Wand, Michael and Strasser, Wolfgang},
LANGUAGE = {eng},
ISSN = {0946-3852},
URL = {urn:nbn:de:bsz:21-opus-32001},
NUMBER = {WSI-2007-05},
INSTITUTION = {Wilhelm-Schickard-Institut / Universit{\"a}t T{\"u}bingen},
ADDRESS = {T{\"u}bingen},
YEAR = {2007},
DATE = {2007},
TYPE = {WSI},
VOLUME = {2007-05},
}
Endnote
%0 Report
%A Jenke, Philipp
%A Wand, Michael
%A Strasser, Wolfgang
%+ External Organizations
Computer Graphics, MPI for Informatics, Max Planck Society
External Organizations
%T Efficient Surface Reconstruction for Piecewise Smooth Objects :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0023-D3F7-A
%U urn:nbn:de:bsz:21-opus-32001
%Y Wilhelm-Schickard-Institut / Universität Tübingen
%C Tübingen
%D 2007
%P 17 p.
%B WSI
%N 2007-05
%@ false
%U http://nbn-resolving.de/urn:nbn:de:bsz:21-opus-32001
NAGA: Searching and Ranking Knowledge
G. Kasneci, F. M. Suchanek, G. Ifrim, M. Ramanath and G. Weikum
Technical Report, 2007
Abstract
The Web has the potential to become the world's largest knowledge base.
In order to unleash this potential, the wealth of information available on the
web needs to be extracted and organized. There is a need for new querying
techniques that are simple yet more expressive than those provided by standard
keyword-based search engines. Search for knowledge rather than Web pages needs
to consider inherent semantic structures like entities (person, organization,
etc.) and relationships (isA, locatedIn, etc.).
In this paper, we propose NAGA, a new semantic search engine. NAGA's
knowledge base, which is organized as a graph with typed edges, consists of
millions of entities and relationships automatically extracted from Web-based
corpora. A query language capable of expressing keyword search for the casual
user as well as graph queries with regular expressions for the expert enables
the formulation of queries with additional semantic information. We introduce a
novel scoring model, based on the principles of generative language models,
which formalizes several notions like confidence, informativeness and
compactness and uses them to rank query results. We demonstrate NAGA's
superior result quality over current search engines by conducting a
comprehensive evaluation, including user assessments, for advanced queries.
Export
BibTeX
@techreport{TechReportKSIRW-2007,
TITLE = {{NAGA}: Searching and Ranking Knowledge},
AUTHOR = {Kasneci, Gjergji and Suchanek, Fabian M. and Ifrim, Georgiana and Ramanath, Maya and Weikum, Gerhard},
LANGUAGE = {eng},
ISSN = {0946-011X},
NUMBER = {MPI-I-2007-5-001},
LOCALID = {Local-ID: C12573CC004A8E26-0C33A6E805909705C12572AE003DA15B-TechReportKSIRW-2007},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken, Germany},
YEAR = {2007},
DATE = {2007},
ABSTRACT = {The Web has the potential to become the world's largest knowledge base. In order to unleash this potential, the wealth of information available on the web needs to be extracted and organized. There is a need for new querying techniques that are simple yet more expressive than those provided by standard keyword-based search engines. Search for knowledge rather than Web pages needs to consider inherent semantic structures like entities (person, organization, etc.) and relationships (isA, locatedIn, etc.). In this paper, we propose {NAGA}, a new semantic search engine. {NAGA}'s knowledge base, which is organized as a graph with typed edges, consists of millions of entities and relationships automatically extracted from Web-based corpora. A query language capable of expressing keyword search for the casual user as well as graph queries with regular expressions for the expert enables the formulation of queries with additional semantic information. We introduce a novel scoring model, based on the principles of generative language models, which formalizes several notions like confidence, informativeness and compactness and uses them to rank query results. We demonstrate {NAGA}'s superior result quality over current search engines by conducting a comprehensive evaluation, including user assessments, for advanced queries.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Kasneci, Gjergji
%A Suchanek, Fabian M.
%A Ifrim, Georgiana
%A Ramanath, Maya
%A Weikum, Gerhard
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
%T NAGA: Searching and Ranking Knowledge :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-1FFC-1
%F EDOC: 356470
%@ 0946-011X
%F OTHER: Local-ID: C12573CC004A8E26-0C33A6E805909705C12572AE003DA15B-TechReportKSIRW-2007
%Y Max-Planck-Institut für Informatik
%C Saarbrücken, Germany
%D 2007
%P 42 p.
%X The Web has the potential to become the world's largest knowledge base.
In order to unleash this potential, the wealth of information available on the
web needs to be extracted and organized. There is a need for new querying
techniques that are simple yet more expressive than those provided by standard
keyword-based search engines. Search for knowledge rather than Web pages needs
to consider inherent semantic structures like entities (person, organization,
etc.) and relationships (isA, locatedIn, etc.).
In this paper, we propose NAGA, a new semantic search engine. NAGA's
knowledge base, which is organized as a graph with typed edges, consists of
millions of entities and relationships automatically extracted from Web-based
corpora. A query language capable of expressing keyword search for the casual
user as well as graph queries with regular expressions for the expert enables
the formulation of queries with additional semantic information. We introduce a
novel scoring model, based on the principles of generative language models,
which formalizes several notions like confidence, informativeness and
compactness and uses them to rank query results. We demonstrate NAGA's
superior result quality over current search engines by conducting a
comprehensive evaluation, including user assessments, for advanced queries.
%B Research Report
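The abstract's scoring model ranks results by combining confidence, informativeness, and compactness. A hedged sketch of such a combination in log space — the weights and the two candidate facts below are hypothetical placeholders, not values from the report, which derives its combination from a generative language model:

```python
import math

def fact_score(confidence, informativeness, compactness, w_conf=0.5, w_info=0.3):
    """Toy rank score: a weighted log-linear mix of the three notions.
    The weights w_conf / w_info are illustrative assumptions."""
    w_comp = 1.0 - w_conf - w_info
    return (w_conf * math.log(confidence)
            + w_info * math.log(informativeness)
            + w_comp * math.log(compactness))

# Rank hypothetical candidate answers, each scored in (0, 1], best-first.
candidates = {
    "Max_Planck isA physicist": (0.95, 0.7, 1.0),  # slightly less confident, more informative
    "Max_Planck isA person":    (0.99, 0.2, 1.0),  # very confident, nearly uninformative
}
ranked = sorted(candidates, key=lambda k: fact_score(*candidates[k]), reverse=True)
```

Under this kind of trade-off, the more informative fact wins despite marginally lower confidence, which matches the ranking intuition the abstract describes.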
Construction of smooth maps with mean value coordinates
T. Langer and H.-P. Seidel
Technical Report, 2007
Abstract
Bernstein polynomials are a classical tool in Computer Aided Design to
create smooth maps
with a high degree of local control.
They are used for the construction of Bézier surfaces, free-form
deformations, and many other applications.
However, classical Bernstein polynomials are only defined for simplices
and parallelepipeds.
These can in general not directly capture the shape of arbitrary
objects. Instead,
a tessellation of the desired domain has to be done first.
We construct smooth maps on arbitrary sets of polytopes
such that the restriction to each of the polytopes is a Bernstein
polynomial in mean value coordinates
(or any other generalized barycentric coordinates).
In particular, we show how smooth transitions between different
domain polytopes can be ensured.
Export
BibTeX
@techreport{LangerSeidel2007,
TITLE = {Construction of smooth maps with mean value coordinates},
AUTHOR = {Langer, Torsten and Seidel, Hans-Peter},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2007-4-002},
NUMBER = {MPI-I-2007-4-002},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2007},
DATE = {2007},
ABSTRACT = {Bernstein polynomials are a classical tool in Computer Aided Design to create smooth maps with a high degree of local control. They are used for the construction of B\'ezier surfaces, free-form deformations, and many other applications. However, classical Bernstein polynomials are only defined for simplices and parallelepipeds. These can in general not directly capture the shape of arbitrary objects. Instead, a tessellation of the desired domain has to be done first. We construct smooth maps on arbitrary sets of polytopes such that the restriction to each of the polytopes is a Bernstein polynomial in mean value coordinates (or any other generalized barycentric coordinates). In particular, we show how smooth transitions between different domain polytopes can be ensured.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Langer, Torsten
%A Seidel, Hans-Peter
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T Construction of smooth maps with mean value coordinates :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-66DF-1
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2007-4-002
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2007
%P 22 p.
%X Bernstein polynomials are a classical tool in Computer Aided Design to create smooth maps with a high degree of local control. They are used for the construction of Bézier surfaces, free-form deformations, and many other applications. However, classical Bernstein polynomials are only defined for simplices and parallelepipeds. These can in general not directly capture the shape of arbitrary objects; instead, a tessellation of the desired domain has to be done first. We construct smooth maps on arbitrary sets of polytopes such that the restriction to each of the polytopes is a Bernstein polynomial in mean value coordinates (or any other generalized barycentric coordinates). In particular, we show how smooth transitions between different domain polytopes can be ensured.
%B Research Report / Max-Planck-Institut für Informatik
A volumetric approach to interactive shape editing
C. Stoll, E. de Aguiar, C. Theobalt and H.-P. Seidel
Technical Report, 2007
Abstract
We present a novel approach to real-time shape editing that produces
physically plausible deformations using an efficient and
easy-to-implement volumetric approach. Our algorithm alternates between
a linear tetrahedral Laplacian deformation step and a differential
update in which rotational transformations are approximated. By means of
this iterative process we can achieve non-linear deformation results
while having to solve only linear equation systems. The differential
update step relies on estimating the rotational component of the
deformation relative to the rest pose. This makes the method very stable
as the shape can be reverted to its rest pose even after extreme
deformations. Only a few point handles or area handles imposing
an orientation are needed to achieve high quality deformations, which
makes the approach intuitive to use. We show that our technique is well
suited for interactive shape manipulation and also provides an elegant
way to animate models with captured motion data.
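The differential update described above hinges on estimating the rotational component of the deformation relative to the rest pose. A minimal 2D least-squares rotation fit, a simplified Kabsch-style stand-in for the per-element estimation in the report (names and data are illustrative):

```python
from math import atan2

def fit_rotation_2d(rest, deformed):
    # Least-squares rotation angle aligning centered rest positions to deformed ones.
    sxx = sum(rx * dx + ry * dy for (rx, ry), (dx, dy) in zip(rest, deformed))
    sxy = sum(rx * dy - ry * dx for (rx, ry), (dx, dy) in zip(rest, deformed))
    return atan2(sxy, sxx)
```

Alternating such rotation estimates with linear Laplacian solves is what lets the method reach non-linear-looking deformations while only ever solving linear systems.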
Export
BibTeX
@techreport{Stoll2007,
TITLE = {A volumetric approach to interactive shape editing},
AUTHOR = {Stoll, Carsten and de Aguiar, Edilson and Theobalt, Christian and Seidel, Hans-Peter},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2007-4-004},
NUMBER = {MPI-I-2007-4-004},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2007},
DATE = {2007},
ABSTRACT = {We present a novel approach to real-time shape editing that produces physically plausible deformations using an efficient and easy-to-implement volumetric approach. Our algorithm alternates between a linear tetrahedral Laplacian deformation step and a differential update in which rotational transformations are approximated. By means of this iterative process we can achieve non-linear deformation results while having to solve only linear equation systems. The differential update step relies on estimating the rotational component of the deformation relative to the rest pose. This makes the method very stable as the shape can be reverted to its rest pose even after extreme deformations. Only a few point handles or area handles imposing an orientation are needed to achieve high quality deformations, which makes the approach intuitive to use. We show that our technique is well suited for interactive shape manipulation and also provides an elegant way to animate models with captured motion data.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Stoll, Carsten
%A de Aguiar, Edilson
%A Theobalt, Christian
%A Seidel, Hans-Peter
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T A volumetric approach to interactive shape editing :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-66D6-4
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2007-4-004
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2007
%P 28 p.
%X We present a novel approach to real-time shape editing that produces
physically plausible deformations using an efficient and
easy-to-implement volumetric approach. Our algorithm alternates between
a linear tetrahedral Laplacian deformation step and a differential
update in which rotational transformations are approximated. By means of
this iterative process we can achieve non-linear deformation results
while having to solve only linear equation systems. The differential
update step relies on estimating the rotational component of the
deformation relative to the rest pose. This makes the method very stable
as the shape can be reverted to its rest pose even after extreme
deformations. Only a few point handles or area handles imposing
an orientation are needed to achieve high quality deformations, which
makes the approach intuitive to use. We show that our technique is well
suited for interactive shape manipulation and also provides an elegant
way to animate models with captured motion data.
%B Research Report / Max-Planck-Institut für Informatik
Yago: a large ontology from Wikipedia and WordNet
F. Suchanek, G. Kasneci and G. Weikum
Technical Report, 2007
Abstract
This article presents YAGO, a large ontology with high coverage and precision. YAGO has been automatically derived from Wikipedia and WordNet. It comprises entities and relations, and currently contains more than 1.7 million entities and 15 million facts. These include the taxonomic Is-A hierarchy as well as semantic relations between entities. The facts for YAGO have been extracted from the category system and the infoboxes of Wikipedia and have been combined with taxonomic relations from WordNet. Type checking techniques help us keep YAGO's precision at 95% -- as proven by an extensive evaluation study. YAGO is based on a clean logical model with a decidable consistency. Furthermore, it allows representing n-ary relations in a natural way while maintaining compatibility with RDFS. A powerful query model facilitates access to YAGO's data.
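The type-checking idea the abstract mentions can be illustrated with a toy fact store: each relation carries a (domain, range) signature that candidate facts are checked against via an is-a taxonomy. All entities, classes, and the single relation below are invented examples; YAGO's actual model (n-ary facts, RDFS compatibility, a query language) is far richer.

```python
# Toy is-a taxonomy: instance or class -> direct superclass.
ISA = {"Albert_Einstein": "physicist", "physicist": "person",
       "Ulm": "city", "city": "location"}
# Relation signatures: relation -> (domain class, range class).
SIGNATURES = {"bornIn": ("person", "location")}

def is_a(x, cls):
    # Walk the taxonomy upwards from x looking for cls.
    while x in ISA:
        if x == cls:
            return True
        x = ISA[x]
    return x == cls

def add_fact(facts, subj, rel, obj):
    # Accept the triple only if it type-checks against the relation signature.
    dom, rng = SIGNATURES[rel]
    if is_a(subj, dom) and is_a(obj, rng):
        facts.add((subj, rel, obj))
        return True
    return False
```

Rejecting ill-typed candidate facts at extraction time is one way such a system can keep precision high while harvesting from noisy sources like infoboxes.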
Export
BibTeX
@techreport{SuchanekKasneciWeikum2007,
TITLE = {Yago: a large ontology from Wikipedia and {WordNet}},
AUTHOR = {Suchanek, Fabian and Kasneci, Gjergji and Weikum, Gerhard},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2007-5-003},
NUMBER = {MPI-I-2007-5-003},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2007},
DATE = {2007},
ABSTRACT = {This article presents YAGO, a large ontology with high coverage and precision. YAGO has been automatically derived from Wikipedia and WordNet. It comprises entities and relations, and currently contains more than 1.7 million entities and 15 million facts. These include the taxonomic Is-A hierarchy as well as semantic relations between entities. The facts for YAGO have been extracted from the category system and the infoboxes of Wikipedia and have been combined with taxonomic relations from WordNet. Type checking techniques help us keep YAGO's precision at 95% -- as proven by an extensive evaluation study. YAGO is based on a clean logical model with a decidable consistency. Furthermore, it allows representing n-ary relations in a natural way while maintaining compatibility with RDFS. A powerful query model facilitates access to YAGO's data.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Suchanek, Fabian
%A Kasneci, Gjergji
%A Weikum, Gerhard
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
%T Yago: a large ontology from Wikipedia and WordNet :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-66CA-F
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2007-5-003
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2007
%P 67 p.
%X This article presents YAGO, a large ontology with high coverage and precision. YAGO has been automatically derived from Wikipedia and WordNet. It comprises entities and relations, and currently contains more than 1.7 million entities and 15 million facts. These include the taxonomic Is-A hierarchy as well as semantic relations between entities. The facts for YAGO have been extracted from the category system and the infoboxes of Wikipedia and have been combined with taxonomic relations from WordNet. Type checking techniques help us keep YAGO's precision at 95% -- as proven by an extensive evaluation study. YAGO is based on a clean logical model with a decidable consistency. Furthermore, it allows representing n-ary relations in a natural way while maintaining compatibility with RDFS. A powerful query model facilitates access to YAGO's data.
%B Research Report / Max-Planck-Institut für Informatik
2006
Gesture modeling and animation by imitation
I. Albrecht, M. Kipp, M. P. Neff and H.-P. Seidel
Technical Report, 2006
Abstract
Animated characters that move and gesticulate appropriately with spoken
text are useful in a wide range of applications. Unfortunately, they are
very difficult to generate, even more so when a unique, individual
movement style is required. We present a system that is capable of
producing full-body gesture animation for given input text in the style of
a particular performer. Our process starts with video of a performer whose
gesturing style we wish to animate. A tool-assisted annotation process is
first performed on the video, from which a statistical model of the
person's particular gesturing style is built. Using this model and tagged
input text, our generation algorithm creates a gesture script appropriate
for the given text. As opposed to isolated singleton gestures, our gesture
script specifies a stream of continuous gestures coordinated with speech.
This script is passed to an animation system, which enhances the gesture
description with more detail and prepares a refined description of the
motion. An animation subengine can then generate either kinematic or
physically simulated motion based on this description. The system is
capable of creating animation that replicates a particular performance in
the video corpus, generating new animation for the spoken text that is
consistent with the given performer's style and creating performances of a
given text sample in the style of different performers.
Export
BibTeX
@techreport{AlbrechtKippNeffSeidel2006,
TITLE = {Gesture modeling and animation by imitation},
AUTHOR = {Albrecht, Irene and Kipp, Michael and Neff, Michael Paul and Seidel, Hans-Peter},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-4-008},
NUMBER = {MPI-I-2006-4-008},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2006},
DATE = {2006},
ABSTRACT = {Animated characters that move and gesticulate appropriately with spoken text are useful in a wide range of applications. Unfortunately, they are very difficult to generate, even more so when a unique, individual movement style is required. We present a system that is capable of producing full-body gesture animation for given input text in the style of a particular performer. Our process starts with video of a performer whose gesturing style we wish to animate. A tool-assisted annotation process is first performed on the video, from which a statistical model of the person's particular gesturing style is built. Using this model and tagged input text, our generation algorithm creates a gesture script appropriate for the given text. As opposed to isolated singleton gestures, our gesture script specifies a stream of continuous gestures coordinated with speech. This script is passed to an animation system, which enhances the gesture description with more detail and prepares a refined description of the motion. An animation subengine can then generate either kinematic or physically simulated motion based on this description. The system is capable of creating animation that replicates a particular performance in the video corpus, generating new animation for the spoken text that is consistent with the given performer's style and creating performances of a given text sample in the style of different performers.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Albrecht, Irene
%A Kipp, Michael
%A Neff, Michael Paul
%A Seidel, Hans-Peter
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Multimodal Computing and Interaction
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T Gesture modeling and animation by imitation :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6979-2
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-4-008
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2006
%P 62 p.
%X Animated characters that move and gesticulate appropriately with spoken
text are useful in a wide range of applications. Unfortunately, they are
very difficult to generate, even more so when a unique, individual
movement style is required. We present a system that is capable of
producing full-body gesture animation for given input text in the style of
a particular performer. Our process starts with video of a performer whose
gesturing style we wish to animate. A tool-assisted annotation process is
first performed on the video, from which a statistical model of the
person's particular gesturing style is built. Using this model and tagged
input text, our generation algorithm creates a gesture script appropriate
for the given text. As opposed to isolated singleton gestures, our gesture
script specifies a stream of continuous gestures coordinated with speech.
This script is passed to an animation system, which enhances the gesture
description with more detail and prepares a refined description of the
motion. An animation subengine can then generate either kinematic or
physically simulated motion based on this description. The system is
capable of creating animation that replicates a particular performance in
the video corpus, generating new animation for the spoken text that is
consistent with the given performer's style and creating performances of a
given text sample in the style of different performers.
%B Research Report / Max-Planck-Institut für Informatik
A neighborhood-based approach for clustering of linked document collections
R. Angelova and S. Siersdorfer
Technical Report, 2006
Abstract
This technical report addresses the problem of automatically structuring
linked document collections by using clustering. In contrast to
traditional clustering, we study the clustering problem in the light of
available link structure information for the data set
(e.g., hyperlinks among web documents or co-authorship among
bibliographic data entries).
Our approach is based on iterative relaxation of cluster assignments,
and can be built on top of any clustering algorithm (e.g., k-means or
DBSCAN). These techniques result in higher cluster purity, better
overall accuracy, and make self-organization more robust. Our
comprehensive experiments on three different real-world corpora
demonstrate the benefits of our approach.
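A minimal sketch of such an iterative relaxation, under the assumed form that a weight `alpha` blends each document's content-based label with a vote over its linked neighbours (the actual algorithm and parameters in the report differ):

```python
from collections import Counter

def relax_assignments(assign, neighbors, alpha=0.7, iters=10):
    # assign: node -> initial content-based cluster id.
    # neighbors: node -> list of linked nodes (hyperlinks, co-authorship, ...).
    content = dict(assign)   # the fixed content-based prior
    labels = dict(assign)    # current assignment, relaxed iteratively
    for _ in range(iters):
        new = {}
        for node in labels:
            votes = Counter({content[node]: alpha})
            nbrs = neighbors.get(node, [])
            for nb in nbrs:
                votes[labels[nb]] += (1 - alpha) / max(len(nbrs), 1)
            new[node] = votes.most_common(1)[0][0]
        if new == labels:
            break
        labels = new
    return labels
```

Because the base assignment can come from any clustering algorithm (k-means, DBSCAN, ...), the relaxation acts as a post-processing layer, which matches the abstract's claim that the approach can be built on top of any clusterer.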
Export
BibTeX
@techreport{AngelovaSiersdorfer2006,
TITLE = {A neighborhood-based approach for clustering of linked document collections},
AUTHOR = {Angelova, Ralitsa and Siersdorfer, Stefan},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-5-005},
NUMBER = {MPI-I-2006-5-005},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2006},
DATE = {2006},
ABSTRACT = {This technical report addresses the problem of automatically structuring linked document collections by using clustering. In contrast to traditional clustering, we study the clustering problem in the light of available link structure information for the data set (e.g., hyperlinks among web documents or co-authorship among bibliographic data entries). Our approach is based on iterative relaxation of cluster assignments, and can be built on top of any clustering algorithm (e.g., k-means or DBSCAN). These techniques result in higher cluster purity, better overall accuracy, and make self-organization more robust. Our comprehensive experiments on three different real-world corpora demonstrate the benefits of our approach.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Angelova, Ralitsa
%A Siersdorfer, Stefan
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
%T A neighborhood-based approach for clustering of linked document collections :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-670D-4
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-5-005
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2006
%P 32 p.
%X This technical report addresses the problem of automatically structuring
linked document collections by using clustering. In contrast to
traditional clustering, we study the clustering problem in the light of
available link structure information for the data set
(e.g., hyperlinks among web documents or co-authorship among
bibliographic data entries).
Our approach is based on iterative relaxation of cluster assignments,
and can be built on top of any clustering algorithm (e.g., k-means or
DBSCAN). These techniques result in higher cluster purity, better
overall accuracy, and make self-organization more robust. Our
comprehensive experiments on three different real-world corpora
demonstrate the benefits of our approach.
%B Research Report / Max-Planck-Institut für Informatik
Output-sensitive autocompletion search
H. Bast, I. Weber and C. W. Mortensen
Technical Report, 2006
Abstract
We consider the following autocompletion search scenario: imagine a user
of a search engine typing a query; then with every keystroke display those
completions of the last query word that would lead to the best hits, and
also display the best such hits. The following problem is at the core of
this feature: for a fixed document collection, given a set $D$ of
documents, and an alphabetical range $W$ of words, compute the set of all
word-in-document pairs $(w,d)$ from the collection such that $w \in W$
and $d\in D$.
We present a new data structure with the help of which such
autocompletion queries can be processed, on the average, in time linear
in the input plus output size, independent of the size of the underlying
document collection. At the same time, our data structure uses no more
space than an inverted index. Actual query processing times on a large test collection
correlate almost perfectly with our theoretical bound.
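The core problem, reporting all word-in-document pairs (w, d) with w in an alphabetical range and d in a candidate set D, has an obvious inverted-index baseline. The toy index below is illustrative; the report's contribution is a data structure that answers such queries in output-sensitive time without using more space than the inverted index itself:

```python
from bisect import bisect_left, bisect_right

# Toy inverted index: word -> sorted doc ids (illustrative data, not from the report).
INDEX = {
    "auto": [1, 3],
    "autocomplete": [2, 3],
    "automaton": [4],
    "search": [1, 2, 4],
}
VOCAB = sorted(INDEX)

def autocompletion_query(prefix, D):
    # All word-in-document pairs (w, d) with w completing `prefix` and d in D.
    lo = bisect_left(VOCAB, prefix)
    hi = bisect_right(VOCAB, prefix + "\uffff")
    D = set(D)
    return [(w, d) for w in VOCAB[lo:hi] for d in INDEX[w] if d in D]
```

With every keystroke, D is the current hit set and `prefix` the partially typed last word; the returned words are the completions to display, and the returned documents the new hits.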
Export
BibTeX
@techreport{BastWeberMortensen2006,
TITLE = {Output-sensitive autocompletion search},
AUTHOR = {Bast, Holger and Weber, Ingmar and Mortensen, Christian Worm},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-1-007},
NUMBER = {MPI-I-2006-1-007},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2006},
DATE = {2006},
ABSTRACT = {We consider the following autocompletion search scenario: imagine a user of a search engine typing a query; then with every keystroke display those completions of the last query word that would lead to the best hits, and also display the best such hits. The following problem is at the core of this feature: for a fixed document collection, given a set $D$ of documents, and an alphabetical range $W$ of words, compute the set of all word-in-document pairs $(w,d)$ from the collection such that $w \in W$ and $d\in D$. We present a new data structure with the help of which such autocompletion queries can be processed, on the average, in time linear in the input plus output size, independent of the size of the underlying document collection. At the same time, our data structure uses no more space than an inverted index. Actual query processing times on a large test collection correlate almost perfectly with our theoretical bound.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Bast, Holger
%A Weber, Ingmar
%A Mortensen, Christian Worm
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Output-sensitive autocompletion search :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-681A-D
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-1-007
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2006
%P 17 p.
%X We consider the following autocompletion search scenario: imagine a user
of a search engine typing a query; then with every keystroke display those
completions of the last query word that would lead to the best hits, and
also display the best such hits. The following problem is at the core of
this feature: for a fixed document collection, given a set $D$ of
documents, and an alphabetical range $W$ of words, compute the set of all
word-in-document pairs $(w,d)$ from the collection such that $w \in W$
and $d\in D$.
We present a new data structure with the help of which such
autocompletion queries can be processed, on the average, in time linear
in the input plus output size, independent of the size of the underlying
document collection. At the same time, our data structure uses no more
space than an inverted index. Actual query processing times on a large test collection
correlate almost perfectly with our theoretical bound.
%B Research Report / Max-Planck-Institut für Informatik
IO-Top-k: index-access optimized top-k query processing
H. Bast, D. Majumdar, R. Schenkel, M. Theobald and G. Weikum
Technical Report, 2006
Abstract
Top-k query processing is an important building block for ranked retrieval,
with applications ranging from text and data integration to distributed
aggregation of network logs and sensor data.
Top-k queries operate on index lists for a query's elementary conditions
and aggregate scores for result candidates. One of the best implementation
methods in this setting is the family of threshold algorithms, which aim
to terminate the index scans as early as possible based on lower and upper
bounds for the final scores of result candidates. This procedure
performs sequential disk accesses for sorted index scans, but also has the option
of performing random accesses to resolve score uncertainty. This entails
scheduling for the two kinds of accesses: 1) the prioritization of different
index lists in the sequential accesses, and 2) the decision on when to perform
random accesses and for which candidates.
The prior literature has studied some of these scheduling issues, but only for each of the two access types in isolation.
The current paper takes an integrated view of the scheduling issues and develops
novel strategies that outperform prior proposals by a large margin.
Our main contributions are new, principled, scheduling methods based on a Knapsack-related
optimization for sequential accesses and a cost model for random accesses.
The methods can be further boosted by harnessing probabilistic estimators for scores,
selectivities, and index list correlations.
We also discuss efficient implementation techniques for the
underlying data structures.
In performance experiments with three different datasets (TREC Terabyte, HTTP server logs, and IMDB),
our methods achieved significant performance gains compared to the best previously known methods:
a factor of up to 3 in terms of execution costs, and a factor of 5
in terms of absolute run-times of our implementation.
Our best techniques are close to a lower bound for the execution cost of the considered class
of threshold algorithms.
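The family of threshold algorithms the abstract builds on can be sketched in its classic, unscheduled form: round-robin sorted accesses over the index lists, eager random accesses to resolve each new candidate's full score, and a stopping threshold from the current scan positions. The report's contribution is precisely to schedule these two access types instead of interleaving them naively as here (function and data are illustrative):

```python
import heapq

def threshold_topk(lists, k):
    # lists: one dict item -> score per elementary query condition;
    # each is also materialized in descending score order for sorted access.
    sorted_lists = [sorted(l.items(), key=lambda kv: -kv[1]) for l in lists]
    seen = {}
    depth = 0
    max_depth = max(len(sl) for sl in sorted_lists)
    while depth < max_depth:
        for sl in sorted_lists:
            if depth < len(sl):
                item = sl[depth][0]
                if item not in seen:
                    # Eager random accesses: resolve the item's full score immediately.
                    seen[item] = sum(l.get(item, 0.0) for l in lists)
        depth += 1
        # Threshold: best possible aggregate score of any still-unseen item.
        threshold = sum(sl[min(depth, len(sl)) - 1][1] for sl in sorted_lists)
        top = heapq.nlargest(k, seen.values())
        if len(top) == k and top[-1] >= threshold:
            break
    return heapq.nlargest(k, seen.items(), key=lambda kv: kv[1])
```

Deferring or batching the random accesses, and prioritizing which sorted list to advance, is where the report's Knapsack-based and cost-model-based scheduling come in.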
Export
BibTeX
@techreport{BastMajumdarSchenkelTheobaldWeikum2006,
TITLE = {{IO}-Top-k: index-access optimized top-k query processing},
AUTHOR = {Bast, Holger and Majumdar, Debapriyo and Schenkel, Ralf and Theobald, Martin and Weikum, Gerhard},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-5-002},
NUMBER = {MPI-I-2006-5-002},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2006},
DATE = {2006},
ABSTRACT = {Top-k query processing is an important building block for ranked retrieval, with applications ranging from text and data integration to distributed aggregation of network logs and sensor data. Top-k queries operate on index lists for a query's elementary conditions and aggregate scores for result candidates. One of the best implementation methods in this setting is the family of threshold algorithms, which aim to terminate the index scans as early as possible based on lower and upper bounds for the final scores of result candidates. This procedure performs sequential disk accesses for sorted index scans, but also has the option of performing random accesses to resolve score uncertainty. This entails scheduling for the two kinds of accesses: 1) the prioritization of different index lists in the sequential accesses, and 2) the decision on when to perform random accesses and for which candidates. The prior literature has studied some of these scheduling issues, but only for each of the two access types in isolation. The current paper takes an integrated view of the scheduling issues and develops novel strategies that outperform prior proposals by a large margin. Our main contributions are new, principled, scheduling methods based on a Knapsack-related optimization for sequential accesses and a cost model for random accesses. The methods can be further boosted by harnessing probabilistic estimators for scores, selectivities, and index list correlations. We also discuss efficient implementation techniques for the underlying data structures. In performance experiments with three different datasets (TREC Terabyte, HTTP server logs, and IMDB), our methods achieved significant performance gains compared to the best previously known methods: a factor of up to 3 in terms of execution costs, and a factor of 5 in terms of absolute run-times of our implementation. 
Our best techniques are close to a lower bound for the execution cost of the considered class of threshold algorithms.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Bast, Holger
%A Majumdar, Debapriyo
%A Schenkel, Ralf
%A Theobald, Martin
%A Weikum, Gerhard
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
%T IO-Top-k: index-access optimized top-k query processing :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6716-E
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-5-002
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2006
%P 49 p.
%X Top-k query processing is an important building block for ranked retrieval,
with applications ranging from text and data integration to distributed
aggregation of network logs and sensor data.
Top-k queries operate on index lists for a query's elementary conditions
and aggregate scores for result candidates. One of the best implementation
methods in this setting is the family of threshold algorithms, which aim
to terminate the index scans as early as possible based on lower and upper
bounds for the final scores of result candidates. This procedure
performs sequential disk accesses for sorted index scans, but also has the option
of performing random accesses to resolve score uncertainty. This entails
scheduling for the two kinds of accesses: 1) the prioritization of different
index lists in the sequential accesses, and 2) the decision on when to perform
random accesses and for which candidates.
The prior literature has studied some of these scheduling issues, but only for each of the two access types in isolation.
The current paper takes an integrated view of the scheduling issues and develops
novel strategies that outperform prior proposals by a large margin.
Our main contributions are new, principled, scheduling methods based on a Knapsack-related
optimization for sequential accesses and a cost model for random accesses.
The methods can be further boosted by harnessing probabilistic estimators for scores,
selectivities, and index list correlations.
We also discuss efficient implementation techniques for the
underlying data structures.
In performance experiments with three different datasets (TREC Terabyte, HTTP server logs, and IMDB),
our methods achieved significant performance gains compared to the best previously known methods:
a factor of up to 3 in terms of execution costs, and a factor of 5
in terms of absolute run-times of our implementation.
Our best techniques are close to a lower bound for the execution cost of the considered class
of threshold algorithms.
%B Research Report / Max-Planck-Institut für Informatik
Mean value coordinates for arbitrary spherical polygons and polyhedra in $\mathbb{R}^{3}$
A. Belyaev, T. Langer and H.-P. Seidel
Technical Report, 2006
Abstract
Since their introduction, mean value coordinates enjoy ever increasing
popularity in computer graphics and computational mathematics
because they exhibit a variety of good properties. Most importantly,
they are defined in the whole plane which allows interpolation and
extrapolation without restrictions. Recently, mean value coordinates
were generalized to spheres and to $\mathbb{R}^{3}$. We show that these
spherical and 3D mean value coordinates are well-defined on the whole
sphere and the whole space $\mathbb{R}^{3}$, respectively.
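For the planar case that the spherical and 3D constructions generalize, Floater's mean value coordinates can be sketched as follows. This is the textbook 2D formula, not code from the report; the polygon is assumed counter-clockwise with p strictly inside:

```python
from math import atan2, tan, hypot

def mean_value_coords(p, poly):
    # Floater's planar mean value coordinates of p w.r.t. polygon vertices poly.
    def signed_angle(a, b):
        # Signed angle at p between the directions towards a and b.
        ax, ay = a[0] - p[0], a[1] - p[1]
        bx, by = b[0] - p[0], b[1] - p[1]
        return atan2(ax * by - ay * bx, ax * bx + ay * by)
    n = len(poly)
    w = []
    for i in range(n):
        prev, cur, nxt = poly[i - 1], poly[i], poly[(i + 1) % n]
        a_prev = signed_angle(prev, cur)   # angle subtended by edge (v_{i-1}, v_i)
        a_next = signed_angle(cur, nxt)    # angle subtended by edge (v_i, v_{i+1})
        r = hypot(cur[0] - p[0], cur[1] - p[1])
        w.append((tan(a_prev / 2) + tan(a_next / 2)) / r)
    total = sum(w)
    return [wi / total for wi in w]
```

The coordinates sum to one and reproduce p as the weighted combination of the vertices; the report's result is that the analogous spherical and 3D weights remain well-defined everywhere on the sphere and in R^3.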
Export
BibTeX
@techreport{BelyaevLangerSeidel2006,
TITLE = {Mean value coordinates for arbitrary spherical polygons and polyhedra in $\mathbb{R}^{3}$},
AUTHOR = {Belyaev, Alexander and Langer, Torsten and Seidel, Hans-Peter},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-4-010},
NUMBER = {MPI-I-2006-4-010},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2006},
DATE = {2006},
ABSTRACT = {Since their introduction, mean value coordinates enjoy ever increasing popularity in computer graphics and computational mathematics because they exhibit a variety of good properties. Most importantly, they are defined in the whole plane which allows interpolation and extrapolation without restrictions. Recently, mean value coordinates were generalized to spheres and to $\mathbb{R}^{3}$. We show that these spherical and 3D mean value coordinates are well-defined on the whole sphere and the whole space $\mathbb{R}^{3}$, respectively.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Belyaev, Alexander
%A Langer, Torsten
%A Seidel, Hans-Peter
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T Mean value coordinates for arbitrary spherical polygons and polyhedra in $\mathbb{R}^{3}$ :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-671C-2
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-4-010
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2006
%P 19 p.
%X Since their introduction, mean value coordinates enjoy ever increasing
popularity in computer graphics and computational mathematics
because they exhibit a variety of good properties. Most importantly,
they are defined in the whole plane which allows interpolation and
extrapolation without restrictions. Recently, mean value coordinates
were generalized to spheres and to $\mathbb{R}^{3}$. We show that these
spherical and 3D mean value coordinates are well-defined on the whole
sphere and the whole space $\mathbb{R}^{3}$, respectively.
%B Research Report / Max-Planck-Institut für Informatik
Skeleton-driven Laplacian Mesh Deformations
A. Belyaev, S. Yoshizawa and H.-P. Seidel
Technical Report, 2006
A. Belyaev, S. Yoshizawa and H.-P. Seidel
Technical Report, 2006
Abstract
In this report, a new free-form shape deformation approach is proposed.
We combine a skeleton-driven mesh deformation technique with discrete
differential coordinates in order to create natural-looking global shape
deformations. Given a triangle mesh, we first extract a skeletal mesh, a
two-sided
Voronoi-based approximation of the medial axis. Next the skeletal mesh
is modified by free-form deformations. Then a desired global shape
deformation is obtained by reconstructing the shape corresponding to the
deformed skeletal mesh. The reconstruction is based on using discrete
differential coordinates.
Our method preserves fine geometric details and original shape
thickness because of using discrete differential coordinates and
skeleton-driven deformations. We also develop a new mesh evolution
technique which allows us to eliminate possible global and local
self-intersections of the deformed mesh while preserving fine geometric
details. Finally, we present a multiresolution version of our approach
in order to simplify and accelerate the deformation process.
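The reconstruction step (recovering positions from differential coordinates under anchor constraints) can be illustrated one dimension lower. The sketch below works on a polyline rather than a triangle mesh, assuming uniform Laplacian weights and two fixed endpoint anchors; it is an analogue of the idea, not the report's 3D implementation:

```python
def laplacian_reconstruct(points, new_first, new_last):
    """Reconstruct a polyline from its discrete differential coordinates
    after the two endpoint anchors have been moved.

    Interior equation: v_i - (v_{i-1} + v_{i+1}) / 2 = delta_i,
    with delta_i computed from the original polyline.
    """
    n = len(points)
    dim = len(points[0])
    # differential coordinates of the interior vertices
    delta = [[points[i][k] - (points[i - 1][k] + points[i + 1][k]) / 2
              for k in range(dim)] for i in range(1, n - 1)]
    out = [list(new_first)] + [[0.0] * dim for _ in range(n - 2)] + [list(new_last)]
    for k in range(dim):
        # assemble the (n-2)x(n-2) tridiagonal system per coordinate
        a = [[0.0] * (n - 2) for _ in range(n - 2)]
        b = [delta[i][k] for i in range(n - 2)]
        for i in range(n - 2):
            a[i][i] = 1.0
            if i > 0:
                a[i][i - 1] = -0.5
            else:
                b[i] += 0.5 * new_first[k]  # anchor moved to right-hand side
            if i < n - 3:
                a[i][i + 1] = -0.5
            else:
                b[i] += 0.5 * new_last[k]
        # forward elimination + back substitution (Thomas-style solve)
        for i in range(1, n - 2):
            m = a[i][i - 1] / a[i - 1][i - 1]
            a[i][i] -= m * a[i - 1][i]
            b[i] -= m * b[i - 1]
        x = [0.0] * (n - 2)
        for i in range(n - 3, -1, -1):
            x[i] = (b[i] - (a[i][i + 1] * x[i + 1] if i < n - 3 else 0.0)) / a[i][i]
        for i in range(n - 2):
            out[i + 1][k] = x[i]
    return out
```

Because the differential coordinates are translation-invariant, translating both anchors translates the whole reconstruction; deforming the anchors (here standing in for the skeletal mesh) deforms the shape while the solve preserves the encoded local detail.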
Export
BibTeX
@techreport{BelyaevSeidelShin2006,
TITLE = {Skeleton-driven {Laplacian} Mesh Deformations},
AUTHOR = {Belyaev, Alexander and Yoshizawa, Shin and Seidel, Hans-Peter},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-4-005},
NUMBER = {MPI-I-2006-4-005},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2006},
DATE = {2006},
ABSTRACT = {In this report, a new free-form shape deformation approach is proposed. We combine a skeleton-driven mesh deformation technique with discrete differential coordinates in order to create natural-looking global shape deformations. Given a triangle mesh, we first extract a skeletal mesh, a two-sided Voronoi-based approximation of the medial axis. Next the skeletal mesh is modified by free-form deformations. Then a desired global shape deformation is obtained by reconstructing the shape corresponding to the deformed skeletal mesh. The reconstruction is based on using discrete differential coordinates. Our method preserves fine geometric details and original shape thickness because of using discrete differential coordinates and skeleton-driven deformations. We also develop a new mesh evolution technique which allows us to eliminate possible global and local self-intersections of the deformed mesh while preserving fine geometric details. Finally, we present a multiresolution version of our approach in order to simplify and accelerate the deformation process.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Belyaev, Alexander
%A Yoshizawa, Shin
%A Seidel, Hans-Peter
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T Skeleton-driven Laplacian Mesh Deformations :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-67FF-6
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-4-005
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2006
%P 37 p.
%X In this report, a new free-form shape deformation approach is proposed.
We combine a skeleton-driven mesh deformation technique with discrete
differential coordinates in order to create natural-looking global shape
deformations. Given a triangle mesh, we first extract a skeletal mesh, a
two-sided
Voronoi-based approximation of the medial axis. Next the skeletal mesh
is modified by free-form deformations. Then a desired global shape
deformation is obtained by reconstructing the shape corresponding to the
deformed skeletal mesh. The reconstruction is based on using discrete
differential coordinates.
Our method preserves fine geometric details and original shape
thickness because of using discrete differential coordinates and
skeleton-driven deformations. We also develop a new mesh evolution
technique which allows us to eliminate possible global and local
self-intersections of the deformed mesh while preserving fine geometric
details. Finally, we present a multiresolution version of our approach
in order to simplify and accelerate the deformation process.
%B Research Report / Max-Planck-Institut für Informatik
Overlap-aware global df estimation in distributed information retrieval systems
M. Bender, S. Michel, G. Weikum and P. Triantafilou
Technical Report, 2006
M. Bender, S. Michel, G. Weikum and P. Triantafilou
Technical Report, 2006
Abstract
Peer-to-Peer (P2P) search engines and other forms of distributed
information retrieval (IR) are gaining momentum. Unlike in centralized
IR, it is difficult and expensive to compute statistical measures about
the entire document collection as it is widely distributed across many
computers in a highly dynamic network. On the other hand, such
network-wide statistics, most notably, global document frequencies of
the individual terms, would be highly beneficial for ranking global
search results that are compiled from different peers.
This paper develops an efficient and scalable method for estimating
global document frequencies in a large-scale, highly dynamic P2P network
with autonomous peers. The main difficulty that is addressed in this
paper is that the local collections of different peers
may arbitrarily overlap, as many peers may choose to gather popular
documents that fall into their specific interest profile.
Our method is based on hash sketches as an underlying technique for
compact data synopses, and exploits specific properties of hash sketches
for duplicate elimination in the counting process.
We report on experiments with real Web data that demonstrate the
accuracy of our estimation method and also the benefit for better search
result ranking.
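The counting primitive this method builds on can be sketched briefly. Below is a toy, single-bitmap Flajolet-Martin-style hash sketch; real estimators average many such bitmaps to reduce variance, and the class and helper names here are illustrative:

```python
import hashlib

def _hash32(item):
    """Stable 32-bit hash of a string item (reproducible across runs)."""
    return int.from_bytes(hashlib.sha1(item.encode()).digest()[:4], "big")

def _rho(x):
    """Index of the least-significant set bit of x (x > 0)."""
    return (x & -x).bit_length() - 1

class HashSketch:
    """Duplicate-insensitive distinct-count synopsis (Flajolet-Martin style)."""
    PHI = 0.77351  # FM bias-correction constant

    def __init__(self, bitmap=0):
        self.bitmap = bitmap

    def add(self, doc_id):
        # OR-ing a bit makes re-insertions of the same id a no-op;
        # (| 1 << 31) forces a nonzero argument so _rho is defined.
        self.bitmap |= 1 << _rho(_hash32(doc_id) | (1 << 31))

    def union(self, other):
        # Sketch of the union of the underlying sets: OR the bitmaps.
        return HashSketch(self.bitmap | other.bitmap)

    def estimate(self):
        r = 0  # position of the least-significant zero bit
        while (self.bitmap >> r) & 1:
            r += 1
        return 2 ** r / self.PHI
```

The two properties exploited for overlap-aware df estimation are visible directly: adding the same document twice leaves the sketch unchanged, and the union of two peers' sketches is a sketch of the union of their collections, so overlapping documents are counted once.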
Export
BibTeX
@techreport{BenderMichelWeikumTriantafilou2006,
TITLE = {Overlap-aware global df estimation in distributed information retrieval systems},
AUTHOR = {Bender, Matthias and Michel, Sebastian and Weikum, Gerhard and Triantafilou, Peter},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-5-001},
NUMBER = {MPI-I-2006-5-001},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2006},
DATE = {2006},
ABSTRACT = {Peer-to-Peer (P2P) search engines and other forms of distributed information retrieval (IR) are gaining momentum. Unlike in centralized IR, it is difficult and expensive to compute statistical measures about the entire document collection as it is widely distributed across many computers in a highly dynamic network. On the other hand, such network-wide statistics, most notably, global document frequencies of the individual terms, would be highly beneficial for ranking global search results that are compiled from different peers. This paper develops an efficient and scalable method for estimating global document frequencies in a large-scale, highly dynamic P2P network with autonomous peers. The main difficulty that is addressed in this paper is that the local collections of different peers may arbitrarily overlap, as many peers may choose to gather popular documents that fall into their specific interest profile. Our method is based on hash sketches as an underlying technique for compact data synopses, and exploits specific properties of hash sketches for duplicate elimination in the counting process. We report on experiments with real Web data that demonstrate the accuracy of our estimation method and also the benefit for better search result ranking.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Bender, Matthias
%A Michel, Sebastian
%A Weikum, Gerhard
%A Triantafilou, Peter
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
External Organizations
%T Overlap-aware global df estimation in distributed information retrieval systems :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6719-8
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-5-001
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2006
%P 25 p.
%X Peer-to-Peer (P2P) search engines and other forms of distributed
information retrieval (IR) are gaining momentum. Unlike in centralized
IR, it is difficult and expensive to compute statistical measures about
the entire document collection as it is widely distributed across many
computers in a highly dynamic network. On the other hand, such
network-wide statistics, most notably, global document frequencies of
the individual terms, would be highly beneficial for ranking global
search results that are compiled from different peers.
This paper develops an efficient and scalable method for estimating
global document frequencies in a large-scale, highly dynamic P2P network
with autonomous peers. The main difficulty that is addressed in this
paper is that the local collections of different peers
may arbitrarily overlap, as many peers may choose to gather popular
documents that fall into their specific interest profile.
Our method is based on hash sketches as an underlying technique for
compact data synopses, and exploits specific properties of hash sketches
for duplicate elimination in the counting process.
We report on experiments with real Web data that demonstrate the
accuracy of our estimation method and also the benefit for better search
result ranking.
%B Research Report / Max-Planck-Institut für Informatik
Definition of File Format for Benchmark Instances for Arrangements of Quadrics
E. Berberich, F. Ebert and L. Kettner
Technical Report, 2006
E. Berberich, F. Ebert and L. Kettner
Technical Report, 2006
Export
BibTeX
@techreport{acs:bek-dffbiaq-06,
TITLE = {Definition of File Format for Benchmark Instances for Arrangements of Quadrics},
AUTHOR = {Berberich, Eric and Ebert, Franziska and Kettner, Lutz},
LANGUAGE = {eng},
NUMBER = {ACS-TR-123109-01},
INSTITUTION = {University of Groningen},
ADDRESS = {Groningen, The Netherlands},
YEAR = {2006},
DATE = {2006},
}
Endnote
%0 Report
%A Berberich, Eric
%A Ebert, Franziska
%A Kettner, Lutz
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Definition of File Format for Benchmark Instances for Arrangements of Quadrics :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0019-E509-E
%Y University of Groningen
%C Groningen, The Netherlands
%D 2006
Web-site with Benchmark Instances for Planar Curve Arrangements
E. Berberich, F. Ebert, E. Fogel and L. Kettner
Technical Report, 2006
E. Berberich, F. Ebert, E. Fogel and L. Kettner
Technical Report, 2006
Export
BibTeX
@techreport{acs:bek-wbipca-06,
TITLE = {Web-site with Benchmark Instances for Planar Curve Arrangements},
AUTHOR = {Berberich, Eric and Ebert, Franziska and Fogel, Efi and Kettner, Lutz},
LANGUAGE = {eng},
NUMBER = {ACS-TR-123108-01},
INSTITUTION = {University of Groningen},
ADDRESS = {Groningen, The Netherlands},
YEAR = {2006},
DATE = {2006},
}
Endnote
%0 Report
%A Berberich, Eric
%A Ebert, Franziska
%A Fogel, Efi
%A Kettner, Lutz
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
External Organizations
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Web-site with Benchmark Instances for Planar Curve Arrangements :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0019-E515-1
%Y University of Groningen
%C Groningen, The Netherlands
%D 2006
A framework for natural animation of digitized models
E. de Aguiar, R. Zayer, C. Theobalt, M. A. Magnor and H.-P. Seidel
Technical Report, 2006
E. de Aguiar, R. Zayer, C. Theobalt, M. A. Magnor and H.-P. Seidel
Technical Report, 2006
Abstract
We present a novel versatile, fast and simple framework to generate
high-quality animations of scanned human characters from input motion data.
Our method is purely mesh-based and, in contrast to skeleton-based
animation, requires only a minimum of manual interaction. The only manual
step that is required to create moving virtual people is the placement of
a sparse set of correspondences between triangles of an input mesh and
triangles of the mesh to be animated. The proposed algorithm implicitly
generates realistic body deformations, and can easily transfer motions
between humans of different shape and proportions. It handles different
types of input data, e.g. other animated meshes and motion capture files,
in just the same way. Finally, and most importantly, it creates animations
at interactive frame rates. We feature two working prototype systems that
demonstrate that our method can generate lifelike character animations
from both marker-based and marker-less optical motion capture data.
Export
BibTeX
@techreport{deAguiarZayerTheobaltMagnorSeidel2006,
TITLE = {A framework for natural animation of digitized models},
AUTHOR = {de Aguiar, Edilson and Zayer, Rhaleb and Theobalt, Christian and Magnor, Marcus A. and Seidel, Hans-Peter},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-4-003},
NUMBER = {MPI-I-2006-4-003},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2006},
DATE = {2006},
ABSTRACT = {We present a novel versatile, fast and simple framework to generate high-quality animations of scanned human characters from input motion data. Our method is purely mesh-based and, in contrast to skeleton-based animation, requires only a minimum of manual interaction. The only manual step that is required to create moving virtual people is the placement of a sparse set of correspondences between triangles of an input mesh and triangles of the mesh to be animated. The proposed algorithm implicitly generates realistic body deformations, and can easily transfer motions between humans of different shape and proportions. It handles different types of input data, e.g. other animated meshes and motion capture files, in just the same way. Finally, and most importantly, it creates animations at interactive frame rates. We feature two working prototype systems that demonstrate that our method can generate lifelike character animations from both marker-based and marker-less optical motion capture data.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A de Aguiar, Edilson
%A Zayer, Rhaleb
%A Theobalt, Christian
%A Magnor, Marcus A.
%A Seidel, Hans-Peter
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Graphics - Optics - Vision, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T A framework for natural animation of digitized models :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-680B-F
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-4-003
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2006
%P 27 p.
%X We present a novel versatile, fast and simple framework to generate
highquality animations of scanned human characters from input motion data.
Our method is purely mesh-based and, in contrast to skeleton-based
animation, requires only a minimum of manual interaction. The only manual
step that is required to create moving virtual people is the placement of
a sparse set of correspondences between triangles of an input mesh and
triangles of the mesh to be animated. The proposed algorithm implicitly
generates realistic body deformations, and can easily transfer motions
between human erent shape and proportions. erent types of input data, e.g.
other animated meshes and motion capture les, in just the same way.
Finally, and most importantly, it creates animations at interactive frame
rates. We feature two working prototype systems that demonstrate that our
method can generate lifelike character animations from both marker-based
and marker-less optical motion capture data.
%B Research Report / Max-Planck-Institut für Informatik
Construction of Low-discrepancy Point Sets of Small Size by Bracketing Covers and Dependent Randomized Rounding
B. Doerr and M. Gnewuch
Technical Report, 2006
B. Doerr and M. Gnewuch
Technical Report, 2006
Abstract
We provide a deterministic algorithm that constructs small point sets
exhibiting a low star discrepancy. The algorithm is based on bracketing and on
recent results on randomized roundings respecting hard constraints. It is
structurally much simpler than the previous algorithm presented for this
problem in [B. Doerr, M. Gnewuch, A. Srivastav. Bounds and constructions for
the star discrepancy via $\delta$-covers. J. Complexity, 21: 691-709, 2005]. Besides
leading to better theoretical run time bounds, our approach can be implemented
with reasonable effort.
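For small point sets the quantity being minimized can be computed directly, which makes the objective concrete. The brute-force sketch below evaluates the star discrepancy of a 2D point set over the critical grid induced by the point coordinates; it is illustrative only (the cost grows rapidly with dimension and set size) and the function name is not from the paper:

```python
def star_discrepancy_2d(points):
    """Star discrepancy of a finite point set in [0,1)^2.

    The supremum over anchored boxes [0,a) x [0,b) is attained (in the
    limit) at boxes whose corner coordinates come from the point
    coordinates or 1, counting points with either an open or a closed
    upper boundary.
    """
    n = len(points)
    xs = sorted({p[0] for p in points} | {1.0})
    ys = sorted({p[1] for p in points} | {1.0})
    worst = 0.0
    for a in xs:
        for b in ys:
            vol = a * b
            open_cnt = sum(1 for p in points if p[0] < a and p[1] < b)
            closed_cnt = sum(1 for p in points if p[0] <= a and p[1] <= b)
            worst = max(worst,
                        abs(open_cnt / n - vol),
                        abs(closed_cnt / n - vol))
    return worst
```

A low-discrepancy construction, such as the one the report describes, aims to make this value small for a given budget of points.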
Export
BibTeX
@techreport{SemKiel,
TITLE = {Construction of Low-discrepancy Point Sets of Small Size by Bracketing Covers and Dependent Randomized Rounding},
AUTHOR = {Doerr, Benjamin and Gnewuch, Michael},
LANGUAGE = {eng},
NUMBER = {06-14},
INSTITUTION = {University Kiel},
ADDRESS = {Kiel},
YEAR = {2006},
DATE = {2006},
ABSTRACT = {We provide a deterministic algorithm that constructs small point sets exhibiting a low star discrepancy. The algorithm is based on bracketing and on recent results on randomized roundings respecting hard constraints. It is structurally much simpler than the previous algorithm presented for this problem in [B. Doerr, M. Gnewuch, A. Srivastav. Bounds and constructions for the star discrepancy via $\delta$-covers. J. Complexity, 21: 691-709, 2005]. Besides leading to better theoretical run time bounds, our approach can be implemented with reasonable effort.},
}
Endnote
%0 Report
%A Doerr, Benjamin
%A Gnewuch, Michael
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
External Organizations
%T Construction of Low-discrepancy Point Sets of Small Size by Bracketing Covers and Dependent Randomized Rounding :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0019-E49F-6
%Y University Kiel
%C Kiel
%D 2006
%X We provide a deterministic algorithm that constructs small point sets
exhibiting a low star discrepancy. The algorithm is based on bracketing and on
recent results on randomized roundings respecting hard constraints. It is
structurally much simpler than the previous algorithm presented for this
problem in [B. Doerr, M. Gnewuch, A. Srivastav. Bounds and constructions for
the star discrepancy via $\delta$-covers. J. Complexity, 21: 691-709, 2005]. Besides
leading to better theoretical run time bounds, our approach can be implemented
with reasonable effort.
Design and evaluation of backward compatible high dynamic range video compression
A. Efremov, R. Mantiuk, K. Myszkowski and H.-P. Seidel
Technical Report, 2006
A. Efremov, R. Mantiuk, K. Myszkowski and H.-P. Seidel
Technical Report, 2006
Abstract
In this report we describe the details of the backward compatible high
dynamic range (HDR) video compression algorithm. The algorithm is
designed to facilitate a smooth transition from standard low dynamic
range (LDR) video to high fidelity high dynamic range content. The HDR
and the corresponding LDR video frames are decorrelated and then
compressed into a single MPEG stream, which can be played on both
existing DVD players and HDR-enabled devices.
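The decorrelation idea can be sketched independently of the MPEG machinery. The toy example below layers an LDR base signal and an HDR residual, assuming a simple gamma tone-mapping curve; the real codec quantizes and compresses both layers, so all functions here are illustrative:

```python
def tone_map(hdr, gamma=2.2, peak=1.0):
    """Map linear HDR luminance into 8-bit LDR code values."""
    return [min(255, round(255 * (min(v, peak) / peak) ** (1 / gamma)))
            for v in hdr]

def inverse_tone_map(ldr, gamma=2.2, peak=1.0):
    """Predict HDR luminance from the LDR layer (the decoder's predictor)."""
    return [peak * (c / 255) ** gamma for c in ldr]

def encode(hdr):
    """Backward-compatible layering: LDR base layer plus HDR residual."""
    ldr = tone_map(hdr)
    prediction = inverse_tone_map(ldr)
    residual = [h - p for h, p in zip(hdr, prediction)]
    return ldr, residual  # legacy players use ldr; HDR decoders add residual

def decode_hdr(ldr, residual):
    return [p + r for p, r in zip(inverse_tone_map(ldr), residual)]
```

Legacy devices decode only the base layer, while HDR-capable decoders add the (decorrelated, hence compact) residual back to the prediction.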
Export
BibTeX
@techreport{EfremovMantiukMyszkowskiSeidel,
TITLE = {Design and evaluation of backward compatible high dynamic range video compression},
AUTHOR = {Efremov, Alexander and Mantiuk, Rafal and Myszkowski, Karol and Seidel, Hans-Peter},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-4-001},
NUMBER = {MPI-I-2006-4-001},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2006},
DATE = {2006},
ABSTRACT = {In this report we describe the details of the backward compatible high dynamic range (HDR) video compression algorithm. The algorithm is designed to facilitate a smooth transition from standard low dynamic range (LDR) video to high fidelity high dynamic range content. The HDR and the corresponding LDR video frames are decorrelated and then compressed into a single MPEG stream, which can be played on both existing DVD players and HDR-enabled devices.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Efremov, Alexander
%A Mantiuk, Rafal
%A Myszkowski, Karol
%A Seidel, Hans-Peter
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T Design and evaluation of backward compatible high dynamic range video compression :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6811-0
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-4-001
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2006
%P 50 p.
%X In this report we describe the details of the backward compatible high
dynamic range (HDR) video compression algorithm. The algorithm is
designed to facilitate a smooth transition from standard low dynamic
range (LDR) video to high fidelity high dynamic range content. The HDR
and the corresponding LDR video frames are decorrelated and then
compressed into a single MPEG stream, which can be played on both
existing DVD players and HDR-enabled devices.
%B Research Report / Max-Planck-Institut für Informatik
On the Complexity of Monotone Boolean Duality Testing
K. Elbassioni
Technical Report, 2006
K. Elbassioni
Technical Report, 2006
Abstract
We show that the duality of a pair of monotone Boolean functions in disjunctive
normal forms can be tested in polylogarithmic time using a quasi-polynomial
number of processors. Our decomposition technique yields stronger bounds on the
complexity of the problem than those currently known and also allows for
generating all minimal transversals of a given hypergraph using only polynomial
space.
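The problem itself is easy to state and, for tiny instances, to check by brute force. A minimal sketch, assuming monotone DNFs given as collections of sets of variable indices; unlike the paper's algorithm this check is exponential in the number of variables, and the names are illustrative:

```python
from itertools import product

def eval_dnf(dnf, true_vars):
    """Evaluate a monotone DNF (iterable of sets of variable indices)."""
    return any(term <= true_vars for term in dnf)

def are_dual(f, g, n):
    """f and g are dual iff g(x) = not f(complement of x) for all x."""
    variables = set(range(n))
    for bits in product([0, 1], repeat=n):
        x = {i for i, b in enumerate(bits) if b}
        if eval_dnf(g, x) == eval_dnf(f, variables - x):
            return False
    return True
```

For example, the dual of x0 OR x1 is x0 AND x1; the hypergraph view of the same question is whether one DNF's terms are exactly the minimal transversals of the other's.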
Export
BibTeX
@techreport{Elbassioni2006,
TITLE = {On the Complexity of Monotone {Boolean} Duality Testing},
AUTHOR = {Elbassioni, Khaled},
LANGUAGE = {eng},
NUMBER = {DIMACS TR: 2006-01},
INSTITUTION = {DIMACS},
ADDRESS = {Piscataway, NJ},
YEAR = {2006},
DATE = {2006},
ABSTRACT = {We show that the duality of a pair of monotone Boolean functions in disjunctive normal forms can be tested in polylogarithmic time using a quasi-polynomial number of processors. Our decomposition technique yields stronger bounds on the complexity of the problem than those currently known and also allows for generating all minimal transversals of a given hypergraph using only polynomial space.},
}
Endnote
%0 Report
%A Elbassioni, Khaled
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T On the Complexity of Monotone Boolean Duality Testing :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0019-E4CA-2
%Y DIMACS
%C Piscataway, NJ
%D 2006
%X We show that the duality of a pair of monotone Boolean functions in disjunctive
normal forms can be tested in polylogarithmic time using a quasi-polynomial
number of processors. Our decomposition technique yields stronger bounds on the
complexity of the problem than those currently known and also allows for
generating all minimal transversals of a given hypergraph using only polynomial
space.
Controlled Perturbation for Delaunay Triangulations
S. Funke, C. Klein, K. Mehlhorn and S. Schmitt
Technical Report, 2006
S. Funke, C. Klein, K. Mehlhorn and S. Schmitt
Technical Report, 2006
Export
BibTeX
@techreport{acstr123109-01,
TITLE = {Controlled Perturbation for Delaunay Triangulations},
AUTHOR = {Funke, Stefan and Klein, Christian and Mehlhorn, Kurt and Schmitt, Susanne},
LANGUAGE = {eng},
NUMBER = {ACS-TR-121103-03},
INSTITUTION = {Algorithms for Complex Shapes with certified topology and numerics},
ADDRESS = {Instituut voor Wiskunde en Informatica, Groningen, The Netherlands},
YEAR = {2006},
DATE = {2006},
}
Endnote
%0 Report
%A Funke, Stefan
%A Klein, Christian
%A Mehlhorn, Kurt
%A Schmitt, Susanne
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Controlled Perturbation for Delaunay Triangulations :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0027-F72F-3
%Y Algorithms for Complex Shapes with certified topology and numerics
%C Instituut voor Wiskunde en Informatica, Groningen, The Netherlands
%D 2006
Power assignment problems in wireless communication
S. Funke, S. Laue, R. Naujoks and L. Zvi
Technical Report, 2006
S. Funke, S. Laue, R. Naujoks and L. Zvi
Technical Report, 2006
Abstract
A fundamental class of problems in wireless communication is concerned
with the assignment of suitable transmission powers to wireless
devices/stations such that the
resulting communication graph satisfies certain desired properties and
the overall energy consumed is minimized. Many concrete communication
tasks in a
wireless network like broadcast, multicast, point-to-point routing,
creation of a communication backbone, etc. can be regarded as such a
power assignment problem.
This paper considers several problems of that kind; for example one
problem studied before in (Vittorio Bil{\`o} et al: Geometric Clustering
to Minimize the Sum
of Cluster Sizes, ESA 2005) and (Helmut Alt et al.: Minimum-cost
coverage of point sets by disks, SCG 2006) aims to select and assign
powers to $k$ of the
stations such that all other stations are within reach of at least one
of the selected stations. We improve the running time for obtaining a
$(1+\epsilon)$-approximate
solution for this problem from $n^{((\alpha/\epsilon)^{O(d)})}$ as
reported by Bil{\`o} et al. (see Vittorio Bil{\`o} et al: Geometric
Clustering to Minimize the Sum
of Cluster Sizes, ESA 2005) to
$O\left( n+ {\left(\frac{k^{2d+1}}{\epsilon^d}\right)}^{ \min{\{\;
2k,\;\; (\alpha/\epsilon)^{O(d)} \;\}} } \right)$ that is, we obtain a
running time that is \emph{linear}
in the network size. Further results include a constant approximation
algorithm for the TSP problem under squared (non-metric!) edge costs,
which can be employed to
implement a novel data aggregation protocol, as well as efficient
schemes to perform $k$-hop multicasts.
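For tiny instances the selection problem the authors approximate can be solved exactly, which makes the objective concrete. The brute-force sketch below assumes Euclidean positions, a path-loss exponent alpha, and the common simplification that each station is served by its nearest selected sender (the exact assignment subproblem is itself combinatorial); the paper's algorithms avoid this exponential search, and the names are illustrative:

```python
import math
from itertools import combinations

def min_cost_k_assignment(stations, k, alpha=2.0):
    """Pick k sender stations and powers minimizing total power such that
    every station is within range of some selected sender.

    A sender with power p reaches distance p**(1/alpha), so covering a
    client at distance d costs d**alpha.
    """
    best_cost, best_subset = math.inf, None
    for subset in combinations(range(len(stations)), k):
        ranges = [0.0] * k
        for s in stations:
            # serve each station from its nearest selected sender
            d, i = min((math.dist(s, stations[c]), i)
                       for i, c in enumerate(subset))
            ranges[i] = max(ranges[i], d)
        cost = sum(r ** alpha for r in ranges)
        if cost < best_cost:
            best_cost, best_subset = cost, subset
    return best_cost, best_subset
```

The superlinear power cost d**alpha is what makes these problems non-metric and is also the reason the abstract's TSP variant with squared edge costs needs its own approximation algorithm.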
Export
BibTeX
@techreport{FunkeLaueNaujoksZvi2006,
TITLE = {Power assignment problems in wireless communication},
AUTHOR = {Funke, Stefan and Laue, S{\"o}ren and Naujoks, Rouven and Zvi, Lotker},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-1-004},
NUMBER = {MPI-I-2006-1-004},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2006},
DATE = {2006},
ABSTRACT = {A fundamental class of problems in wireless communication is concerned with the assignment of suitable transmission powers to wireless devices/stations such that the resulting communication graph satisfies certain desired properties and the overall energy consumed is minimized. Many concrete communication tasks in a wireless network like broadcast, multicast, point-to-point routing, creation of a communication backbone, etc. can be regarded as such a power assignment problem. This paper considers several problems of that kind; for example one problem studied before in (Vittorio Bil{\`o} et al: Geometric Clustering to Minimize the Sum of Cluster Sizes, ESA 2005) and (Helmut Alt et al.: Minimum-cost coverage of point sets by disks, SCG 2006) aims to select and assign powers to $k$ of the stations such that all other stations are within reach of at least one of the selected stations. We improve the running time for obtaining a $(1+\epsilon)$-approximate solution for this problem from $n^{((\alpha/\epsilon)^{O(d)})}$ as reported by Bil{\`o} et al. (see Vittorio Bil{\`o} et al: Geometric Clustering to Minimize the Sum of Cluster Sizes, ESA 2005) to $O\left( n+ {\left(\frac{k^{2d+1}}{\epsilon^d}\right)}^{ \min{\{\; 2k,\;\; (\alpha/\epsilon)^{O(d)} \;\}} } \right)$ that is, we obtain a running time that is \emph{linear} in the network size. Further results include a constant approximation algorithm for the TSP problem under squared (non-metric!) edge costs, which can be employed to implement a novel data aggregation protocol, as well as efficient schemes to perform $k$-hop multicasts.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Funke, Stefan
%A Laue, Sören
%A Naujoks, Rouven
%A Zvi, Lotker
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
External Organizations
%T Power assignment problems in wireless communication :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6820-E
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-1-004
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2006
%P 25 p.
%X A fundamental class of problems in wireless communication is concerned
with the assignment of suitable transmission powers to wireless
devices/stations such that the
resulting communication graph satisfies certain desired properties and
the overall energy consumed is minimized. Many concrete communication
tasks in a
wireless network like broadcast, multicast, point-to-point routing,
creation of a communication backbone, etc. can be regarded as such a
power assignment problem.
This paper considers several problems of that kind; for example one
problem studied before in (Vittorio Bil{\`o} et al: Geometric Clustering
to Minimize the Sum
of Cluster Sizes, ESA 2005) and (Helmut Alt et al.: Minimum-cost
coverage of point sets by disks, SCG 2006) aims to select and assign
powers to $k$ of the
stations such that all other stations are within reach of at least one
of the selected stations. We improve the running time for obtaining a
$(1+\epsilon)$-approximate
solution for this problem from $n^{((\alpha/\epsilon)^{O(d)})}$ as
reported by Bil{\`o} et al. (see Vittorio Bil{\`o} et al: Geometric
Clustering to Minimize the Sum
of Cluster Sizes, ESA 2005) to
$O\left( n+ {\left(\frac{k^{2d+1}}{\epsilon^d}\right)}^{ \min{\{\;
2k,\;\; (\alpha/\epsilon)^{O(d)} \;\}} } \right)$ that is, we obtain a
running time that is \emph{linear}
in the network size. Further results include a constant approximation
algorithm for the TSP problem under squared (non-metric!) edge costs,
which can be employed to
implement a novel data aggregation protocol, as well as efficient
schemes to perform $k$-hop multicasts.
%B Research Report / Max-Planck-Institut für Informatik
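The $k$-station coverage variant described in the abstract above can be made concrete with a tiny exhaustive solver: pick $k$ stations, assign each the smallest transmission radius so every remaining station is in range of some selected one, and minimize the total power $\sum r^\alpha$. This is only an illustrative brute force (the function name and structure are chosen here, not taken from the report, whose contribution is a $(1+\epsilon)$-approximation that runs in time linear in the network size):

```python
from itertools import combinations, product
import math

def min_power_k_cover(points, k, alpha=2.0):
    """Exhaustive solver for the k-station coverage problem on tiny
    inputs: select k stations and give each a transmission radius so
    that every remaining station lies within range of a selected one,
    minimizing sum(radius ** alpha).  Enumerates both the subset and
    the point-to-station assignment, so it is exponential in n and
    usable only for very small instances."""
    n = len(points)
    d = [[math.hypot(p[0] - q[0], p[1] - q[1]) for q in points] for p in points]
    best = (float("inf"), None)
    for subset in combinations(range(n), k):
        rest = [i for i in range(n) if i not in subset]
        # enumerate every assignment of uncovered points to stations;
        # note that assigning each point to its *nearest* station is
        # not always optimal, since a point can piggyback on a radius
        # that another point already forces to be large
        for assign in product(subset, repeat=len(rest)):
            radius = {s: 0.0 for s in subset}
            for p, s in zip(rest, assign):
                radius[s] = max(radius[s], d[s][p])
            cost = sum(r ** alpha for r in radius.values())
            if cost < best[0]:
                best = (cost, subset)
    return best
```

For collinear stations at 0, 10, 5.1 and 4.9, the optimal 2-station solution places one zero-power station and lets a single radius of 4.9 cover both neighbors, which a nearest-station heuristic would miss.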
On fast construction of spatial hierarchies for ray tracing
V. Havran, R. Herzog and H.-P. Seidel
Technical Report, 2006
V. Havran, R. Herzog and H.-P. Seidel
Technical Report, 2006
Abstract
In this paper we address the problem of fast construction of spatial
hierarchies for ray tracing with applications in animated environments
including non-rigid animations. We discuss properties of currently
used techniques with $O(N \log N)$ construction time for kd-trees and
bounding volume hierarchies. Further, we propose a hybrid data
structure blending between a spatial kd-tree and bounding volume
primitives. We keep our novel hierarchical data structures
algorithmically efficient and comparable with kd-trees by the use of a
cost model based on surface area heuristics. Although the time
complexity $O(N \log N)$ is a lower bound required for construction of
any spatial hierarchy that corresponds to sorting based on
comparisons, using an approximate method based on discretization we
propose new hierarchical data structures with expected $O(N \log\log N)$ time complexity. We also discuss the constants behind the construction algorithms of spatial hierarchies that are important in practice. We document the performance of our algorithms by results obtained from the implementation tested on nine different scenes.
Export
BibTeX
@techreport{HavranHerzogSeidel2006,
TITLE = {On fast construction of spatial hierarchies for ray tracing},
AUTHOR = {Havran, Vlastimil and Herzog, Robert and Seidel, Hans-Peter},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-4-004},
NUMBER = {MPI-I-2006-4-004},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2006},
DATE = {2006},
ABSTRACT = {In this paper we address the problem of fast construction of spatial hierarchies for ray tracing with applications in animated environments including non-rigid animations. We discuss properties of currently used techniques with $O(N \log N)$ construction time for kd-trees and bounding volume hierarchies. Further, we propose a hybrid data structure blending between a spatial kd-tree and bounding volume primitives. We keep our novel hierarchical data structures algorithmically efficient and comparable with kd-trees by the use of a cost model based on surface area heuristics. Although the time complexity $O(N \log N)$ is a lower bound required for construction of any spatial hierarchy that corresponds to sorting based on comparisons, using an approximate method based on discretization we propose new hierarchical data structures with expected $O(N \log\log N)$ time complexity. We also discuss the constants behind the construction algorithms of spatial hierarchies that are important in practice. We document the performance of our algorithms by results obtained from the implementation tested on nine different scenes.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Havran, Vlastimil
%A Herzog, Robert
%A Seidel, Hans-Peter
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T On fast construction of spatial hierarchies for ray tracing :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6807-8
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-4-004
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2006
%P 40 p.
%X In this paper we address the problem of fast construction of spatial
hierarchies for ray tracing with applications in animated environments
including non-rigid animations. We discuss properties of currently
used techniques with $O(N \log N)$ construction time for kd-trees and
bounding volume hierarchies. Further, we propose a hybrid data
structure blending between a spatial kd-tree and bounding volume
primitives. We keep our novel hierarchical data structures
algorithmically efficient and comparable with kd-trees by the use of a
cost model based on surface area heuristics. Although the time
complexity $O(N \log N)$ is a lower bound required for construction of
any spatial hierarchy that corresponds to sorting based on
comparisons, using an approximate method based on discretization we
propose new hierarchical data structures with expected $O(N \log\log N)$ time complexity. We also discuss the constants behind the construction algorithms of spatial hierarchies that are important in practice. We document the performance of our algorithms by results obtained from the implementation tested on nine different scenes.
%B Research Report / Max-Planck-Institut für Informatik
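The cost model the abstract refers to, the surface area heuristic (SAH), can be sketched in a few lines: the expected cost of splitting a node is the traversal cost plus the two children's intersection costs weighted by their surface areas. This is a generic illustration of the heuristic (function names and the quadratic sweep are choices made here; production builders sort once and sweep in $O(N \log N)$, and the report's hybrid kd-tree/BVH structure is considerably more involved):

```python
def surface_area(lo, hi):
    """Surface area of an axis-aligned box given min/max corners."""
    dx, dy, dz = (hi[i] - lo[i] for i in range(3))
    return 2.0 * (dx * dy + dy * dz + dz * dx)

def best_sah_split(boxes, bounds, axis, c_trav=1.0, c_isect=1.5):
    """Try candidate split planes at primitive-box boundaries along one
    axis and return (cost, position) minimizing the surface area
    heuristic: cost = c_trav + c_isect * (SA(L)*N_L + SA(R)*N_R) / SA(P).
    Quadratic scan, kept simple for clarity."""
    lo, hi = bounds
    inv_sa = 1.0 / surface_area(lo, hi)
    candidates = sorted({b[0][axis] for b in boxes} | {b[1][axis] for b in boxes})
    best = (float("inf"), None)
    for pos in candidates:
        if not (lo[axis] < pos < hi[axis]):
            continue
        n_l = sum(1 for b in boxes if b[0][axis] < pos)   # overlap left half
        n_r = sum(1 for b in boxes if b[1][axis] > pos)   # overlap right half
        l_hi = list(hi); l_hi[axis] = pos
        r_lo = list(lo); r_lo[axis] = pos
        cost = c_trav + c_isect * inv_sa * (
            surface_area(lo, l_hi) * n_l + surface_area(r_lo, hi) * n_r)
        if cost < best[0]:
            best = (cost, pos)
    return best
```

A split is taken only if its SAH cost beats the leaf cost (here `c_isect` times the primitive count), which is exactly the pruning criterion that keeps such hierarchies shallow where geometry is sparse.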
Yago - a core of semantic knowledge
G. Kasneci, F. Suchanek and G. Weikum
Technical Report, 2006
G. Kasneci, F. Suchanek and G. Weikum
Technical Report, 2006
Abstract
We present YAGO, a light-weight and extensible ontology with high coverage and quality.
YAGO builds on entities and relations and currently contains roughly 900,000 entities and 5,000,000 facts.
This includes the Is-A hierarchy as well as non-taxonomic relations between entities (such as the relation hasWonPrize).
The facts have been automatically extracted from the unification of Wikipedia and WordNet,
using a carefully designed combination of rule-based and heuristic methods described in this paper.
The resulting knowledge base is a major step beyond WordNet: in quality by adding knowledge about
individuals like persons, organizations, products, etc. with their semantic relationships --
and in quantity by increasing the number of facts by more than an order of magnitude.
Our empirical evaluation of fact correctness shows an accuracy of about 95%.
YAGO is based on a logically clean model, which is decidable, extensible, and compatible with RDFS.
Finally, we show how YAGO can be further extended by state-of-the-art information extraction techniques.
Export
BibTeX
@techreport{KasneciSuchanekWeikum2006,
TITLE = {Yago -- a core of semantic knowledge},
AUTHOR = {Kasneci, Gjergji and Suchanek, Fabian and Weikum, Gerhard},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-5-006},
NUMBER = {MPI-I-2006-5-006},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2006},
DATE = {2006},
ABSTRACT = {We present YAGO, a light-weight and extensible ontology with high coverage and quality. YAGO builds on entities and relations and currently contains roughly 900,000 entities and 5,000,000 facts. This includes the Is-A hierarchy as well as non-taxonomic relations between entities (such as the relation hasWonPrize). The facts have been automatically extracted from the unification of Wikipedia and WordNet, using a carefully designed combination of rule-based and heuristic methods described in this paper. The resulting knowledge base is a major step beyond WordNet: in quality by adding knowledge about individuals like persons, organizations, products, etc. with their semantic relationships -- and in quantity by increasing the number of facts by more than an order of magnitude. Our empirical evaluation of fact correctness shows an accuracy of about 95%. YAGO is based on a logically clean model, which is decidable, extensible, and compatible with RDFS. Finally, we show how YAGO can be further extended by state-of-the-art information extraction techniques.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Kasneci, Gjergji
%A Suchanek, Fabian
%A Weikum, Gerhard
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
%T Yago - a core of semantic knowledge :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-670A-A
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-5-006
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2006
%P 39 p.
%X We present YAGO, a light-weight and extensible ontology with high coverage and quality.
YAGO builds on entities and relations and currently contains roughly 900,000 entities and 5,000,000 facts.
This includes the Is-A hierarchy as well as non-taxonomic relations between entities (such as the relation hasWonPrize).
The facts have been automatically extracted from the unification of Wikipedia and WordNet,
using a carefully designed combination of rule-based and heuristic methods described in this paper.
The resulting knowledge base is a major step beyond WordNet: in quality by adding knowledge about
individuals like persons, organizations, products, etc. with their semantic relationships --
and in quantity by increasing the number of facts by more than an order of magnitude.
Our empirical evaluation of fact correctness shows an accuracy of about 95%.
YAGO is based on a logically clean model, which is decidable, extensible, and compatible with RDFS.
Finally, we show how YAGO can be further extended by state-of-the-art information extraction techniques.
%B Research Report / Max-Planck-Institut für Informatik
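The entity-relation model the abstract describes can be illustrated with a toy triple store: facts are (subject, relation, object) triples, subClassOf edges form the taxonomic backbone, and instance queries follow them transitively. This is a deliberately minimal stand-in (class and method names are invented here; YAGO's actual model adds decidability guarantees and RDFS compatibility):

```python
class TripleStore:
    """A toy fact store in the spirit of entity-relation triples.
    'subClassOf' edges form the Is-A hierarchy; instances_of() follows
    them transitively, and arbitrary non-taxonomic relations such as
    hasWonPrize are just further triples."""

    def __init__(self):
        self.facts = set()

    def add(self, s, r, o):
        self.facts.add((s, r, o))

    def objects(self, s, r):
        """All objects o with a fact (s, r, o)."""
        return {o for (s2, r2, o) in self.facts if (s2, r2) == (s, r)}

    def instances_of(self, cls):
        """All entities whose 'type' reaches cls via subClassOf edges."""
        subs, stack = {cls}, [cls]
        while stack:                       # transitive closure of subclasses
            c = stack.pop()
            for (s, r, o) in self.facts:
                if r == "subClassOf" and o == c and s not in subs:
                    subs.add(s)
                    stack.append(s)
        return {s for (s, r, o) in self.facts if r == "type" and o in subs}
```

Querying for instances of a class then transparently includes instances of all its subclasses, which is the behavior a clean Is-A hierarchy is meant to give.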
Division-free computation of subresultants using Bezout matrices
M. Kerber
Technical Report, 2006
M. Kerber
Technical Report, 2006
Abstract
We present an algorithm to compute the subresultant sequence of two polynomials that completely avoids division in the ground domain, generalizing an algorithm from Abdeljaoued et al.\ (see Abdeljaoed et al.: Minors of Bezout Matrices\ldots, Int.\ J.\ of Comp.\ Math.\ 81, 2004). This is done by evaluating determinants of slightly manipulated Bezout matrices using an algorithm of Berkowitz. Experiments show that our algorithm is superior compared to pseudo-division approaches for moderate degrees if the domain contains indeterminates.
Export
BibTeX
@techreport{Kerber2006,
TITLE = {Division-free computation of subresultants using Bezout matrices},
AUTHOR = {Kerber, Michael},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-1-006},
NUMBER = {MPI-I-2006-1-006},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2006},
DATE = {2006},
ABSTRACT = {We present an algorithm to compute the subresultant sequence of two polynomials that completely avoids division in the ground domain, generalizing an algorithm from Abdeljaoued et al.\ (see Abdeljaoed et al.: Minors of Bezout Matrices\ldots, Int.\ J.\ of Comp.\ Math.\ 81, 2004). This is done by evaluating determinants of slightly manipulated Bezout matrices using an algorithm of Berkowitz. Experiments show that our algorithm is superior compared to pseudo-division approaches for moderate degrees if the domain contains indeterminates.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Kerber, Michael
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Division-free computation of subresultants using Bezout matrices :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-681D-7
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-1-006
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2006
%P 20 p.
%X We present an algorithm to compute the subresultant sequence of two polynomials that completely avoids division in the ground domain, generalizing an algorithm from Abdeljaoued et al.\ (see Abdeljaoed et al.: Minors of Bezout Matrices\ldots, Int.\ J.\ of Comp.\ Math.\ 81, 2004). This is done by evaluating determinants of slightly manipulated Bezout matrices using an algorithm of Berkowitz. Experiments show that our algorithm is superior compared to pseudo-division approaches for moderate degrees if the domain contains indeterminates.
%B Research Report / Max-Planck-Institut für Informatik
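The two ingredients the abstract names, Bezout matrices and division-free determinant evaluation, can be sketched for integer polynomials. The Bezout matrix collects the coefficients of $(p(x)q(y)-p(y)q(x))/(x-y)$, and a fraction-free (Bareiss-style) elimination evaluates its determinant with exact divisions only; the determinant of the full matrix equals the resultant up to sign. (Function names are chosen here; the report evaluates minors of slightly manipulated Bezout matrices with Berkowitz's algorithm to get the whole subresultant sequence.)

```python
def bezout_matrix(p, q):
    """Bezout matrix of two polynomials given as ascending integer
    coefficient lists.  B[i][j] is the coefficient of x^i y^j in
    (p(x)q(y) - p(y)q(x)) / (x - y); all entries stay in the ground
    domain, no division in the coefficients is needed."""
    n = max(len(p), len(q)) - 1
    p = p + [0] * (n + 1 - len(p))
    q = q + [0] * (n + 1 - len(q))
    # N[i][j]: coefficient of x^i y^j in the numerator p(x)q(y) - p(y)q(x)
    N = [[p[i] * q[j] - q[i] * p[j] for j in range(n + 1)] for i in range(n + 1)]
    # exact division by (x - y): B_{n-1} = N_n, then B_{i-1} = N_i + y*B_i
    B = [None] * n
    B[n - 1] = N[n][:n]
    for i in range(n - 1, 0, -1):
        shifted = [0] + B[i][:-1]            # y * B_i, degree <= n-1
        B[i - 1] = [N[i][j] + shifted[j] for j in range(n)]
    return B

def bareiss_det(M):
    """Fraction-free Bareiss determinant for integer matrices: every
    division is exact, so the result is an exact integer."""
    M = [row[:] for row in M]
    n, sign, prev = len(M), 1, 1
    for k in range(n - 1):
        if M[k][k] == 0:                     # pivot by row swap if needed
            for r in range(k + 1, n):
                if M[r][k] != 0:
                    M[k], M[r] = M[r], M[k]
                    sign = -sign
                    break
            else:
                return 0
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                M[i][j] = (M[i][j] * M[k][k] - M[i][k] * M[k][j]) // prev
        prev = M[k][k]
    return sign * M[-1][-1]
```

For $p = x^2 - 1$ and $q = x - 2$ the Bezoutian is $xy - 2x - 2y + 1$, and the determinant $\pm 3$ matches the resultant $q(1)\,q(-1) = 3$ up to sign.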
Exploiting Community Behavior for Enhanced Link Analysis and Web Search
J. Luxenburger and G. Weikum
Technical Report, 2006
J. Luxenburger and G. Weikum
Technical Report, 2006
Abstract
Methods for Web link analysis and authority ranking such as PageRank are based
on the assumption that a user endorses a Web page when creating a hyperlink to
this page. There is a wealth of additional user-behavior information that could
be considered for improving authority analysis, for example, the history of
queries that a user community posed to a search engine over an extended time
period, or observations about which query-result pages were clicked on and
which ones were not clicked on after a user saw the summary snippets of the
top-10 results. This paper enhances link analysis methods by incorporating
additional user assessments based on query logs and click streams, including
negative feedback when a query-result page does not satisfy the user demand or
is even perceived as spam. Our methods use various novel forms of advanced
Markov models whose states correspond to users and queries in addition to Web
pages and whose links also reflect the relationships derived from query-result
clicks, query refinements, and explicit ratings. Preliminary experiments are
presented as a proof of concept.
Export
BibTeX
@techreport{TechReportDelis0447_2006,
TITLE = {Exploiting Community Behavior for Enhanced Link Analysis and Web Search},
AUTHOR = {Luxenburger, Julia and Weikum, Gerhard},
LANGUAGE = {eng},
NUMBER = {DELIS-TR-0447},
INSTITUTION = {University of Paderborn, Heinz Nixdorf Institute},
ADDRESS = {Paderborn, Germany},
YEAR = {2006},
DATE = {2006},
ABSTRACT = {Methods for Web link analysis and authority ranking such as PageRank are based on the assumption that a user endorses a Web page when creating a hyperlink to this page. There is a wealth of additional user-behavior information that could be considered for improving authority analysis, for example, the history of queries that a user community posed to a search engine over an extended time period, or observations about which query-result pages were clicked on and which ones were not clicked on after a user saw the summary snippets of the top-10 results. This paper enhances link analysis methods by incorporating additional user assessments based on query logs and click streams, including negative feedback when a query-result page does not satisfy the user demand or is even perceived as spam. Our methods use various novel forms of advanced Markov models whose states correspond to users and queries in addition to Web pages and whose links also reflect the relationships derived from query-result clicks, query refinements, and explicit ratings. Preliminary experiments are presented as a proof of concept.},
}
Endnote
%0 Report
%A Luxenburger, Julia
%A Weikum, Gerhard
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
%T Exploiting Community Behavior for Enhanced Link Analysis and Web Search :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0019-BC47-9
%Y University of Paderborn, Heinz Nixdorf Institute
%C Paderborn, Germany
%D 2006
%X Methods for Web link analysis and authority ranking such as PageRank are based
on the assumption that a user endorses a Web page when creating a hyperlink to
this page. There is a wealth of additional user-behavior information that could
be considered for improving authority analysis, for example, the history of
queries that a user community posed to a search engine over an extended time
period, or observations about which query-result pages were clicked on and
which ones were not clicked on after a user saw the summary snippets of the
top-10 results. This paper enhances link analysis methods by incorporating
additional user assessments based on query logs and click streams, including
negative feedback when a query-result page does not satisfy the user demand or
is even perceived as spam. Our methods use various novel forms of advanced
Markov models whose states correspond to users and queries in addition to Web
pages and whose links also reflect the relationships derived from query-result
clicks, query refinements, and explicit ratings. Preliminary experiments are
presented as a proof of concept.
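The Markov-model view in the abstract above reduces, in its simplest form, to PageRank-style power iteration over a state set that mixes pages with queries (and potentially users). The sketch below is a generic random-surfer iteration on such a heterogeneous graph, with dangling states restarting uniformly; it is an assumption-laden miniature, not the paper's actual model (which also encodes negative feedback, query refinements, and explicit ratings):

```python
def stationary_scores(links, damping=0.85, iters=100):
    """Power iteration for a PageRank-style Markov chain over an
    arbitrary state set (pages and queries alike).  `links` maps each
    state to the states it endorses; dangling states and the
    (1 - damping) teleport mass restart uniformly."""
    states = sorted(set(links) | {t for ts in links.values() for t in ts})
    n = len(states)
    score = {s: 1.0 / n for s in states}
    for _ in range(iters):
        nxt = dict.fromkeys(states, (1.0 - damping) / n)
        for s in states:
            outs = links.get(s) or states      # dangling -> uniform restart
            share = damping * score[s] / len(outs)
            for t in outs:
                nxt[t] += share
        score = nxt
    return score
```

A query state that links to its clicked result pages funnels authority to them, so a page clicked from several sources outranks one reachable from a single hyperlink.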
Feature-preserving non-local denoising of static and time-varying range data
O. Schall, A. Belyaev and H.-P. Seidel
Technical Report, 2006
O. Schall, A. Belyaev and H.-P. Seidel
Technical Report, 2006
Abstract
We present a novel algorithm for accurately denoising static and
time-varying range data. Our approach is inspired by
similarity-based non-local image filtering. We show that our
proposed method is easy to implement and outperforms recent
state-of-the-art filtering approaches. Furthermore, it preserves fine
shape features and produces an accurate smoothing result in the
spatial and along the time domain.
Export
BibTeX
@techreport{SchallBelyaevSeidel2006,
TITLE = {Feature-preserving non-local denoising of static and time-varying range data},
AUTHOR = {Schall, Oliver and Belyaev, Alexander and Seidel, Hans-Peter},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-4-007},
NUMBER = {MPI-I-2006-4-007},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2006},
DATE = {2006},
ABSTRACT = {We present a novel algorithm for accurately denoising static and time-varying range data. Our approach is inspired by similarity-based non-local image filtering. We show that our proposed method is easy to implement and outperforms recent state-of-the-art filtering approaches. Furthermore, it preserves fine shape features and produces an accurate smoothing result in the spatial and along the time domain.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Schall, Oliver
%A Belyaev, Alexander
%A Seidel, Hans-Peter
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T Feature-preserving non-local denoising of static and time-varying range data :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-673D-7
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-4-007
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2006
%P 22 p.
%X We present a novel algorithm for accurately denoising static and
time-varying range data. Our approach is inspired by
similarity-based non-local image filtering. We show that our
proposed method is easy to implement and outperforms recent
state-of-the-art filtering approaches. Furthermore, it preserves fine
shape features and produces an accurate smoothing result in the
spatial and along the time domain.
%B Research Report / Max-Planck-Institut für Informatik
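The similarity-based non-local filtering that inspired the method can be shown on a 1-D range scan: each sample is replaced by a weighted average of samples whose local neighborhoods look alike, so noise on a flat patch is averaged away while a depth discontinuity draws its weights almost entirely from its own side. This is the textbook non-local means idea in miniature (parameter names and values are chosen here), not the report's 2D/temporal algorithm:

```python
import math

def nonlocal_means_1d(signal, patch=1, search=5, h=0.2):
    """Similarity-weighted smoothing of a 1-D range scan.  Each sample
    is averaged with samples in a search window, weighted by the
    squared distance between their local patches, so a sharp step
    receives almost no contribution from the far side."""
    n = len(signal)

    def patch_at(i):
        # local neighborhood with clamped borders
        return [signal[min(max(j, 0), n - 1)]
                for j in range(i - patch, i + patch + 1)]

    out = []
    for i in range(n):
        pi = patch_at(i)
        wsum = vsum = 0.0
        for j in range(max(0, i - search), min(n, i + search + 1)):
            d2 = sum((a - b) ** 2 for a, b in zip(pi, patch_at(j)))
            w = math.exp(-d2 / (h * h))     # similar patches -> weight near 1
            wsum += w
            vsum += w * signal[j]
        out.append(vsum / wsum)
    return out
```

On a noisy step signal the two plateaus are smoothed toward 0 and 1 respectively while the step between samples 3 and 4 survives, which is the feature-preserving behavior the abstract claims for range data.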
Combining linguistic and statistical analysis to extract relations from web documents
F. Suchanek, G. Ifrim and G. Weikum
Technical Report, 2006
F. Suchanek, G. Ifrim and G. Weikum
Technical Report, 2006
Abstract
Search engines, question answering systems and classification systems
alike can greatly profit from formalized world knowledge.
Unfortunately, manually compiled collections of world knowledge (such
as WordNet or the Suggested Upper Merged Ontology SUMO) often suffer
from low coverage, high assembling costs and fast aging. In contrast,
the World Wide Web provides an endless source of knowledge, assembled
by millions of people, updated constantly and available for free. In
this paper, we propose a novel method for learning arbitrary binary
relations from natural language Web documents, without human
interaction. Our system, LEILA, combines linguistic analysis and
machine learning techniques to find robust patterns in the text and to
generalize them. For initialization, we only require a set of examples
of the target relation and a set of counterexamples (e.g. from
WordNet). The architecture consists of 3 stages: Finding patterns in
the corpus based on the given examples, assessing the patterns based on
probabilistic confidence, and applying the generalized patterns to
propose pairs for the target relation. We prove the benefits and
practical viability of our approach by extensive experiments, showing
that LEILA achieves consistent improvements over existing comparable
techniques (e.g. Snowball, TextToOnto).
Export
BibTeX
@techreport{Suchanek2006,
TITLE = {Combining linguistic and statistical analysis to extract relations from web documents},
AUTHOR = {Suchanek, Fabian and Ifrim, Georgiana and Weikum, Gerhard},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-5-004},
NUMBER = {MPI-I-2006-5-004},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2006},
DATE = {2006},
ABSTRACT = {Search engines, question answering systems and classification systems alike can greatly profit from formalized world knowledge. Unfortunately, manually compiled collections of world knowledge (such as WordNet or the Suggested Upper Merged Ontology SUMO) often suffer from low coverage, high assembling costs and fast aging. In contrast, the World Wide Web provides an endless source of knowledge, assembled by millions of people, updated constantly and available for free. In this paper, we propose a novel method for learning arbitrary binary relations from natural language Web documents, without human interaction. Our system, LEILA, combines linguistic analysis and machine learning techniques to find robust patterns in the text and to generalize them. For initialization, we only require a set of examples of the target relation and a set of counterexamples (e.g. from WordNet). The architecture consists of 3 stages: Finding patterns in the corpus based on the given examples, assessing the patterns based on probabilistic confidence, and applying the generalized patterns to propose pairs for the target relation. We prove the benefits and practical viability of our approach by extensive experiments, showing that LEILA achieves consistent improvements over existing comparable techniques (e.g. Snowball, TextToOnto).},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Suchanek, Fabian
%A Ifrim, Georgiana
%A Weikum, Gerhard
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
%T Combining linguistic and statistical analysis to extract relations from web documents :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6710-9
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-5-004
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2006
%P 37 p.
%X Search engines, question answering systems and classification systems
alike can greatly profit from formalized world knowledge.
Unfortunately, manually compiled collections of world knowledge (such
as WordNet or the Suggested Upper Merged Ontology SUMO) often suffer
from low coverage, high assembling costs and fast aging. In contrast,
the World Wide Web provides an endless source of knowledge, assembled
by millions of people, updated constantly and available for free. In
this paper, we propose a novel method for learning arbitrary binary
relations from natural language Web documents, without human
interaction. Our system, LEILA, combines linguistic analysis and
machine learning techniques to find robust patterns in the text and to
generalize them. For initialization, we only require a set of examples
of the target relation and a set of counterexamples (e.g. from
WordNet). The architecture consists of 3 stages: Finding patterns in
the corpus based on the given examples, assessing the patterns based on
probabilistic confidence, and applying the generalized patterns to
propose pairs for the target relation. We prove the benefits and
practical viability of our approach by extensive experiments, showing
that LEILA achieves consistent improvements over existing comparable
techniques (e.g. Snowball, TextToOnto).
%B Research Report / Max-Planck-Institut für Informatik
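The pattern-learning stage described in the abstract can be caricatured in a few lines: from known example pairs, record the word sequence linking them in a sentence, then apply the learned patterns elsewhere to propose new pairs. This is a drastically simplified surface-pattern stand-in (all names here are invented); LEILA itself uses full linguistic analysis, counterexamples, and probabilistic confidence assessment:

```python
import re

def learn_patterns(corpus, examples):
    """Record the word sequence that links a known (X, Y) example pair
    inside a sentence as a candidate pattern."""
    patterns = set()
    for sent in corpus:
        for x, y in examples:
            m = re.search(re.escape(x) + r"\s+(.+?)\s+" + re.escape(y), sent)
            if m:
                patterns.add(m.group(1))
    return patterns

def apply_patterns(corpus, patterns):
    """Propose new pairs wherever a learned pattern links two
    capitalized tokens (a crude proxy for named entities)."""
    pairs = set()
    for sent in corpus:
        for pat in patterns:
            for m in re.finditer(r"([A-Z]\w+)\s+" + re.escape(pat) +
                                 r"\s+([A-Z]\w+)", sent):
                pairs.add((m.group(1), m.group(2)))
    return pairs
```

Given only the seed pair (Paris, France), the pattern "is the capital of" generalizes to propose (Berlin, Germany) from an unseen sentence, which is the bootstrapping step assessed and filtered in the full system.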
Enhanced dynamic reflectometry for relightable free-viewpoint video
C. Theobalt, N. Ahmed, H. P. A. Lensch, M. A. Magnor and H.-P. Seidel
Technical Report, 2006
C. Theobalt, N. Ahmed, H. P. A. Lensch, M. A. Magnor and H.-P. Seidel
Technical Report, 2006
Abstract
Free-Viewpoint Video of Human Actors allows photo-
realistic rendering of real-world people under novel viewing
conditions. Dynamic Reflectometry extends the concept of free-viewpoint
video and additionally allows rendering under novel lighting
conditions. In this work, we present an enhanced method for capturing
human shape and motion as well as dynamic surface reflectance
properties from a sparse set of input video streams.
We augment our initial method for model-based relightable
free-viewpoint video in several ways. Firstly, a single-skin
mesh is introduced for the continuous appearance of the model.
Moreover, an algorithm to detect and
compensate for the lateral shifting of textiles in order to improve
temporal texture registration is presented. Finally, a
structured resampling approach is introduced which enables
reliable estimation of spatially varying surface reflectance
despite a static recording setup.
The new algorithm ingredients along with the Relightable 3D
Video framework enable us to realistically reproduce the
appearance of animated virtual actors under different
lighting conditions, as well as to interchange surface
attributes among different people, e.g. for virtual
dressing. Our contribution can be used to create 3D
renditions of real-world people under arbitrary novel
lighting conditions on standard graphics hardware.
Export
BibTeX
@techreport{TheobaltAhmedLenschMagnorSeidel2006,
TITLE = {Enhanced dynamic reflectometry for relightable free-viewpoint video},
AUTHOR = {Theobalt, Christian and Ahmed, Naveed and Lensch, Hendrik P. A. and Magnor, Marcus A. and Seidel, Hans-Peter},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-4-006},
NUMBER = {MPI-I-2006-4-006},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2006},
DATE = {2006},
ABSTRACT = {Free-Viewpoint Video of Human Actors allows photo-realistic rendering of real-world people under novel viewing conditions. Dynamic Reflectometry extends the concept of free-viewpoint video and additionally allows rendering under novel lighting conditions. In this work, we present an enhanced method for capturing human shape and motion as well as dynamic surface reflectance properties from a sparse set of input video streams. We augment our initial method for model-based relightable free-viewpoint video in several ways. Firstly, a single-skin mesh is introduced for the continuous appearance of the model. Moreover, an algorithm to detect and compensate for the lateral shifting of textiles in order to improve temporal texture registration is presented. Finally, a structured resampling approach is introduced which enables reliable estimation of spatially varying surface reflectance despite a static recording setup. The new algorithm ingredients along with the Relightable 3D Video framework enable us to realistically reproduce the appearance of animated virtual actors under different lighting conditions, as well as to interchange surface attributes among different people, e.g. for virtual dressing. Our contribution can be used to create 3D renditions of real-world people under arbitrary novel lighting conditions on standard graphics hardware.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Theobalt, Christian
%A Ahmed, Naveed
%A Lensch, Hendrik P. A.
%A Magnor, Marcus A.
%A Seidel, Hans-Peter
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Graphics - Optics - Vision, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T Enhanced dynamic reflectometry for relightable free-viewpoint video :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-67F4-B
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-4-006
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2006
%P 37 p.
%X Free-Viewpoint Video of Human Actors allows photo-
realistic rendering of real-world people under novel viewing
conditions. Dynamic Reflectometry extends the concept of free-viewpoint
video and additionally allows rendering under novel lighting
conditions. In this work, we present an enhanced method for capturing
human shape and motion as well as dynamic surface reflectance
properties from a sparse set of input video streams.
We augment our initial method for model-based relightable
free-viewpoint video in several ways. Firstly, a single-skin
mesh is introduced for the continuous appearance of the model.
Moreover, an algorithm to detect and
compensate for the lateral shifting of textiles in order to improve
temporal texture registration is presented. Finally, a
structured resampling approach is introduced which enables
reliable estimation of spatially varying surface reflectance
despite a static recording setup.
The new algorithm ingredients along with the Relightable 3D
Video framework enable us to realistically reproduce the
appearance of animated virtual actors under different
lighting conditions, as well as to interchange surface
attributes among different people, e.g. for virtual
dressing. Our contribution can be used to create 3D
renditions of real-world people under arbitrary novel
lighting conditions on standard graphics hardware.
%B Research Report / Max-Planck-Institut für Informatik
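The end result described above, rendering a captured person under arbitrary novel lighting, comes down to evaluating the estimated per-surface reflectance against a new light source. A single-texel Lambertian evaluation conveys the idea (the function below is a stand-in written for this sketch; the report's reflectance model also covers specular terms and time-varying normals, and `light_dir` is assumed to be unit length):

```python
def relight(normal, albedo, light_dir, light_color):
    """Evaluate one Lambertian surface sample under a novel directional
    light: the diffuse term is albedo * light * max(0, N.L)."""
    ndotl = max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
    return tuple(a * c * ndotl for a, c in zip(albedo, light_color))
```

Because reflectance is stored per surface point rather than baked into the video, the same captured performance can be replayed under any such light, and swapping albedo maps between people gives the "virtual dressing" effect the abstract mentions.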
GPU point list generation through histogram pyramids
G. Ziegler, A. Tevs, C. Theobalt and H.-P. Seidel
Technical Report, 2006
G. Ziegler, A. Tevs, C. Theobalt and H.-P. Seidel
Technical Report, 2006
Abstract
Image Pyramids are frequently used in porting non-local algorithms to graphics
hardware. A Histogram pyramid (short: HistoPyramid), a special version
of an image pyramid, sums up the number of active entries in a 2D
image hierarchically. We show how a HistoPyramid can be utilized as an implicit indexing data
structure, allowing us to convert a sparse matrix into a coordinate list
of active cell entries (a point list) on graphics hardware. The algorithm
reduces a highly sparse matrix with N elements to a list of its M active
entries in O(N) + M (log N) steps, despite the restricted graphics
hardware architecture. Applications are numerous, including feature
detection, pixel classification and binning, conversion of 3D volumes to
particle clouds and sparse matrix compression.
Export
BibTeX
@techreport{ZieglerTevsTheobaltSeidel2006,
TITLE = {{GPU} point list generation through histogram pyramids},
AUTHOR = {Ziegler, Gernot and Tevs, Art and Theobalt, Christian and Seidel, Hans-Peter},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-4-002},
NUMBER = {MPI-I-2006-4-002},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2006},
DATE = {2006},
ABSTRACT = {Image Pyramids are frequently used in porting non-local algorithms to graphics hardware. A Histogram pyramid (short: HistoPyramid), a special version of an image pyramid, sums up the number of active entries in a 2D image hierarchically. We show how a HistoPyramid can be utilized as an implicit indexing data structure, allowing us to convert a sparse matrix into a coordinate list of active cell entries (a point list) on graphics hardware. The algorithm reduces a highly sparse matrix with N elements to a list of its M active entries in O(N) + M (log N) steps, despite the restricted graphics hardware architecture. Applications are numerous, including feature detection, pixel classification and binning, conversion of 3D volumes to particle clouds and sparse matrix compression.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Ziegler, Gernot
%A Tevs, Art
%A Theobalt, Christian
%A Seidel, Hans-Peter
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T GPU point list generation through histogram pyramids :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-680E-9
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2006-4-002
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2006
%P 13 p.
%X Image pyramids are frequently used in porting non-local algorithms to graphics
hardware. A histogram pyramid (HistoPyramid for short), a special version
of an image pyramid, hierarchically sums up the number of active entries in a 2D
image. We show how a HistoPyramid can be utilized as an implicit indexing data
structure, allowing us to convert a sparse matrix into a coordinate list
of active cell entries (a point list) on graphics hardware. The algorithm
reduces a highly sparse matrix with N elements to a list of its M active
entries in O(N) + M log N steps, despite the restricted graphics
hardware architecture. Applications are numerous, including feature
detection, pixel classification and binning, conversion of 3D volumes to
particle clouds and sparse matrix compression.
%B Research Report / Max-Planck-Institut für Informatik
2005
Improved algorithms for all-pairs approximate shortest paths in weighted graphs
S. Baswana and K. Telikepalli
Technical Report, 2005
S. Baswana and K. Telikepalli
Technical Report, 2005
Abstract
The all-pairs approximate shortest-paths problem is an interesting
variant of the classical all-pairs shortest-paths problem in graphs.
The problem aims at building a data-structure for a given graph
with the following two features. Firstly, for any two vertices,
it should report an {\emph{approximate}} shortest path between them,
that is, a path which is longer than the shortest path
by some {\emph{small}} factor. Secondly, the data-structure should require
less preprocessing time (strictly sub-cubic) and occupy optimal space
(sub-quadratic), at the cost of this approximation.
In this paper, we present algorithms for computing all-pairs approximate
shortest paths in a weighted undirected graph. These algorithms significantly
improve the existing results for this problem.
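The trade-off the abstract defines, reporting paths longer than optimal by only a small stretch factor, can be made concrete with a small check. The sketch below is purely illustrative (it uses a spanning path of a 4-cycle as a stand-in substitute, not the report's construction) and measures the worst-case stretch of a sparser stand-in structure against exact Dijkstra distances:

```python
import heapq

def dijkstra(adj, s):
    """Exact single-source shortest paths in a weighted graph."""
    dist = {s: 0.0}
    pq = [(0.0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def max_stretch(adj, approx_adj):
    """Largest factor by which distances in the sparser structure
    exceed the exact shortest-path distances."""
    worst = 1.0
    for u in adj:
        exact, approx = dijkstra(adj, u), dijkstra(approx_adj, u)
        for v in adj:
            if v != u:
                worst = max(worst, approx[v] / exact[v])
    return worst

# A 4-cycle with unit weights; the approximate structure keeps only a path.
cycle = {0: [(1, 1.0), (3, 1.0)], 1: [(0, 1.0), (2, 1.0)],
         2: [(1, 1.0), (3, 1.0)], 3: [(2, 1.0), (0, 1.0)]}
path = {0: [(1, 1.0)], 1: [(0, 1.0), (2, 1.0)],
        2: [(1, 1.0), (3, 1.0)], 3: [(2, 1.0)]}
```

Here the path reports the distance 0 to 3 as 3 while the true distance is 1, a stretch of 3; the report's algorithms bound such stretch while keeping preprocessing sub-cubic and space sub-quadratic.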
Export
BibTeX
@techreport{BaswanaTelikepalli2005,
TITLE = {Improved algorithms for all-pairs approximate shortest paths in weighted graphs},
AUTHOR = {Baswana, Surender and Telikepalli, Kavitha},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2005-1-003},
NUMBER = {MPI-I-2005-1-003},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2005},
DATE = {2005},
ABSTRACT = {The all-pairs approximate shortest-paths problem is an interesting variant of the classical all-pairs shortest-paths problem in graphs. The problem aims at building a data-structure for a given graph with the following two features. Firstly, for any two vertices, it should report an {\emph{approximate}} shortest path between them, that is, a path which is longer than the shortest path by some {\emph{small}} factor. Secondly, the data-structure should require less preprocessing time (strictly sub-cubic) and occupy optimal space (sub-quadratic), at the cost of this approximation. In this paper, we present algorithms for computing all-pairs approximate shortest paths in a weighted undirected graph. These algorithms significantly improve the existing results for this problem.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Baswana, Surender
%A Telikepalli, Kavitha
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
External Organizations
%T Improved algorithms for all-pairs approximate shortest paths in weighted graphs :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6854-9
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2005-1-003
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2005
%P 26 p.
%X The all-pairs approximate shortest-paths problem is an interesting
variant of the classical all-pairs shortest-paths problem in graphs.
The problem aims at building a data-structure for a given graph
with the following two features. Firstly, for any two vertices,
it should report an {\emph{approximate}} shortest path between them,
that is, a path which is longer than the shortest path
by some {\emph{small}} factor. Secondly, the data-structure should require
less preprocessing time (strictly sub-cubic) and occupy optimal space
(sub-quadratic), at the cost of this approximation.
In this paper, we present algorithms for computing all-pairs approximate
shortest paths in a weighted undirected graph. These algorithms significantly
improve the existing results for this problem.
%B Research Report / Max-Planck-Institut für Informatik
STXXL: Standard Template Library for XXL Data Sets
R. Dementiev, L. Kettner and P. Sanders
Technical Report, 2005
R. Dementiev, L. Kettner and P. Sanders
Technical Report, 2005
Export
BibTeX
@techreport{Kettner2005StxxlReport,
TITLE = {{STXXL}: Standard Template Library for {XXL} Data Sets},
AUTHOR = {Dementiev, Roman and Kettner, Lutz and Sanders, Peter},
LANGUAGE = {eng},
NUMBER = {2005/18},
INSTITUTION = {Fakult{\"a}t f{\"u}r Informatik, University of Karlsruhe},
ADDRESS = {Karlsruhe, Germany},
YEAR = {2005},
DATE = {2005},
}
Endnote
%0 Report
%A Dementiev, Roman
%A Kettner, Lutz
%A Sanders, Peter
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T STXXL: Standard Template Library for XXL Data Sets :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0019-E689-4
%Y Fakultät für Informatik, University of Karlsruhe
%C Karlsruhe, Germany
%D 2005
An empirical model for heterogeneous translucent objects
C. Fuchs, M. Gösele, T. Chen and H.-P. Seidel
Technical Report, 2005
C. Fuchs, M. Gösele, T. Chen and H.-P. Seidel
Technical Report, 2005
Abstract
We introduce an empirical model for multiple scattering in heterogeneous
translucent objects for which classical approximations such as the
dipole approximation to the diffusion equation are no longer valid.
Motivated by the exponential fall-off of scattered intensity with
distance, diffuse subsurface scattering is represented as a sum of
exponentials per surface point plus a modulation texture. Modeling
quality can be improved by using an anisotropic model where exponential
parameters are determined per surface location and scattering direction.
We validate the scattering model for a set of planar object samples
which were recorded under controlled conditions and quantify the
modeling error. Furthermore, several translucent objects with complex
geometry are captured and compared to the real object under similar
illumination conditions.
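The exponential fall-off that motivates the model is easy to fit per measurement. The sketch below is a toy illustration (synthetic noiseless data, a single exponential term, hypothetical variable names): since log I = log a - s·d is linear in distance, a least-squares line fit recovers both parameters.

```python
import numpy as np

# Synthetic diffuse fall-off: scattered intensity decays exponentially
# with distance from the illuminated point.
d = np.linspace(0.1, 2.0, 50)
a_true, s_true = 0.8, 3.0
intensity = a_true * np.exp(-s_true * d)

# Log-linear least squares recovers amplitude and decay rate of one term.
slope, intercept = np.polyfit(d, np.log(intensity), 1)
a_est, s_est = np.exp(intercept), -slope
```

A sum of several exponentials, as in the model above, would require a nonlinear fit, but each term has this same two-parameter form.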
Export
BibTeX
@techreport{FuchsGoeseleChenSeidel,
TITLE = {An empirical model for heterogeneous translucent objects},
AUTHOR = {Fuchs, Christian and G{\"o}sele, Michael and Chen, Tongbo and Seidel, Hans-Peter},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2005-4-006},
NUMBER = {MPI-I-2005-4-006},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2005},
DATE = {2005},
ABSTRACT = {We introduce an empirical model for multiple scattering in heterogeneous translucent objects for which classical approximations such as the dipole approximation to the diffusion equation are no longer valid. Motivated by the exponential fall-off of scattered intensity with distance, diffuse subsurface scattering is represented as a sum of exponentials per surface point plus a modulation texture. Modeling quality can be improved by using an anisotropic model where exponential parameters are determined per surface location and scattering direction. We validate the scattering model for a set of planar object samples which were recorded under controlled conditions and quantify the modeling error. Furthermore, several translucent objects with complex geometry are captured and compared to the real object under similar illumination conditions.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Fuchs, Christian
%A Gösele, Michael
%A Chen, Tongbo
%A Seidel, Hans-Peter
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T An empirical model for heterogeneous translucent objects :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-682F-0
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2005-4-006
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2005
%P 20 p.
%X We introduce an empirical model for multiple scattering in heterogeneous
translucent objects for which classical approximations such as the
dipole approximation to the diffusion equation are no longer valid.
Motivated by the exponential fall-off of scattered intensity with
distance, diffuse subsurface scattering is represented as a sum of
exponentials per surface point plus a modulation texture. Modeling
quality can be improved by using an anisotropic model where exponential
parameters are determined per surface location and scattering direction.
We validate the scattering model for a set of planar object samples
which were recorded under controlled conditions and quantify the
modeling error. Furthermore, several translucent objects with complex
geometry are captured and compared to the real object under similar
illumination conditions.
%B Research Report / Max-Planck-Institut für Informatik
Reflectance from images: a model-based approach for human faces
M. Fuchs, V. Blanz, H. P. A. Lensch and H.-P. Seidel
Technical Report, 2005
M. Fuchs, V. Blanz, H. P. A. Lensch and H.-P. Seidel
Technical Report, 2005
Abstract
In this paper, we present an image-based framework that acquires the
reflectance properties of a human face. A range scan of the face is not
required.
Based on a morphable face model, the system estimates the 3D
shape, and establishes point-to-point correspondence across images taken
from different viewpoints, and across different individuals' faces.
This provides a common parameterization of all reconstructed surfaces
that can be used to compare and transfer BRDF data between different
faces. Shape estimation from images compensates for deformations of the face
during the measurement process, such as facial expressions.
In the common parameterization, regions of homogeneous materials on the
face surface can be defined a-priori. We apply analytical BRDF models to
express the reflectance properties of each region, and we estimate their
parameters in a least-squares fit from the image data. For each of the
surface points, the diffuse component of the BRDF is locally refined,
which provides high detail.
We present results for multiple analytical BRDF models, rendered at
novel orientations and lighting conditions.
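For the simplest analytical model, the per-region least-squares fit described above reduces to linear regression. The sketch below is a toy illustration (a Lambertian region with known normals and light direction; all names and the single-parameter setup are assumptions): the diffuse albedo is recovered from image intensities.

```python
import numpy as np

rng = np.random.default_rng(0)
light = np.array([0.0, 0.0, 1.0])           # known light direction

# Surface normals of sample points in one homogeneous region.
normals = rng.normal(size=(100, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
shading = np.clip(normals @ light, 0.0, None)  # Lambertian n.l term

rho_true = 0.6                               # diffuse albedo to recover
intensities = rho_true * shading

# Least-squares fit of the region's single Lambertian parameter.
rho_est, *_ = np.linalg.lstsq(shading[:, None], intensities, rcond=None)
rho_est = float(rho_est[0])
```

Richer analytical BRDF models add specular parameters to the fit, but each region's parameters are still estimated from the same kind of overdetermined linear or nonlinear system.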
Export
BibTeX
@techreport{FuchsBlanzLenschSeidel2005,
TITLE = {Reflectance from images: a model-based approach for human faces},
AUTHOR = {Fuchs, Martin and Blanz, Volker and Lensch, Hendrik P. A. and Seidel, Hans-Peter},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2005-4-001},
NUMBER = {MPI-I-2005-4-001},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2005},
DATE = {2005},
ABSTRACT = {In this paper, we present an image-based framework that acquires the reflectance properties of a human face. A range scan of the face is not required. Based on a morphable face model, the system estimates the 3D shape, and establishes point-to-point correspondence across images taken from different viewpoints, and across different individuals' faces. This provides a common parameterization of all reconstructed surfaces that can be used to compare and transfer BRDF data between different faces. Shape estimation from images compensates for deformations of the face during the measurement process, such as facial expressions. In the common parameterization, regions of homogeneous materials on the face surface can be defined a-priori. We apply analytical BRDF models to express the reflectance properties of each region, and we estimate their parameters in a least-squares fit from the image data. For each of the surface points, the diffuse component of the BRDF is locally refined, which provides high detail. We present results for multiple analytical BRDF models, rendered at novel orientations and lighting conditions.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Fuchs, Martin
%A Blanz, Volker
%A Lensch, Hendrik P. A.
%A Seidel, Hans-Peter
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T Reflectance from images: a model-based approach for human faces :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-683F-C
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2005-4-001
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2005
%P 33 p.
%X In this paper, we present an image-based framework that acquires the
reflectance properties of a human face. A range scan of the face is not
required.
Based on a morphable face model, the system estimates the 3D
shape, and establishes point-to-point correspondence across images taken
from different viewpoints, and across different individuals' faces.
This provides a common parameterization of all reconstructed surfaces
that can be used to compare and transfer BRDF data between different
faces. Shape estimation from images compensates for deformations of the face
during the measurement process, such as facial expressions.
In the common parameterization, regions of homogeneous materials on the
face surface can be defined a-priori. We apply analytical BRDF models to
express the reflectance properties of each region, and we estimate their
parameters in a least-squares fit from the image data. For each of the
surface points, the diffuse component of the BRDF is locally refined,
which provides high detail.
We present results for multiple analytical BRDF models, rendered at
novel orientations and lighting conditions.
%B Research Report / Max-Planck-Institut für Informatik
Cycle bases of graphs and sampled manifolds
C. Gotsman, K. Kaligosi, K. Mehlhorn, D. Michail and E. Pyrga
Technical Report, 2005
C. Gotsman, K. Kaligosi, K. Mehlhorn, D. Michail and E. Pyrga
Technical Report, 2005
Abstract
Point samples of a surface in $\R^3$ are the dominant output of a
multitude of 3D scanning devices. The usefulness of these devices rests on
being able to extract properties of the surface from the sample. We show that, under
certain sampling conditions, the minimum cycle basis of a nearest neighbor graph of
the sample encodes topological information about the surface and yields bases for the
trivial and non-trivial loops of the surface. We validate our results by experiments.
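The topological claim can be tried out directly with networkx (an illustrative sketch; the sampling density and the choice of two nearest neighbours are assumptions): sampling a circle, a closed 1-manifold, and building its nearest-neighbour graph yields a minimum cycle basis with exactly one cycle, the circle's non-trivial loop.

```python
import numpy as np
import networkx as nx

# Sample a circle densely and uniformly.
n = 12
t = 2 * np.pi * np.arange(n) / n
pts = np.c_[np.cos(t), np.sin(t)]

# Build the 2-nearest-neighbour graph with Euclidean edge weights.
G = nx.Graph()
for i in range(n):
    d = np.linalg.norm(pts - pts[i], axis=1)
    for j in np.argsort(d)[1:3]:          # the two nearest neighbours
        G.add_edge(i, int(j), weight=float(d[j]))

# Minimum cycle basis: one independent cycle through all samples,
# reflecting the single non-trivial loop of the circle.
basis = nx.minimum_cycle_basis(G)
```

For surfaces in R^3 the report's sampling conditions play the role that the regular spacing plays here: they guarantee the nearest-neighbour graph captures the surface's loops.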
Export
BibTeX
@techreport{GotsmanKaligosiMehlhornMichailPyrga2005,
TITLE = {Cycle bases of graphs and sampled manifolds},
AUTHOR = {Gotsman, Craig and Kaligosi, Kanela and Mehlhorn, Kurt and Michail, Dimitrios and Pyrga, Evangelia},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2005-1-008},
NUMBER = {MPI-I-2005-1-008},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2005},
DATE = {2005},
ABSTRACT = {Point samples of a surface in $\R^3$ are the dominant output of a multitude of 3D scanning devices. The usefulness of these devices rests on being able to extract properties of the surface from the sample. We show that, under certain sampling conditions, the minimum cycle basis of a nearest neighbor graph of the sample encodes topological information about the surface and yields bases for the trivial and non-trivial loops of the surface. We validate our results by experiments.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Gotsman, Craig
%A Kaligosi, Kanela
%A Mehlhorn, Kurt
%A Michail, Dimitrios
%A Pyrga, Evangelia
%+ External Organizations
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Cycle bases of graphs and sampled manifolds :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-684C-E
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2005-1-008
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2005
%P 30 p.
%X Point samples of a surface in $\R^3$ are the dominant output of a
multitude of 3D scanning devices. The usefulness of these devices rests on
being able to extract properties of the surface from the sample. We show that, under
certain sampling conditions, the minimum cycle basis of a nearest neighbor graph of
the sample encodes topological information about the surface and yields bases for the
trivial and non-trivial loops of the surface. We validate our results by experiments.
%B Research Report / Max-Planck-Institut für Informatik
Reachability substitutes for planar digraphs
I. Katriel, M. Kutz and M. Skutella
Technical Report, 2005
I. Katriel, M. Kutz and M. Skutella
Technical Report, 2005
Abstract
Given a digraph $G = (V,E)$ with a set $U$ of vertices marked
``interesting,'' we want to find a smaller digraph $\RS{} = (V',E')$
with $V' \supseteq U$ in such a way that the reachabilities amongst
those interesting vertices in $G$ and \RS{} are the same. So with
respect to the reachability relations within $U$, the digraph \RS{}
is a substitute for $G$.
We show that while almost all graphs do not have reachability
substitutes smaller than $\Omega(|U|^2/\log |U|)$, every planar
graph has a reachability substitute of size $O(|U| \log^2 |U|)$.
Our result rests on two new structural results for planar
dags, a separation procedure and a reachability theorem, which
might be of independent interest.
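The defining property is easy to state as a check. The sketch below is illustrative (tiny hand-built digraphs, hypothetical names): it verifies that a candidate digraph H is a substitute for G by comparing reachability over every ordered pair of interesting vertices.

```python
def reach(adj, s):
    """Set of vertices reachable from s, by DFS over an adjacency dict."""
    seen, stack = {s}, [s]
    while stack:
        u = stack.pop()
        for v in adj.get(u, ()):
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

def is_substitute(G, H, U):
    """H is a reachability substitute for G w.r.t. U iff every ordered
    pair of interesting vertices agrees on reachability in G and H."""
    return all((v in reach(G, u)) == (v in reach(H, u))
               for u in U for v in U if u != v)

# G routes a -> b through an uninteresting vertex x; H shortcuts it,
# keeping the same reachability within U = {a, b} with fewer vertices.
G = {"a": ["x"], "x": ["b"], "b": []}
H = {"a": ["b"], "b": []}
```

The report's contribution is constructive: it shows how small such an H can always be made when G is planar.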
Export
BibTeX
@techreport{KatrielKutzSkutella2005,
TITLE = {Reachability substitutes for planar digraphs},
AUTHOR = {Katriel, Irit and Kutz, Martin and Skutella, Martin},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2005-1-002},
NUMBER = {MPI-I-2005-1-002},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2005},
DATE = {2005},
ABSTRACT = {Given a digraph $G = (V,E)$ with a set $U$ of vertices marked ``interesting,'' we want to find a smaller digraph $\RS{} = (V',E')$ with $V' \supseteq U$ in such a way that the reachabilities amongst those interesting vertices in $G$ and \RS{} are the same. So with respect to the reachability relations within $U$, the digraph \RS{} is a substitute for $G$. We show that while almost all graphs do not have reachability substitutes smaller than $\Omega(|U|^2/\log |U|)$, every planar graph has a reachability substitute of size $O(|U| \log^2 |U|)$. Our result rests on two new structural results for planar dags, a separation procedure and a reachability theorem, which might be of independent interest.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Katriel, Irit
%A Kutz, Martin
%A Skutella, Martin
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Reachability substitutes for planar digraphs :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6859-0
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2005-1-002
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2005
%P 24 p.
%X Given a digraph $G = (V,E)$ with a set $U$ of vertices marked
``interesting,'' we want to find a smaller digraph $\RS{} = (V',E')$
with $V' \supseteq U$ in such a way that the reachabilities amongst
those interesting vertices in $G$ and \RS{} are the same. So with
respect to the reachability relations within $U$, the digraph \RS{}
is a substitute for $G$.
We show that while almost all graphs do not have reachability
substitutes smaller than $\Omega(|U|^2/\log |U|)$, every planar
graph has a reachability substitute of size $O(|U| \log^2 |U|)$.
Our result rests on two new structural results for planar
dags, a separation procedure and a reachability theorem, which
might be of independent interest.
%B Research Report / Max-Planck-Institut für Informatik
A faster algorithm for computing a longest common increasing subsequence
I. Katriel and M. Kutz
Technical Report, 2005
I. Katriel and M. Kutz
Technical Report, 2005
Abstract
Let $A=\langle a_1,\dots,a_n\rangle$ and
$B=\langle b_1,\dots,b_m \rangle$ be two sequences with $m \ge n$,
whose elements are drawn from a totally ordered set.
We present an algorithm that finds a longest
common increasing subsequence of $A$ and $B$ in $O(m\log m+n\ell\log n)$
time and $O(m + n\ell)$ space, where $\ell$ is the length of the output.
A previous algorithm by Yang et al. needs $\Theta(mn)$ time and space,
so ours is faster for a wide range of values of $m,n$ and $\ell$.
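For reference, the $\Theta(mn)$-time dynamic program in the spirit of the Yang et al. baseline (a sketch of the prior approach, not the report's faster algorithm) can be written as follows: for each element of $A$, a single left-to-right sweep over $B$ tracks the best extendable subsequence.

```python
def lcis_length(A, B):
    """Theta(m*n)-time DP for the length of a longest common
    increasing subsequence of A and B."""
    dp = [0] * len(B)          # dp[j]: best LCIS ending at B[j]
    for a in A:
        best = 0               # best LCIS over B[0..j-1] ending below a
        for j, b in enumerate(B):
            if b == a:
                dp[j] = max(dp[j], best + 1)
            elif b < a:
                best = max(best, dp[j])
    return max(dp, default=0)
```

For example, `lcis_length([3, 1, 2, 4], [1, 2, 3, 4])` finds the subsequence 1, 2, 4 of length 3. The report's algorithm beats this baseline whenever the output length $\ell$ is small relative to $n$.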
Export
BibTeX
@techreport{KatrielKutz2005,
TITLE = {A faster algorithm for computing a longest common increasing subsequence},
AUTHOR = {Katriel, Irit and Kutz, Martin},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2005-1-007},
NUMBER = {MPI-I-2005-1-007},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2005},
DATE = {2005},
ABSTRACT = {Let $A=\langle a_1,\dots,a_n\rangle$ and $B=\langle b_1,\dots,b_m \rangle$ be two sequences with $m \ge n$, whose elements are drawn from a totally ordered set. We present an algorithm that finds a longest common increasing subsequence of $A$ and $B$ in $O(m\log m+n\ell\log n)$ time and $O(m + n\ell)$ space, where $\ell$ is the length of the output. A previous algorithm by Yang et al. needs $\Theta(mn)$ time and space, so ours is faster for a wide range of values of $m,n$ and $\ell$.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Katriel, Irit
%A Kutz, Martin
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T A faster algorithm for computing a longest common increasing subsequence :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-684F-8
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2005-1-007
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2005
%P 13 p.
%X Let $A=\langle a_1,\dots,a_n\rangle$ and
$B=\langle b_1,\dots,b_m \rangle$ be two sequences with $m \ge n$,
whose elements are drawn from a totally ordered set.
We present an algorithm that finds a longest
common increasing subsequence of $A$ and $B$ in $O(m\log m+n\ell\log n)$
time and $O(m + n\ell)$ space, where $\ell$ is the length of the output.
A previous algorithm by Yang et al. needs $\Theta(mn)$ time and space,
so ours is faster for a wide range of values of $m,n$ and $\ell$.
%B Research Report / Max-Planck-Institut für Informatik
Photometric calibration of high dynamic range cameras
G. Krawczyk, M. Gösele and H.-P. Seidel
Technical Report, 2005
G. Krawczyk, M. Gösele and H.-P. Seidel
Technical Report, 2005
Export
BibTeX
@techreport{KrawczykGoeseleSeidel2005,
TITLE = {Photometric calibration of high dynamic range cameras},
AUTHOR = {Krawczyk, Grzegorz and G{\"o}sele, Michael and Seidel, Hans-Peter},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2005-4-005},
NUMBER = {MPI-I-2005-4-005},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2005},
DATE = {2005},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Krawczyk, Grzegorz
%A Gösele, Michael
%A Seidel, Hans-Peter
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T Photometric calibration of high dynamic range cameras :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6834-2
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2005-4-005
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2005
%P 21 p.
%B Research Report / Max-Planck-Institut für Informatik
Analysis and design of discrete normals and curvatures
T. Langer, A. Belyaev and H.-P. Seidel
Technical Report, 2005
T. Langer, A. Belyaev and H.-P. Seidel
Technical Report, 2005
Abstract
Accurate estimations of geometric properties of a surface (a curve) from
its discrete approximation are important for many computer graphics and
computer vision applications.
To assess and improve the quality of such an approximation we assume
that the
smooth surface (curve) is known in general form. Then we can represent the
surface (curve) by a Taylor series expansion
and compare its geometric properties with the corresponding discrete
approximations. In turn
we can either prove convergence of these approximations towards the true
properties
as the edge lengths tend to zero, or we can get hints how
to eliminate the error.
In this report we propose and study discrete schemes for estimating
the curvature and torsion of a smooth 3D curve approximated by a polyline.
Thereby we make some interesting findings about connections between
(smooth) classical curves
and certain estimation schemes for polylines.
Furthermore, we consider several popular schemes for estimating the
surface normal
of a dense triangle mesh interpolating a smooth surface,
and analyze their asymptotic properties.
Special attention is paid to the mean curvature vector, which
approximates both normal direction and mean curvature. We evaluate a common discrete
approximation and
show how asymptotic analysis can be used to improve it.
It turns out that the integral formulation of the mean curvature
\begin{equation*}
H = \frac{1}{2 \pi} \int_0^{2 \pi} \kappa(\phi) d\phi,
\end{equation*}
can be computed by an exact quadrature formula.
The same is true for the integral formulations of Gaussian curvature and
the Taubin tensor.
The exact quadratures are then used to obtain reliable estimates
of the curvature tensor of a smooth surface approximated by a dense triangle
mesh. The proposed method is fast and often demonstrates a better
performance
than conventional curvature tensor estimation approaches. We also show
that the curvature tensor approximated by
our approach converges towards the true curvature tensor as the edge
lengths tend to zero.
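The exactness of such quadratures is easy to check numerically (an illustrative check, not the report's mesh-based estimator): by Euler's theorem the normal curvature is $\kappa(\phi) = \kappa_1 \cos^2\phi + \kappa_2 \sin^2\phi$, a trigonometric polynomial of degree 2, so averaging it over any $N \ge 3$ equally spaced directions reproduces the integral for $H$ exactly.

```python
import numpy as np

k1, k2 = 2.0, 0.5                    # principal curvatures at a point
H = 0.5 * (k1 + k2)                  # mean curvature

def kappa(phi):
    """Normal curvature in tangent direction phi (Euler's theorem)."""
    return k1 * np.cos(phi) ** 2 + k2 * np.sin(phi) ** 2

# Three equally spaced directions already integrate a degree-2
# trigonometric polynomial exactly, so the discrete average equals H.
phi = 2 * np.pi * np.arange(3) / 3
H_quad = kappa(phi).mean()
```

On a triangle mesh the available directions are the edges at a vertex, which are not equally spaced; the exact quadratures mentioned above account for this.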
Export
BibTeX
@techreport{LangerBelyaevSeidel2005,
TITLE = {Analysis and design of discrete normals and curvatures},
AUTHOR = {Langer, Torsten and Belyaev, Alexander and Seidel, Hans-Peter},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2005-4-003},
NUMBER = {MPI-I-2005-4-003},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2005},
DATE = {2005},
ABSTRACT = {Accurate estimations of geometric properties of a surface (a curve) from its discrete approximation are important for many computer graphics and computer vision applications. To assess and improve the quality of such an approximation we assume that the smooth surface (curve) is known in general form. Then we can represent the surface (curve) by a Taylor series expansion and compare its geometric properties with the corresponding discrete approximations. In turn we can either prove convergence of these approximations towards the true properties as the edge lengths tend to zero, or we can get hints how to eliminate the error. In this report we propose and study discrete schemes for estimating the curvature and torsion of a smooth 3D curve approximated by a polyline. Thereby we make some interesting findings about connections between (smooth) classical curves and certain estimation schemes for polylines. Furthermore, we consider several popular schemes for estimating the surface normal of a dense triangle mesh interpolating a smooth surface, and analyze their asymptotic properties. Special attention is paid to the mean curvature vector, which approximates both normal direction and mean curvature. We evaluate a common discrete approximation and show how asymptotic analysis can be used to improve it. It turns out that the integral formulation of the mean curvature \begin{equation*} H = \frac{1}{2 \pi} \int_0^{2 \pi} \kappa(\phi) d\phi, \end{equation*} can be computed by an exact quadrature formula. The same is true for the integral formulations of Gaussian curvature and the Taubin tensor. The exact quadratures are then used to obtain reliable estimates of the curvature tensor of a smooth surface approximated by a dense triangle mesh. The proposed method is fast and often demonstrates a better performance than conventional curvature tensor estimation approaches. We also show that the curvature tensor approximated by our approach converges towards the true curvature tensor as the edge lengths tend to zero.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Langer, Torsten
%A Belyaev, Alexander
%A Seidel, Hans-Peter
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T Analysis and design of discrete normals and curvatures :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6837-B
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2005-4-003
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2005
%P 42 p.
%X Accurate estimations of geometric properties of a surface (a curve) from
its discrete approximation are important for many computer graphics and
computer vision applications.
To assess and improve the quality of such an approximation we assume
that the
smooth surface (curve) is known in general form. Then we can represent the
surface (curve) by a Taylor series expansion
and compare its geometric properties with the corresponding discrete
approximations. In turn
we can either prove convergence of these approximations towards the true
properties
as the edge lengths tend to zero, or we can get hints how
to eliminate the error.
In this report we propose and study discrete schemes for estimating
the curvature and torsion of a smooth 3D curve approximated by a polyline.
Thereby we make some interesting findings about connections between
(smooth) classical curves
and certain estimation schemes for polylines.
Furthermore, we consider several popular schemes for estimating the
surface normal
of a dense triangle mesh interpolating a smooth surface,
and analyze their asymptotic properties.
Special attention is paid to the mean curvature vector, which
approximates both normal direction and mean curvature. We evaluate a common discrete
approximation and
show how asymptotic analysis can be used to improve it.
It turns out that the integral formulation of the mean curvature
\begin{equation*}
H = \frac{1}{2 \pi} \int_0^{2 \pi} \kappa(\phi) d\phi,
\end{equation*}
can be computed by an exact quadrature formula.
The same is true for the integral formulations of Gaussian curvature and
the Taubin tensor.
The exact quadratures are then used to obtain reliable estimates
of the curvature tensor of a smooth surface approximated by a dense triangle
mesh. The proposed method is fast and often demonstrates a better
performance
than conventional curvature tensor estimation approaches. We also show
that the curvature tensor approximated by
our approach converges towards the true curvature tensor as the edge
lengths tend to zero.
%B Research Report / Max-Planck-Institut für Informatik
Rank-maximal through maximum weight matchings
D. Michail
Technical Report, 2005
D. Michail
Technical Report, 2005
Abstract
Given a bipartite graph $G( V, E)$, $ V = A \disjointcup B$
where $|V|=n, |E|=m$ and a partition of the edge set into
$r \le m$ disjoint subsets $E = E_1 \disjointcup E_2
\disjointcup \dots \disjointcup E_r$, which are called ranks,
the {\em rank-maximal matching} problem is to find a matching $M$
of $G$ such that $|M \cap E_1|$ is maximized and given that
$|M \cap E_2|$, and so on. Such a problem arises as an optimization
criteria over a possible assignment of a set of applicants to a
set of posts. The matching represents the assignment and the
ranks on the edges correspond to a ranking on the posts submitted
by the applicants.
The rank-maximal matching problem has been previously
studied where a $O( r \sqrt n m )$ time and linear
space algorithm~\cite{IKMMP} was
presented. In this paper we present a new simpler algorithm which
matches the running time and space complexity of the above
algorithm.
The new algorithm is based on a different approach,
by exploiting that the rank-maximal matching problem can
be reduced to a maximum weight matching problem where the
weight of an edge of rank $i$ is $2^{ \ceil{\log n} (r-i)}$.
By exploiting that these edge weights are steeply distributed
we design a scaling algorithm which scales by a factor of
$n$ in each phase. We also show that in each phase one
maximum cardinality computation is sufficient to get a new
optimal solution.
This algorithm answers an open question raised on the same
paper on whether the reduction to the maximum-weight matching
problem can help us derive an efficient algorithm.
Export
BibTeX
@techreport{,
TITLE = {Rank-maximal through maximum weight matchings},
AUTHOR = {Michail, Dimitrios},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2005-1-001},
NUMBER = {MPI-I-2005-1-001},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2005},
DATE = {2005},
ABSTRACT = {Given a bipartite graph $G( V, E)$, $ V = A \disjointcup B$ where $|V|=n, |E|=m$ and a partition of the edge set into $r \le m$ disjoint subsets $E = E_1 \disjointcup E_2 \disjointcup \dots \disjointcup E_r$, which are called ranks, the {\em rank-maximal matching} problem is to find a matching $M$ of $G$ such that $|M \cap E_1|$ is maximized and given that $|M \cap E_2|$, and so on. Such a problem arises as an optimization criteria over a possible assignment of a set of applicants to a set of posts. The matching represents the assignment and the ranks on the edges correspond to a ranking on the posts submitted by the applicants. The rank-maximal matching problem has been previously studied where a $O( r \sqrt n m )$ time and linear space algorithm~\cite{IKMMP} was presented. In this paper we present a new simpler algorithm which matches the running time and space complexity of the above algorithm. The new algorithm is based on a different approach, by exploiting that the rank-maximal matching problem can be reduced to a maximum weight matching problem where the weight of an edge of rank $i$ is $2^{ \ceil{\log n} (r-i)}$. By exploiting that these edge weights are steeply distributed we design a scaling algorithm which scales by a factor of $n$ in each phase. We also show that in each phase one maximum cardinality computation is sufficient to get a new optimal solution. This algorithm answers an open question raised on the same paper on whether the reduction to the maximum-weight matching problem can help us derive an efficient algorithm.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Michail, Dimitrios
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Rank-maximal through maximum weight matchings :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-685C-A
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2005-1-001
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2005
%P 22 p.
%X Given a bipartite graph $G( V, E)$, $ V = A \disjointcup B$
where $|V|=n, |E|=m$ and a partition of the edge set into
$r \le m$ disjoint subsets $E = E_1 \disjointcup E_2
\disjointcup \dots \disjointcup E_r$, which are called ranks,
the {\em rank-maximal matching} problem is to find a matching $M$
of $G$ such that $|M \cap E_1|$ is maximized and given that
$|M \cap E_2|$, and so on. Such a problem arises as an optimization
criteria over a possible assignment of a set of applicants to a
set of posts. The matching represents the assignment and the
ranks on the edges correspond to a ranking on the posts submitted
by the applicants.
The rank-maximal matching problem has been previously
studied where a $O( r \sqrt n m )$ time and linear
space algorithm~\cite{IKMMP} was
presented. In this paper we present a new simpler algorithm which
matches the running time and space complexity of the above
algorithm.
The new algorithm is based on a different approach,
by exploiting that the rank-maximal matching problem can
be reduced to a maximum weight matching problem where the
weight of an edge of rank $i$ is $2^{ \ceil{\log n} (r-i)}$.
By exploiting that these edge weights are steeply distributed
we design a scaling algorithm which scales by a factor of
$n$ in each phase. We also show that in each phase one
maximum cardinality computation is sufficient to get a new
optimal solution.
This algorithm answers an open question raised on the same
paper on whether the reduction to the maximum-weight matching
problem can help us derive an efficient algorithm.
%B Research Report / Max-Planck-Institut für Informatik
Sparse meshing of uncertain and noisy surface scattered data
O. Schall, A. Belyaev and H.-P. Seidel
Technical Report, 2005
O. Schall, A. Belyaev and H.-P. Seidel
Technical Report, 2005
Abstract
In this paper, we develop a method for generating
a high-quality approximation of a noisy set of points sampled
from a smooth surface by a sparse triangle mesh. The main
idea of the method consists of defining an appropriate set
of approximation centers and use them as the vertices
of a mesh approximating given scattered data.
To choose the approximation centers, a clustering
procedure is used. With every point of the input data
we associate a local uncertainty
measure which is used to estimate the importance of
the point contribution to the reconstructed surface.
Then a global uncertainty measure is constructed from local ones.
The approximation centers are chosen as the points where
the global uncertainty measure attains its local minima.
It allows us to achieve a high-quality approximation of uncertain and
noisy point data by a sparse mesh. An interesting feature of our
approach
is that the uncertainty measures take into account the normal
directions
estimated at the scattered points.
In particular it results in accurate reconstruction of high-curvature
regions.
Export
BibTeX
@techreport{SchallBelyaevSeidel2005,
TITLE = {Sparse meshing of uncertain and noisy surface scattered data},
AUTHOR = {Schall, Oliver and Belyaev, Alexander and Seidel, Hans-Peter},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2005-4-002},
NUMBER = {MPI-I-2005-4-002},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2005},
DATE = {2005},
ABSTRACT = {In this paper, we develop a method for generating a high-quality approximation of a noisy set of points sampled from a smooth surface by a sparse triangle mesh. The main idea of the method consists of defining an appropriate set of approximation centers and use them as the vertices of a mesh approximating given scattered data. To choose the approximation centers, a clustering procedure is used. With every point of the input data we associate a local uncertainty measure which is used to estimate the importance of the point contribution to the reconstructed surface. Then a global uncertainty measure is constructed from local ones. The approximation centers are chosen as the points where the global uncertainty measure attains its local minima. It allows us to achieve a high-quality approximation of uncertain and noisy point data by a sparse mesh. An interesting feature of our approach is that the uncertainty measures take into account the normal directions estimated at the scattered points. In particular it results in accurate reconstruction of high-curvature regions.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Schall, Oliver
%A Belyaev, Alexander
%A Seidel, Hans-Peter
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T Sparse meshing of uncertain and noisy surface scattered data :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-683C-1
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2005-4-002
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2005
%P 20 p.
%X In this paper, we develop a method for generating
a high-quality approximation of a noisy set of points sampled
from a smooth surface by a sparse triangle mesh. The main
idea of the method consists of defining an appropriate set
of approximation centers and use them as the vertices
of a mesh approximating given scattered data.
To choose the approximation centers, a clustering
procedure is used. With every point of the input data
we associate a local uncertainty
measure which is used to estimate the importance of
the point contribution to the reconstructed surface.
Then a global uncertainty measure is constructed from local ones.
The approximation centers are chosen as the points where
the global uncertainty measure attains its local minima.
It allows us to achieve a high-quality approximation of uncertain and
noisy point data by a sparse mesh. An interesting feature of our
approach
is that the uncertainty measures take into account the normal
directions
estimated at the scattered points.
In particular it results in accurate reconstruction of high-curvature
regions.
%B Research Report / Max-Planck-Institut für Informatik
Automated retraining methods for document classification and their parameter tuning
S. Siersdorfer and G. Weikum
Technical Report, 2005
S. Siersdorfer and G. Weikum
Technical Report, 2005
Abstract
This paper addresses the problem of semi-supervised classification on
document collections using retraining (also called self-training). A
possible application is focused Web
crawling which may start with very few, manually selected, training
documents
but can be enhanced by automatically adding initially unlabeled,
positively classified Web pages for retraining.
Such an approach is by itself not robust and faces tuning problems
regarding parameters
like the number of selected documents, the number of retraining
iterations, and the ratio of positive
and negative classified samples used for retraining.
The paper develops methods for automatically tuning these parameters,
based on
predicting the leave-one-out error for a re-trained classifier and
avoiding that the classifier is diluted by selecting too many or weak
documents for retraining.
Our experiments
with three different datasets
confirm the practical viability of the approach.
Export
BibTeX
@techreport{SiersdorferWeikum2005,
TITLE = {Automated retraining methods for document classification and their parameter tuning},
AUTHOR = {Siersdorfer, Stefan and Weikum, Gerhard},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2005-5-002},
NUMBER = {MPI-I-2005-5-002},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2005},
DATE = {2005},
ABSTRACT = {This paper addresses the problem of semi-supervised classification on document collections using retraining (also called self-training). A possible application is focused Web crawling which may start with very few, manually selected, training documents but can be enhanced by automatically adding initially unlabeled, positively classified Web pages for retraining. Such an approach is by itself not robust and faces tuning problems regarding parameters like the number of selected documents, the number of retraining iterations, and the ratio of positive and negative classified samples used for retraining. The paper develops methods for automatically tuning these parameters, based on predicting the leave-one-out error for a re-trained classifier and avoiding that the classifier is diluted by selecting too many or weak documents for retraining. Our experiments with three different datasets confirm the practical viability of the approach.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Siersdorfer, Stefan
%A Weikum, Gerhard
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
%T Automated retraining methods for document classification and their parameter tuning :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6823-8
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2005-5-002
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2005
%P 23 p.
%X This paper addresses the problem of semi-supervised classification on
document collections using retraining (also called self-training). A
possible application is focused Web
crawling which may start with very few, manually selected, training
documents
but can be enhanced by automatically adding initially unlabeled,
positively classified Web pages for retraining.
Such an approach is by itself not robust and faces tuning problems
regarding parameters
like the number of selected documents, the number of retraining
iterations, and the ratio of positive
and negative classified samples used for retraining.
The paper develops methods for automatically tuning these parameters,
based on
predicting the leave-one-out error for a re-trained classifier and
avoiding that the classifier is diluted by selecting too many or weak
documents for retraining.
Our experiments
with three different datasets
confirm the practical viability of the approach.
%B Research Report / Max-Planck-Institut für Informatik
Joint Motion and Reflectance Capture for Creating Relightable 3D Videos
C. Theobalt, N. Ahmed, E. de Aguiar, G. Ziegler, H. Lensch, M. Magnor and H.-P. Seidel
Technical Report, 2005
C. Theobalt, N. Ahmed, E. de Aguiar, G. Ziegler, H. Lensch, M. Magnor and H.-P. Seidel
Technical Report, 2005
Abstract
\begin{abstract}
Passive optical motion capture is able to provide
authentically animated, photo-realistically and view-dependently
textured models of real people.
To import real-world characters into virtual environments, however,
also surface reflectance properties must be known.
We describe a video-based modeling approach that captures human
motion as well as reflectance characteristics from a handful of
synchronized video recordings.
The presented method is able to recover spatially varying
reflectance properties of clothes % dynamic objects ?
by exploiting the time-varying orientation of each surface point
with respect to camera and light direction.
The resulting model description enables us to match animated subject
appearance to different lighting conditions, as well as to
interchange surface attributes among different people, e.g. for
virtual dressing.
Our contribution allows populating virtual worlds with correctly relit,
real-world people.\\
\end{abstract}
Export
BibTeX
@techreport{TheobaltTR2005,
TITLE = {Joint Motion and Reflectance Capture for Creating Relightable {3D} Videos},
AUTHOR = {Theobalt, Christian and Ahmed, Naveed and de Aguiar, Edilson and Ziegler, Gernot and Lensch, Hendrik and Magnor, Marcus and Seidel, Hans-Peter},
LANGUAGE = {eng},
NUMBER = {MPI-I-2005-4-004},
LOCALID = {Local-ID: C1256BDE005F57A8-5B757D3AA9584EEBC12570A7003C813D-TheobaltTR2005},
YEAR = {2005},
DATE = {2005},
ABSTRACT = {\begin{abstract} Passive optical motion capture is able to provide authentically animated, photo-realistically and view-dependently textured models of real people. To import real-world characters into virtual environments, however, also surface reflectance properties must be known. We describe a video-based modeling approach that captures human motion as well as reflectance characteristics from a handful of synchronized video recordings. The presented method is able to recover spatially varying reflectance properties of clothes % dynamic objects ? by exploiting the time-varying orientation of each surface point with respect to camera and light direction. The resulting model description enables us to match animated subject appearance to different lighting conditions, as well as to interchange surface attributes among different people, e.g. for virtual dressing. Our contribution allows populating virtual worlds with correctly relit, real-world people.\\ \end{abstract}},
}
Endnote
%0 Report
%A Theobalt, Christian
%A Ahmed, Naveed
%A de Aguiar, Edilson
%A Ziegler, Gernot
%A Lensch, Hendrik
%A Magnor, Marcus
%A Seidel, Hans-Peter
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Programming Logics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Graphics - Optics - Vision, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T Joint Motion and Reflectance Capture for Creating Relightable 3D Videos :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-2879-B
%F EDOC: 520731
%F OTHER: Local-ID: C1256BDE005F57A8-5B757D3AA9584EEBC12570A7003C813D-TheobaltTR2005
%D 2005
%X \begin{abstract}
Passive optical motion capture is able to provide
authentically animated, photo-realistically and view-dependently
textured models of real people.
To import real-world characters into virtual environments, however,
also surface reflectance properties must be known.
We describe a video-based modeling approach that captures human
motion as well as reflectance characteristics from a handful of
synchronized video recordings.
The presented method is able to recover spatially varying
reflectance properties of clothes % dynamic objects ?
by exploiting the time-varying orientation of each surface point
with respect to camera and light direction.
The resulting model description enables us to match animated subject
appearance to different lighting conditions, as well as to
interchange surface attributes among different people, e.g. for
virtual dressing.
Our contribution allows populating virtual worlds with correctly relit,
real-world people.\\
\end{abstract}
2004
Filtering algorithms for the Same and UsedBy constraints
N. Beldiceanu, I. Katriel and S. Thiel
Technical Report, 2004
N. Beldiceanu, I. Katriel and S. Thiel
Technical Report, 2004
Export
BibTeX
@techreport{,
TITLE = {Filtering algorithms for the Same and {UsedBy} constraints},
AUTHOR = {Beldiceanu, Nicolas and Katriel, Irit and Thiel, Sven},
LANGUAGE = {eng},
NUMBER = {MPI-I-2004-1-001},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2004},
DATE = {2004-01},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Beldiceanu, Nicolas
%A Katriel, Irit
%A Thiel, Sven
%+ Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Filtering algorithms for the Same and UsedBy constraints :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-290C-C
%F EDOC: 237881
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2004
%P 33 p.
%B Research Report
EXACUS : Efficient and Exact Algorithms for Curves and Surfaces
E. Berberich, A. Eigenwillig, M. Hemmer, S. Hert, L. Kettner, K. Mehlhorn, J. Reichel, S. Schmitt, E. Schömer, D. Weber and N. Wolpert
Technical Report, 2004
E. Berberich, A. Eigenwillig, M. Hemmer, S. Hert, L. Kettner, K. Mehlhorn, J. Reichel, S. Schmitt, E. Schömer, D. Weber and N. Wolpert
Technical Report, 2004
Export
BibTeX
@techreport{Berberich_ECG-TR-361200-02,
TITLE = {{EXACUS} : Efficient and Exact Algorithms for Curves and Surfaces},
AUTHOR = {Berberich, Eric and Eigenwillig, Arno and Hemmer, Michael and Hert, Susan and Kettner, Lutz and Mehlhorn, Kurt and Reichel, Joachim and Schmitt, Susanne and Sch{\"o}mer, Elmar and Weber, Dennis and Wolpert, Nicola},
LANGUAGE = {eng},
NUMBER = {ECG-TR-361200-02},
INSTITUTION = {INRIA},
ADDRESS = {Sophia Antipolis},
YEAR = {2004},
DATE = {2004},
TYPE = {ECG Technical Report},
EDITOR = {{Effective Computational Geometry for Curves and Surfaces}},
}
Endnote
%0 Report
%A Berberich, Eric
%A Eigenwillig, Arno
%A Hemmer, Michael
%A Hert, Susan
%A Kettner, Lutz
%A Mehlhorn, Kurt
%A Reichel, Joachim
%A Schmitt, Susanne
%A Schömer, Elmar
%A Weber, Dennis
%A Wolpert, Nicola
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T EXACUS : Efficient and Exact Algorithms for Curves and Surfaces :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-2B89-6
%F EDOC: 237751
%Y INRIA
%C Sophia Antipolis
%D 2004
%Z name of event: Untitled Event
%Z date of event: -
%Z place of event:
%P 8 p.
%B ECG Technical Report
An empirical comparison of software for constructing arrangements of curved arcs
E. Berberich, A. Eigenwillig, I. Emiris, E. Fogel, M. Hemmer, D. Halperin, A. Kakargias, L. Kettner, K. Mehlhorn, S. Pion, E. Schömer, M. Teillaud, R. Wein and N. Wolpert
Technical Report, 2004
E. Berberich, A. Eigenwillig, I. Emiris, E. Fogel, M. Hemmer, D. Halperin, A. Kakargias, L. Kettner, K. Mehlhorn, S. Pion, E. Schömer, M. Teillaud, R. Wein and N. Wolpert
Technical Report, 2004
Export
BibTeX
@techreport{Berberich_ECG-TR-361200-01,
TITLE = {An empirical comparison of software for constructing arrangements of curved arcs},
AUTHOR = {Berberich, Eric and Eigenwillig, Arno and Emiris, Ioannis and Fogel, Efraim and Hemmer, Michael and Halperin, Dan and Kakargias, Athanasios and Kettner, Lutz and Mehlhorn, Kurt and Pion, Sylvain and Sch{\"o}mer, Elmar and Teillaud, Monique and Wein, Ron and Wolpert, Nicola},
LANGUAGE = {eng},
NUMBER = {ECG-TR-361200-01},
INSTITUTION = {INRIA},
ADDRESS = {Sophia Antipolis},
YEAR = {2004},
DATE = {2004},
TYPE = {ECG Technical Report},
EDITOR = {{Effective Computational Geometry for Curves and Surfaces}},
}
Endnote
%0 Report
%A Berberich, Eric
%A Eigenwillig, Arno
%A Emiris, Ioannis
%A Fogel, Efraim
%A Hemmer, Michael
%A Halperin, Dan
%A Kakargias, Athanasios
%A Kettner, Lutz
%A Mehlhorn, Kurt
%A Pion, Sylvain
%A Schömer, Elmar
%A Teillaud, Monique
%A Wein, Ron
%A Wolpert, Nicola
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T An empirical comparison of software for constructing arrangements of curved arcs :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-2B87-A
%F EDOC: 237743
%Y INRIA
%C Sophia Antipolis
%D 2004
%Z name of event: Untitled Event
%Z date of event: -
%Z place of event:
%P 11 p.
%B ECG Technical Report
On the Hadwiger’s Conjecture for Graphs Products
L. S. Chandran and N. Sivadasan
Technical Report, 2004a
L. S. Chandran and N. Sivadasan
Technical Report, 2004a
Export
BibTeX
@techreport{TR2004,
TITLE = {On the {Hadwiger's} Conjecture for Graphs Products},
AUTHOR = {Chandran, L. Sunil and Sivadasan, N.},
LANGUAGE = {eng},
ISBN = {0946-011X},
NUMBER = {MPI-I-2004-1-006},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken, Germany},
YEAR = {2004},
DATE = {2004},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Chandran, L. Sunil
%A Sivadasan, N.
%+ Discrete Optimization, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T On the Hadwiger's Conjecture for Graphs Products :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-001A-0C8F-A
%@ 0946-011X
%Y Max-Planck-Institut für Informatik
%C Saarbrücken, Germany
%D 2004
%B Research Report
On the Hadwiger’s conjecture for graph products
L. S. Chandran and N. Sivadasan
Technical Report, 2004b
L. S. Chandran and N. Sivadasan
Technical Report, 2004b
Export
BibTeX
@techreport{,
TITLE = {On the Hadwiger's conjecture for graph products},
AUTHOR = {Chandran, L. Sunil and Sivadasan, Naveen},
LANGUAGE = {eng},
NUMBER = {MPI-I-2004-1-006},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2004},
DATE = {2004},
TYPE = {Max-Planck-Institut für Informatik <Saarbrücken>: Research Report},
EDITOR = {{Max-Planck-Institut f{\"u}r Informatik {\textless}Saarbr{\"u}cken{\textgreater}}},
}
Endnote
%0 Report
%A Chandran, L. Sunil
%A Sivadasan, Naveen
%+ Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T On the Hadwiger's conjecture for graph products :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-2BA6-4
%F EDOC: 241593
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2004
%P 10 p.
%B Max-Planck-Institut für Informatik <Saarbrücken>: Research Report
Faster ray tracing with SIMD shaft culling
K. Dmitriev, V. Havran and H.-P. Seidel
Technical Report, 2004
K. Dmitriev, V. Havran and H.-P. Seidel
Technical Report, 2004
Export
BibTeX
@techreport{,
TITLE = {Faster ray tracing with {SIMD} shaft culling},
AUTHOR = {Dmitriev, Kirill and Havran, Vlastimil and Seidel, Hans-Peter},
LANGUAGE = {eng},
NUMBER = {MPI-I-2004-4-006},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2004},
DATE = {2004-12},
TYPE = {Max-Planck-Institut für Informatik <Saarbrücken>: Research Report},
EDITOR = {{Max-Planck-Institut f{\"u}r Informatik {\textless}Saarbr{\"u}cken{\textgreater}}},
}
Endnote
%0 Report
%A Dmitriev, Kirill
%A Havran, Vlastimil
%A Seidel, Hans-Peter
%+ Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T Faster ray tracing with SIMD shaft culling :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-28BB-A
%F EDOC: 237860
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2004
%P 13 p.
%B Max-Planck-Institut für Informatik <Saarbrücken>: Research Report
The LEDA class real number - extended version
S. Funke, K. Mehlhorn, S. Schmitt, C. Burnikel, R. Fleischer and S. Schirra
Technical Report, 2004
S. Funke, K. Mehlhorn, S. Schmitt, C. Burnikel, R. Fleischer and S. Schirra
Technical Report, 2004
Export
BibTeX
@techreport{Funke_ECG-TR-363110-01,
TITLE = {The {LEDA} class real number -- extended version},
AUTHOR = {Funke, Stefan and Mehlhorn, Kurt and Schmitt, Susanne and Burnikel, Christoph and Fleischer, Rudolf and Schirra, Stefan},
LANGUAGE = {eng},
NUMBER = {ECG-TR-363110-01},
INSTITUTION = {INRIA},
ADDRESS = {Sophia Antipolis},
YEAR = {2004},
DATE = {2004},
TYPE = {ECG Technical Report},
EDITOR = {{Effective Computational Geometry for Curves and Surfaces}},
}
Endnote
%0 Report
%A Funke, Stefan
%A Mehlhorn, Kurt
%A Schmitt, Susanne
%A Burnikel, Christoph
%A Fleischer, Rudolf
%A Schirra, Stefan
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T The LEDA class real number - extended version :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-2B8C-F
%F EDOC: 237780
%Y INRIA
%C Sophia Antipolis
%D 2004
%Z name of event: Untitled Event
%Z date of event: -
%Z place of event:
%P 2 p.
%B ECG Technical Report
Modeling hair using a wisp hair model
J. Haber, C. Schmitt, M. Koster and H.-P. Seidel
Technical Report, 2004
J. Haber, C. Schmitt, M. Koster and H.-P. Seidel
Technical Report, 2004
Export
BibTeX
@techreport{,
TITLE = {Modeling hair using a wisp hair model},
AUTHOR = {Haber, J{\"o}rg and Schmitt, Carina and Koster, Martin and Seidel, Hans-Peter},
LANGUAGE = {eng},
NUMBER = {MPI-I-2004-4-001},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2004},
DATE = {2004-05},
TYPE = {Max-Planck-Institut für Informatik <Saarbrücken>: Research Report},
EDITOR = {{Max-Planck-Institut f{\"u}r Informatik {\textless}Saarbr{\"u}cken{\textgreater}}},
}
Endnote
%0 Report
%A Haber, Jörg
%A Schmitt, Carina
%A Koster, Martin
%A Seidel, Hans-Peter
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T Modeling hair using a wisp hair model :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-28F6-4
%F EDOC: 237864
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2004
%P 38 p.
%B Max-Planck-Institut für Informatik <Saarbrücken>: Research Report
Effects of a modular filter on geometric applications
M. Hemmer, L. Kettner and E. Schömer
Technical Report, 2004
M. Hemmer, L. Kettner and E. Schömer
Technical Report, 2004
Export
BibTeX
@techreport{Hemmer_ECG-TR-363111-01,
TITLE = {Effects of a modular filter on geometric applications},
AUTHOR = {Hemmer, Michael and Kettner, Lutz and Sch{\"o}mer, Elmar},
LANGUAGE = {eng},
NUMBER = {ECG-TR-363111-01},
INSTITUTION = {INRIA},
ADDRESS = {Sophia Antipolis},
YEAR = {2004},
DATE = {2004},
TYPE = {ECG Technical Report},
EDITOR = {{Effective Computational Geometry for Curves and Surfaces}},
}
Endnote
%0 Report
%A Hemmer, Michael
%A Kettner, Lutz
%A Schömer, Elmar
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Effects of a modular filter on geometric applications :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-2B8F-9
%F EDOC: 237782
%Y INRIA
%C Sophia Antipolis
%D 2004
%Z name of event: Untitled Event
%Z date of event: -
%Z place of event:
%P 7 p.
%B ECG Technical Report
Neural meshes: surface reconstruction with a learning algorithm
I. Ivrissimtzis, W.-K. Jeong, S. Lee, Y. Lee and H.-P. Seidel
Technical Report, 2004
I. Ivrissimtzis, W.-K. Jeong, S. Lee, Y. Lee and H.-P. Seidel
Technical Report, 2004
Export
BibTeX
@techreport{MPI-I-2004-4-005,
TITLE = {Neural meshes: surface reconstruction with a learning algorithm},
AUTHOR = {Ivrissimtzis, Ioannis and Jeong, Won-Ki and Lee, Seungyong and Lee, Yunjin and Seidel, Hans-Peter},
LANGUAGE = {eng},
NUMBER = {MPI-I-2004-4-005},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2004},
DATE = {2004-10},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Ivrissimtzis, Ioannis
%A Jeong, Won-Ki
%A Lee, Seungyong
%A Lee, Yunjin
%A Seidel, Hans-Peter
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T Neural meshes: surface reconstruction with a learning algorithm :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-28C9-A
%F EDOC: 237862
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2004
%P 16 p.
%B Research Report
On algorithms for online topological ordering and sorting
I. Katriel
Technical Report, 2004
I. Katriel
Technical Report, 2004
Export
BibTeX
@techreport{MPI-I-2004-1-003,
TITLE = {On algorithms for online topological ordering and sorting},
AUTHOR = {Katriel, Irit},
LANGUAGE = {eng},
NUMBER = {MPI-I-2004-1-003},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2004},
DATE = {2004-02},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Katriel, Irit
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T On algorithms for online topological ordering and sorting :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-2906-7
%F EDOC: 237878
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2004
%P 12 p.
%B Research Report
Classroom examples of robustness problems in geometric computations
L. Kettner, K. Mehlhorn, S. Pion, S. Schirra and C. Yap
Technical Report, 2004
L. Kettner, K. Mehlhorn, S. Pion, S. Schirra and C. Yap
Technical Report, 2004
Export
BibTeX
@techreport{Kettner_ECG-TR-363100-01,
TITLE = {Classroom examples of robustness problems in geometric computations},
AUTHOR = {Kettner, Lutz and Mehlhorn, Kurt and Pion, Sylvain and Schirra, Stefan and Yap, Chee},
LANGUAGE = {eng},
NUMBER = {ECG-TR-363100-01},
INSTITUTION = {INRIA},
ADDRESS = {Sophia Antipolis},
YEAR = {2004},
DATE = {2004},
TYPE = {ECG Technical Report},
EDITOR = {{Effective Computational Geometry for Curves and Surfaces}},
VOLUME = {3221},
}
Endnote
%0 Report
%A Kettner, Lutz
%A Mehlhorn, Kurt
%A Pion, Sylvain
%A Schirra, Stefan
%A Yap, Chee
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Classroom examples of robustness problems in geometric computations :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-2B92-0
%F EDOC: 237797
%Y INRIA
%C Sophia Antipolis
%D 2004
%P 12 p.
%B ECG Technical Report
%N 3221
A fast root checking algorithm
C. Klein
Technical Report, 2004
C. Klein
Technical Report, 2004
Export
BibTeX
@techreport{Klein_ECG-TR-363109-02,
TITLE = {A fast root checking algorithm},
AUTHOR = {Klein, Christian},
LANGUAGE = {eng},
NUMBER = {ECG-TR-363109-02},
INSTITUTION = {INRIA},
ADDRESS = {Sophia Antipolis},
YEAR = {2004},
DATE = {2004},
TYPE = {ECG Technical Report},
EDITOR = {{Effective Computational Geometry for Curves and Surfaces}},
}
Endnote
%0 Report
%A Klein, Christian
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T A fast root checking algorithm :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-2B96-8
%F EDOC: 237826
%Y INRIA
%C Sophia Antipolis
%D 2004
%P 11 p.
%B ECG Technical Report
New bounds for the Descartes method
W. Krandick and K. Mehlhorn
Technical Report, 2004
W. Krandick and K. Mehlhorn
Technical Report, 2004
Export
BibTeX
@techreport{Krandick_DU-CS-04-04,
TITLE = {New bounds for the Descartes method},
AUTHOR = {Krandick, Werner and Mehlhorn, Kurt},
LANGUAGE = {eng},
NUMBER = {DU-CS-04-04},
INSTITUTION = {Drexel University},
ADDRESS = {Philadelphia, Pa.},
YEAR = {2004},
DATE = {2004},
TYPE = {Drexel University / Department of Computer Science: Technical Report},
EDITOR = {{Drexel University {\textless}Philadelphia, Pa.{\textgreater} / Department of Computer Science}},
}
Endnote
%0 Report
%A Krandick, Werner
%A Mehlhorn, Kurt
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T New bounds for the Descartes method :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-2B99-2
%F EDOC: 237829
%Y Drexel University
%C Philadelphia, Pa.
%D 2004
%P 18 p.
%B Drexel University / Department of Computer Science: Technical Report
A simpler linear time 2/3-epsilon approximation
P. Sanders and S. Pettie
Technical Report, 2004a
P. Sanders and S. Pettie
Technical Report, 2004a
Export
BibTeX
@techreport{Sanders2004a,
TITLE = {A simpler linear time 2/3-epsilon approximation},
AUTHOR = {Sanders, Peter and Pettie, Seth},
LANGUAGE = {eng},
NUMBER = {MPI-I-2004-1-002},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2004},
DATE = {2004-01},
TYPE = {Max-Planck-Institut für Informatik <Saarbrücken>: Research Report},
EDITOR = {{Max-Planck-Institut f{\"u}r Informatik {\textless}Saarbr{\"u}cken{\textgreater}}},
}
Endnote
%0 Report
%A Sanders, Peter
%A Pettie, Seth
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T A simpler linear time 2/3-epsilon approximation :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-2909-1
%F EDOC: 237880
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2004
%P 7 p.
%B Max-Planck-Institut für Informatik <Saarbrücken>: Research Report
A simpler linear time 2/3 - epsilon approximation for maximum weight matching
P. Sanders and S. Pettie
Technical Report, 2004b
P. Sanders and S. Pettie
Technical Report, 2004b
Abstract
We present two $\frac{2}{3} - \epsilon$ approximation algorithms for the
maximum weight matching problem that run in time
$O(m\log\frac{1}{\epsilon})$. We give a simple and practical
randomized algorithm and a somewhat more complicated deterministic
algorithm. Both algorithms are exponentially faster in
terms of $\epsilon$ than a recent algorithm by Drake and Hougardy.
We also show that our algorithms can be generalized to find a
$1-\epsilon$ approximation to the maximum weight matching, for any
$\epsilon>0$.
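As context for the guarantee described in the abstract, a minimal sketch (not the report's algorithm) of the classic greedy heuristic it improves upon: sorting edges by weight and taking every edge whose endpoints are still free yields a matching of at least half the maximum weight. The graph and weights below are illustrative.

```python
def greedy_matching(edges):
    """Greedy 1/2-approximation. edges: list of (u, v, weight) tuples."""
    matched = set()
    matching = []
    for u, v, w in sorted(edges, key=lambda e: -e[2]):
        if u not in matched and v not in matched:
            matching.append((u, v, w))
            matched.update((u, v))
    return matching

# Example: a path a-b-c-d with weights 2, 3, 2. Greedy takes the heaviest
# edge (b, c) and blocks the rest: weight 3 versus the optimum 4.
print(greedy_matching([("a", "b", 2), ("b", "c", 3), ("c", "d", 2)]))
# → [('b', 'c', 3)]
```

The $\frac{2}{3} - \epsilon$ algorithms of the report beat this baseline while staying near-linear in $m$.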
Export
BibTeX
@techreport{Sanders2004b,
TITLE = {A simpler linear time 2/3 -- epsilon approximation for maximum weight matching},
AUTHOR = {Sanders, Peter and Pettie, Seth},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2004-1-002},
NUMBER = {MPI-I-2004-1-002},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2004},
DATE = {2004},
ABSTRACT = {We present two $\frac{2}{3} - \epsilon$ approximation algorithms for the maximum weight matching problem that run in time $O(m\log\frac{1}{\epsilon})$. We give a simple and practical randomized algorithm and a somewhat more complicated deterministic algorithm. Both algorithms are exponentially faster in terms of $\epsilon$ than a recent algorithm by Drake and Hougardy. We also show that our algorithms can be generalized to find a $1-\epsilon$ approximation to the maximum weight matching, for any $\epsilon>0$.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Sanders, Peter
%A Pettie, Seth
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T A simpler linear time 2/3 - epsilon approximation for maximum weight matching :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6862-B
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2004-1-002
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2004
%P 10 p.
%X We present two $\frac{2}{3} - \epsilon$ approximation algorithms for the
maximum weight matching problem that run in time
$O(m\log\frac{1}{\epsilon})$. We give a simple and practical
randomized algorithm and a somewhat more complicated deterministic
algorithm. Both algorithms are exponentially faster in
terms of $\epsilon$ than a recent algorithm by Drake and Hougardy.
We also show that our algorithms can be generalized to find a
$1-\epsilon$ approximation to the maximum weight matching, for any
$\epsilon>0$.
%B Research Report / Max-Planck-Institut für Informatik
Common subexpression search in LEDA_reals : a study of the diamond-operator
S. Schmitt
Technical Report, 2004a
S. Schmitt
Technical Report, 2004a
Export
BibTeX
@techreport{Schmitt_ECG-TR-363109-01,
TITLE = {Common subexpression search in {LEDA}{\textunderscore}reals : a study of the diamond-operator},
AUTHOR = {Schmitt, Susanne},
LANGUAGE = {eng},
NUMBER = {ECG-TR-363109-01},
INSTITUTION = {INRIA},
ADDRESS = {Sophia Antipolis},
YEAR = {2004},
DATE = {2004},
TYPE = {ECG Technical Report},
EDITOR = {{Effective Computational Geometry for Curves and Surfaces}},
}
Endnote
%0 Report
%A Schmitt, Susanne
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Common subexpression search in LEDA_reals : a study of the diamond-operator :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-2B9C-B
%F EDOC: 237830
%Y INRIA
%C Sophia Antipolis
%D 2004
%P 5 p.
%B ECG Technical Report
Improved separation bounds for the diamond operator
S. Schmitt
Technical Report, 2004b
S. Schmitt
Technical Report, 2004b
Export
BibTeX
@techreport{Schmitt_ECG-TR-363108-01,
TITLE = {Improved separation bounds for the diamond operator},
AUTHOR = {Schmitt, Susanne},
LANGUAGE = {eng},
NUMBER = {ECG-TR-363108-01},
INSTITUTION = {INRIA},
ADDRESS = {Sophia Antipolis},
YEAR = {2004},
DATE = {2004},
TYPE = {ECG Technical Report},
EDITOR = {{Effective Computational Geometry for Curves and Surfaces}},
}
Endnote
%0 Report
%A Schmitt, Susanne
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Improved separation bounds for the diamond operator :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-2B9F-5
%F EDOC: 237831
%Y INRIA
%C Sophia Antipolis
%D 2004
%P 13 p.
%B ECG Technical Report
A comparison of polynomial evaluation schemes
S. Schmitt and L. Fousse
Technical Report, 2004
S. Schmitt and L. Fousse
Technical Report, 2004
Export
BibTeX
@techreport{MPI-I-2004-1-005,
TITLE = {A comparison of polynomial evaluation schemes},
AUTHOR = {Schmitt, Susanne and Fousse, Laurent},
LANGUAGE = {eng},
NUMBER = {MPI-I-2004-1-005},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2004},
DATE = {2004-06},
TYPE = {Max-Planck-Institut für Informatik <Saarbrücken>: Research Report},
EDITOR = {Becker and {Max-Planck-Institut f{\"u}r Informatik {\textless}Saarbr{\"u}cken{\textgreater}}},
}
Endnote
%0 Report
%A Schmitt, Susanne
%A Fousse, Laurent
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T A comparison of polynomial evaluation schemes :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-28EC-B
%F EDOC: 237875
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2004
%P 16 p.
%B Max-Planck-Institut für Informatik <Saarbrücken>: Research Report
Goal-oriented methods and meta methods for document classification and their parameter tuning
S. Siersdorfer, S. Sizov and G. Weikum
Technical Report, 2004
S. Siersdorfer, S. Sizov and G. Weikum
Technical Report, 2004
Export
BibTeX
@techreport{MPI-I-2004-5-001,
TITLE = {Goal-oriented methods and meta methods for document classification and their parameter tuning},
AUTHOR = {Siersdorfer, Stefan and Sizov, Sergej and Weikum, Gerhard},
LANGUAGE = {eng},
NUMBER = {MPI-I-2004-5-001},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2004},
DATE = {2004-05},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Siersdorfer, Stefan
%A Sizov, Sergej
%A Weikum, Gerhard
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
%T Goal-oriented methods and meta methods for document classification and their parameter tuning :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-28F3-A
%F EDOC: 237842
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2004
%P 36 p.
%B Research Report
On scheduling with bounded migration
N. Sivadasan, P. Sanders and M. Skutella
Technical Report, 2004a
N. Sivadasan, P. Sanders and M. Skutella
Technical Report, 2004a
Export
BibTeX
@techreport{Sivadasan2004a,
TITLE = {On scheduling with bounded migration},
AUTHOR = {Sivadasan, Naveen and Sanders, Peter and Skutella, Martin},
LANGUAGE = {eng},
NUMBER = {MPI-I-2004-1-004},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2004},
DATE = {2004-05},
TYPE = {Max-Planck-Institut für Informatik <Saarbrücken>: Research Report},
EDITOR = {{Max-Planck-Institut f{\"u}r Informatik {\textless}Saarbr{\"u}cken{\textgreater}}},
}
Endnote
%0 Report
%A Sivadasan, Naveen
%A Sanders, Peter
%A Skutella, Martin
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T On scheduling with bounded migration :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-28F9-D
%F EDOC: 237877
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2004
%P 22 p.
%B Max-Planck-Institut für Informatik <Saarbrücken>: Research Report
Online scheduling with bounded migration
N. Sivadasan, P. Sanders and M. Skutella
Technical Report, 2004b
N. Sivadasan, P. Sanders and M. Skutella
Technical Report, 2004b
Export
BibTeX
@techreport{Sivadasan2004b,
TITLE = {Online scheduling with bounded migration},
AUTHOR = {Sivadasan, Naveen and Sanders, Peter and Skutella, Martin},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2004-1-004},
NUMBER = {MPI-I-2004-1-004},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2004},
DATE = {2004},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Sivadasan, Naveen
%A Sanders, Peter
%A Skutella, Martin
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Online scheduling with bounded migration :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-685F-4
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2004-1-004
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2004
%P 21 p.
%B Research Report / Max-Planck-Institut für Informatik
r-Adaptive parameterization of surfaces
R. Zayer, C. Rössl and H.-P. Seidel
Technical Report, 2004
R. Zayer, C. Rössl and H.-P. Seidel
Technical Report, 2004
Export
BibTeX
@techreport{MPI-I-2004-4-004,
TITLE = {r-Adaptive parameterization of surfaces},
AUTHOR = {Zayer, Rhaleb and R{\"o}ssl, Christian and Seidel, Hans-Peter},
LANGUAGE = {eng},
NUMBER = {MPI-I-2004-4-004},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2004},
DATE = {2004-06},
TYPE = {Max-Planck-Institut für Informatik <Saarbrücken>: Research Report},
EDITOR = {{Max-Planck-Institut f{\"u}r Informatik {\textless}Saarbr{\"u}cken{\textgreater}}},
}
Endnote
%0 Report
%A Zayer, Rhaleb
%A Rössl, Christian
%A Seidel, Hans-Peter
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T r-Adaptive parameterization of surfaces :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-000F-28E9-2
%F EDOC: 237863
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2004
%P 10 p.
%B Max-Planck-Institut für Informatik <Saarbrücken>: Research Report
2003
Improving linear programming approaches for the Steiner tree problem
E. Althaus, T. Polzin and S. Daneshmand
Technical Report, 2003
E. Althaus, T. Polzin and S. Daneshmand
Technical Report, 2003
Abstract
We present two theoretically interesting and empirically successful
techniques for improving the linear programming approaches, namely
graph transformation and local cuts, in the context of the
Steiner problem. We show the impact of these techniques on the
solution of the largest benchmark instances ever solved.
Export
BibTeX
@techreport{MPI-I-2003-1-004,
TITLE = {Improving linear programming approaches for the Steiner tree problem},
AUTHOR = {Althaus, Ernst and Polzin, Tobias and Daneshmand, Siavash},
LANGUAGE = {eng},
NUMBER = {MPI-I-2003-1-004},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2003},
DATE = {2003},
ABSTRACT = {We present two theoretically interesting and empirically successful techniques for improving the linear programming approaches, namely graph transformation and local cuts, in the context of the Steiner problem. We show the impact of these techniques on the solution of the largest benchmark instances ever solved.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Althaus, Ernst
%A Polzin, Tobias
%A Daneshmand, Siavash
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Improving linear programming approaches for the Steiner tree problem :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6BB9-F
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2003
%P 19 p.
%X We present two theoretically interesting and empirically successful
techniques for improving the linear programming approaches, namely
graph transformation and local cuts, in the context of the
Steiner problem. We show the impact of these techniques on the
solution of the largest benchmark instances ever solved.
%B Research Report / Max-Planck-Institut für Informatik
Random knapsack in expected polynomial time
R. Beier and B. Vöcking
Technical Report, 2003
R. Beier and B. Vöcking
Technical Report, 2003
Abstract
In this paper, we present the first average-case analysis proving an expected
polynomial running time for an exact algorithm for the 0/1 knapsack problem.
In particular, we prove, for various input distributions, that the number of
{\em dominating solutions\/} (i.e., Pareto-optimal knapsack fillings)
to this problem is polynomially bounded in the number of available items.
An algorithm by Nemhauser and Ullmann can enumerate these solutions very
efficiently so that a polynomial upper bound on the number of dominating
solutions implies an algorithm with expected polynomial running time.
The random input model underlying our analysis is very general
and not restricted to a particular input distribution. We assume adversarial
weights and randomly drawn profits (or vice versa). Our analysis covers
general probability
distributions with finite mean, and, in its most general form, can even
handle different probability distributions for the profits of different items.
This feature enables us to study the effects of correlations between profits
and weights. Our analysis confirms and explains practical studies showing
that so-called strongly correlated instances are harder to solve than
weakly correlated ones.
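The Nemhauser-Ullmann enumeration that the abstract's analysis builds on can be sketched as follows: process the items one at a time, each time merging the current Pareto set with its shifted copy and discarding dominated (weight, profit) pairs. The item weights and profits below are illustrative.

```python
def pareto_optimal_fillings(items):
    """Nemhauser-Ullmann style enumeration of dominating (Pareto-optimal)
    knapsack fillings. items: list of (weight, profit) pairs."""
    pareto = [(0, 0)]  # the empty knapsack
    for w, p in items:
        candidates = pareto + [(wi + w, pi + p) for wi, pi in pareto]
        # Sort by weight (ties: higher profit first), then keep only
        # entries whose profit strictly exceeds every lighter entry's.
        pareto, best = [], -1
        for wi, pi in sorted(candidates, key=lambda t: (t[0], -t[1])):
            if pi > best:
                pareto.append((wi, pi))
                best = pi
    return pareto

# Three items; 5 of the 8 subsets survive as Pareto-optimal fillings.
print(pareto_optimal_fillings([(3, 4), (2, 3), (4, 2)]))
# → [(0, 0), (2, 3), (3, 4), (5, 7), (9, 9)]
```

The running time is proportional to the total number of Pareto-optimal solutions maintained, which is exactly the quantity the report bounds polynomially in expectation.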
Export
BibTeX
@techreport{MPI-I-2003-1-003,
TITLE = {Random knapsack in expected polynomial time},
AUTHOR = {Beier, Ren{\'e} and V{\"o}cking, Berthold},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-1-003},
NUMBER = {MPI-I-2003-1-003},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2003},
DATE = {2003},
ABSTRACT = {In this paper, we present the first average-case analysis proving an expected polynomial running time for an exact algorithm for the 0/1 knapsack problem. In particular, we prove, for various input distributions, that the number of {\em dominating solutions\/} (i.e., Pareto-optimal knapsack fillings) to this problem is polynomially bounded in the number of available items. An algorithm by Nemhauser and Ullmann can enumerate these solutions very efficiently so that a polynomial upper bound on the number of dominating solutions implies an algorithm with expected polynomial running time. The random input model underlying our analysis is very general and not restricted to a particular input distribution. We assume adversarial weights and randomly drawn profits (or vice versa). Our analysis covers general probability distributions with finite mean, and, in its most general form, can even handle different probability distributions for the profits of different items. This feature enables us to study the effects of correlations between profits and weights. Our analysis confirms and explains practical studies showing that so-called strongly correlated instances are harder to solve than weakly correlated ones.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Beier, René
%A Vöcking, Berthold
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Random knapsack in expected polynomial time :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6BBC-9
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-1-003
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2003
%P 22 p.
%X In this paper, we present the first average-case analysis proving an expected
polynomial running time for an exact algorithm for the 0/1 knapsack problem.
In particular, we prove, for various input distributions, that the number of
{\em dominating solutions\/} (i.e., Pareto-optimal knapsack fillings)
to this problem is polynomially bounded in the number of available items.
An algorithm by Nemhauser and Ullmann can enumerate these solutions very
efficiently so that a polynomial upper bound on the number of dominating
solutions implies an algorithm with expected polynomial running time.
The random input model underlying our analysis is very general
and not restricted to a particular input distribution. We assume adversarial
weights and randomly drawn profits (or vice versa). Our analysis covers
general probability
distributions with finite mean, and, in its most general form, can even
handle different probability distributions for the profits of different items.
This feature enables us to study the effects of correlations between profits
and weights. Our analysis confirms and explains practical studies showing
that so-called strongly correlated instances are harder to solve than
weakly correlated ones.
%B Research Report / Max-Planck-Institut für Informatik
A custom designed density estimation method for light transport
P. Bekaert, P. Slusallek, R. Cools, V. Havran and H.-P. Seidel
Technical Report, 2003
P. Bekaert, P. Slusallek, R. Cools, V. Havran and H.-P. Seidel
Technical Report, 2003
Abstract
We present a new Monte Carlo method for solving the global illumination
problem in environments with general geometry descriptions and
light emission and scattering properties. Current
Monte Carlo global illumination algorithms are based
on generic density estimation techniques that do not take into account any
knowledge about the nature of the data points --- light and potential
particle hit points --- from which a global illumination solution is to be
reconstructed. We propose a novel estimator, especially designed
for solving linear integral equations such as the rendering equation.
The resulting single-pass global illumination algorithm promises to
combine the flexibility and robustness of bi-directional
path tracing with the efficiency of algorithms such as photon mapping.
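As a point of reference for the "generic density estimation techniques" the abstract contrasts against, a minimal one-dimensional kernel density estimator (Gaussian kernel; sample, bandwidth, and evaluation point all illustrative) reconstructs a density from sample hit points like this:

```python
import math
import random

def kde(points, x, bandwidth):
    """Generic Gaussian kernel density estimate at x from sample points --
    the kind of reconstruction-from-hit-points step that a custom,
    problem-aware estimator would replace."""
    n = len(points)
    return sum(
        math.exp(-0.5 * ((x - p) / bandwidth) ** 2)
        / (bandwidth * math.sqrt(2 * math.pi))
        for p in points
    ) / n

random.seed(0)
sample = [random.gauss(0.0, 1.0) for _ in range(2000)]
# The true standard-normal density at 0 is about 0.399; the estimate
# lands close to it (slightly biased low by the smoothing bandwidth).
print(kde(sample, 0.0, 0.3))
```

Such estimators ignore that photon hit points come from a linear integral equation; exploiting that structure is the report's contribution.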
Export
BibTeX
@techreport{BekaertSlusallekCoolsHavranSeidel,
TITLE = {A custom designed density estimation method for light transport},
AUTHOR = {Bekaert, Philippe and Slusallek, Philipp and Cools, Ronald and Havran, Vlastimil and Seidel, Hans-Peter},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-4-004},
NUMBER = {MPI-I-2003-4-004},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2003},
DATE = {2003},
ABSTRACT = {We present a new Monte Carlo method for solving the global illumination problem in environments with general geometry descriptions and light emission and scattering properties. Current Monte Carlo global illumination algorithms are based on generic density estimation techniques that do not take into account any knowledge about the nature of the data points --- light and potential particle hit points --- from which a global illumination solution is to be reconstructed. We propose a novel estimator, especially designed for solving linear integral equations such as the rendering equation. The resulting single-pass global illumination algorithm promises to combine the flexibility and robustness of bi-directional path tracing with the efficiency of algorithms such as photon mapping.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Bekaert, Philippe
%A Slusallek, Philipp
%A Cools, Ronald
%A Havran, Vlastimil
%A Seidel, Hans-Peter
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Cluster of Excellence Multimodal Computing and Interaction
External Organizations
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T A custom designed density estimation method for light transport :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6922-2
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-4-004
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2003
%P 28 p.
%X We present a new Monte Carlo method for solving the global illumination
problem in environments with general geometry descriptions and
light emission and scattering properties. Current
Monte Carlo global illumination algorithms are based
on generic density estimation techniques that do not take into account any
knowledge about the nature of the data points --- light and potential
particle hit points --- from which a global illumination solution is to be
reconstructed. We propose a novel estimator, especially designed
for solving linear integral equations such as the rendering equation.
The resulting single-pass global illumination algorithm promises to
combine the flexibility and robustness of bi-directional
path tracing with the efficiency of algorithms such as photon mapping.
%B Research Report / Max-Planck-Institut für Informatik
Girth and treewidth
S. Chandran Leela and C. R. Subramanian
Technical Report, 2003
S. Chandran Leela and C. R. Subramanian
Technical Report, 2003
Export
BibTeX
@techreport{MPI-I-2003-NWG2-001,
TITLE = {Girth and treewidth},
AUTHOR = {Chandran Leela, Sunil and Subramanian, C. R.},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-NWG2-001},
NUMBER = {MPI-I-2003-NWG2-001},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2003},
DATE = {2003},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Chandran Leela, Sunil
%A Subramanian, C. R.
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Girth and treewidth :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6868-0
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-NWG2-001
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2003
%P 11 p.
%B Research Report / Max-Planck-Institut für Informatik
On the Bollobás-Eldridge conjecture for bipartite graphs
B. Csaba
Technical Report, 2003
B. Csaba
Technical Report, 2003
Abstract
Let $G$ be a simple graph on $n$ vertices. A conjecture of
Bollob\'as and Eldridge~\cite{be78} asserts that if
$\delta (G)\ge {kn-1 \over k+1}$
then $G$ contains any $n$ vertex graph $H$ with $\Delta(H) = k$.
We strengthen this conjecture: we prove that if $H$ is bipartite,
$3 \le \Delta(H)$ is bounded and $n$ is sufficiently large, then there exists
$\beta >0$ such that if $\delta (G)\ge {\Delta \over {\Delta +1}}(1-\beta)n$,
then $H \subset G$.
Export
BibTeX
@techreport{Csaba2003,
TITLE = {On the Bollob{\'a}s -- Eldridge conjecture for bipartite graphs},
AUTHOR = {Csaba, Bela},
LANGUAGE = {eng},
NUMBER = {MPI-I-2003-1-009},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2003},
DATE = {2003},
ABSTRACT = {Let $G$ be a simple graph on $n$ vertices. A conjecture of Bollob\'as and Eldridge~\cite{be78} asserts that if $\delta (G)\ge {kn-1 \over k+1}$ then $G$ contains any $n$ vertex graph $H$ with $\Delta(H) = k$. We strengthen this conjecture: we prove that if $H$ is bipartite, $3 \le \Delta(H)$ is bounded and $n$ is sufficiently large, then there exists $\beta >0$ such that if $\delta (G)\ge {\Delta \over {\Delta +1}}(1-\beta)n$, then $H \subset G$.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Csaba, Bela
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T On the Bollobás -- Eldridge conjecture for bipartite graphs :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6B3A-F
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2003
%P 29 p.
%X Let $G$ be a simple graph on $n$ vertices. A conjecture of
Bollob\'as and Eldridge~\cite{be78} asserts that if
$\delta (G)\ge {kn-1 \over k+1}$
then $G$ contains any $n$ vertex graph $H$ with $\Delta(H) = k$.
We strengthen this conjecture: we prove that if $H$ is bipartite,
$3 \le \Delta(H)$ is bounded and $n$ is sufficiently large, then there exists
$\beta >0$ such that if $\delta (G)\ge {\Delta \over {\Delta +1}}(1-\beta)n$,
then $H \subset G$.
%B Research Report / Max-Planck-Institut für Informatik
On the probability of rendezvous in graphs
M. Dietzfelbinger and H. Tamaki
Technical Report, 2003
M. Dietzfelbinger and H. Tamaki
Technical Report, 2003
Abstract
In a simple graph $G$ without isolated nodes the
following random experiment is carried out:
each node chooses one
of its neighbors uniformly at random.
We say a rendezvous occurs
if there are adjacent nodes $u$ and $v$
such that $u$ chooses $v$
and $v$ chooses $u$;
the probability that this happens is denoted by $s(G)$.
M{\'e}tivier \emph{et al.} (2000) asked
whether it is true
that $s(G)\ge s(K_n)$
for all $n$-node graphs $G$,
where $K_n$ is the complete graph on $n$ nodes.
We show that this is the case.
Moreover, we show that evaluating $s(G)$
for a given graph $G$ is a \#P-complete problem,
even if only $d$-regular graphs are considered,
for any $d\ge5$.
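For tiny graphs, $s(G)$ can be computed exactly by enumerating every vector of neighbor choices, which suffices to check the inequality $s(G)\ge s(K_n)$ on small examples. The adjacency lists below are illustrative, and the enumeration is exponential in the number of nodes.

```python
from fractions import Fraction
from itertools import product

def rendezvous_probability(adj):
    """Exact s(G) by brute force. adj: dict node -> list of neighbors
    (the graph must have no isolated nodes)."""
    nodes = sorted(adj)
    total = hits = 0
    for choice in product(*(adj[v] for v in nodes)):
        pick = dict(zip(nodes, choice))
        total += 1
        # A rendezvous occurs if some node's chosen neighbor chose it back.
        if any(pick[pick[v]] == v for v in nodes):
            hits += 1
    return Fraction(hits, total)

K3 = {1: [2, 3], 2: [1, 3], 3: [1, 2]}   # complete graph on 3 nodes
P3 = {1: [2], 2: [1, 3], 3: [2]}          # path 1-2-3
print(rendezvous_probability(K3), rendezvous_probability(P3))
# prints: 3/4 1
```

On the path, node 2's choice is always reciprocated by a leaf, so $s(P_3)=1\ge s(K_3)=3/4$, consistent with the inequality shown in the report.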
Export
BibTeX
@techreport{MPI-I-2003-1-006,
TITLE = {On the probability of rendezvous in graphs},
AUTHOR = {Dietzfelbinger, Martin and Tamaki, Hisao},
LANGUAGE = {eng},
NUMBER = {MPI-I-2003-1-006},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2003},
DATE = {2003},
ABSTRACT = {In a simple graph $G$ without isolated nodes the following random experiment is carried out: each node chooses one of its neighbors uniformly at random. We say a rendezvous occurs if there are adjacent nodes $u$ and $v$ such that $u$ chooses $v$ and $v$ chooses $u$; the probability that this happens is denoted by $s(G)$. M{\'e}tivier \emph{et al.} (2000) asked whether it is true that $s(G)\ge s(K_n)$ for all $n$-node graphs $G$, where $K_n$ is the complete graph on $n$ nodes. We show that this is the case. Moreover, we show that evaluating $s(G)$ for a given graph $G$ is a \numberP-complete problem, even if only $d$-regular graphs are considered, for any $d\ge5$.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Dietzfelbinger, Martin
%A Tamaki, Hisao
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T On the probability of rendezvous in graphs :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6B83-7
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2003
%P 30 p.
%X In a simple graph $G$ without isolated nodes the following random experiment
is carried out: each node chooses one of its neighbors uniformly at random. We
say a rendezvous occurs if there are adjacent nodes $u$ and $v$ such that $u$
chooses $v$ and $v$ chooses $u$; the probability that this happens is denoted
by $s(G)$. M{\'e}tivier \emph{et al.} (2000) asked whether it is true that
$s(G)\ge s(K_n)$ for all $n$-node graphs $G$, where $K_n$ is the complete graph
on $n$ nodes. We show that this is the case. Moreover, we show that evaluating
$s(G)$ for a given graph $G$ is a \numberP-complete problem, even if only
$d$-regular graphs are considered, for any $d\ge5$.
%B Research Report / Max-Planck-Institut für Informatik
Almost random graphs with simple hash functions
M. Dietzfelbinger and P. Woelfel
Technical Report, 2003
M. Dietzfelbinger and P. Woelfel
Technical Report, 2003
Abstract
We describe a simple randomized construction for generating pairs of
hash functions h_1,h_2 from a universe U to ranges V=[m]={0,1,...,m-1}
and W=[m] so that for every key set S\subseteq U with
n=|S| <= m/(1+epsilon) the (random) bipartite (multi)graph with node
set V + W and edge set {(h_1(x),h_2(x)) | x in S} exhibits a structure
that is essentially random. The construction combines d-wise
independent classes for d a relatively small constant with the
well-known technique of random offsets. While keeping the space
needed to store the description of h_1 and h_2 at O(n^zeta), for
zeta<1 fixed arbitrarily, we obtain a much smaller (constant)
evaluation time than previous constructions of this kind, which
involved Siegel's high-performance hash classes. The main new
technique is the combined analysis of the graph structure and the
inner structure of the hash functions, as well as a new way of looking
at the cycle structure of random (multi)graphs. The construction may
be applied to improve on Pagh and Rodler's ``cuckoo hashing'' (2001),
to obtain a simpler and faster alternative to a recent construction of
"Ostlin and Pagh (2002/03) for simulating uniform hashing on a key set
S, and to the simulation of shared memory on distributed memory
machines. We also describe a novel way of implementing (approximate)
d-wise independent hashing without using polynomials.
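For context, the classical way to obtain a d-wise independent hash class is a uniformly random polynomial of degree below d over a prime field; the report combines such classes with random offsets and ultimately shows how to avoid polynomial evaluation altogether. A hypothetical Python sketch of the polynomial baseline and of the edge set {(h_1(x), h_2(x)) | x in S} it induces (names and parameters are ours):

```python
import random

def make_d_wise_hash(d, m, p=2_147_483_647, seed=0):
    """Textbook (approximately) d-wise independent hash function: a
    uniformly random polynomial of degree < d over GF(p), p prime,
    reduced mod the range size m. The report's construction replaces
    this polynomial evaluation with a faster scheme."""
    rng = random.Random(seed)
    coeffs = [rng.randrange(p) for _ in range(d)]
    def h(x):
        acc = 0
        for c in reversed(coeffs):  # Horner's rule
            acc = (acc * x + c) % p
        return acc % m
    return h

# A pair (h1, h2) defines the bipartite multigraph with edge set
# {(h1(x), h2(x)) | x in S} analyzed in the report.
m = 16
h1 = make_d_wise_hash(d=5, m=m, seed=1)
h2 = make_d_wise_hash(d=5, m=m, seed=2)
S = range(100)
edges = [(h1(x), h2(x)) for x in S]
```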
Export
BibTeX
@techreport{MPI-I-2003-1-005,
TITLE = {Almost random graphs with simple hash functions},
AUTHOR = {Dietzfelbinger, Martin and Woelfel, Philipp},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-1-005},
NUMBER = {MPI-I-2003-1-005},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2003},
DATE = {2003},
ABSTRACT = {We describe a simple randomized construction for generating pairs of hash functions h_1,h_2 from a universe U to ranges V=[m]={0,1,...,m-1} and W=[m] so that for every key set S\subseteq U with n=|S| <= m/(1+epsilon) the (random) bipartite (multi)graph with node set V + W and edge set {(h_1(x),h_2(x)) | x in S} exhibits a structure that is essentially random. The construction combines d-wise independent classes for d a relatively small constant with the well-known technique of random offsets. While keeping the space needed to store the description of h_1 and h_2 at O(n^zeta), for zeta<1 fixed arbitrarily, we obtain a much smaller (constant) evaluation time than previous constructions of this kind, which involved Siegel's high-performance hash classes. The main new technique is the combined analysis of the graph structure and the inner structure of the hash functions, as well as a new way of looking at the cycle structure of random (multi)graphs. The construction may be applied to improve on Pagh and Rodler's ``cuckoo hashing'' (2001), to obtain a simpler and faster alternative to a recent construction of {\"O}stlin and Pagh (2002/03) for simulating uniform hashing on a key set S, and to the simulation of shared memory on distributed memory machines. We also describe a novel way of implementing (approximate) d-wise independent hashing without using polynomials.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Dietzfelbinger, Martin
%A Woelfel, Philipp
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
External Organizations
%T Almost random graphs with simple hash functions :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6BB3-C
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-1-005
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2003
%P 23 p.
%X We describe a simple randomized construction for generating pairs of
hash functions h_1,h_2 from a universe U to ranges V=[m]={0,1,...,m-1}
and W=[m] so that for every key set S\subseteq U with
n=|S| <= m/(1+epsilon) the (random) bipartite (multi)graph with node
set V + W and edge set {(h_1(x),h_2(x)) | x in S} exhibits a structure
that is essentially random. The construction combines d-wise
independent classes for d a relatively small constant with the
well-known technique of random offsets. While keeping the space
needed to store the description of h_1 and h_2 at O(n^zeta), for
zeta<1 fixed arbitrarily, we obtain a much smaller (constant)
evaluation time than previous constructions of this kind, which
involved Siegel's high-performance hash classes. The main new
technique is the combined analysis of the graph structure and the
inner structure of the hash functions, as well as a new way of looking
at the cycle structure of random (multi)graphs. The construction may
be applied to improve on Pagh and Rodler's ``cuckoo hashing'' (2001),
to obtain a simpler and faster alternative to a recent construction of
"Ostlin and Pagh (2002/03) for simulating uniform hashing on a key set
S, and to the simulation of shared memory on distributed memory
machines. We also describe a novel way of implementing (approximate)
d-wise independent hashing without using polynomials.
%B Research Report / Max-Planck-Institut für Informatik
Specification of the Traits Classes for CGAL Arrangements of Curves
E. Fogel, D. Halperin, R. Wein, M. Teillaud, E. Berberich, A. Eigenwillig, S. Hert and L. Kettner
Technical Report, 2003
E. Fogel, D. Halperin, R. Wein, M. Teillaud, E. Berberich, A. Eigenwillig, S. Hert and L. Kettner
Technical Report, 2003
Export
BibTeX
@techreport{ecg:fhw-stcca-03,
TITLE = {Specification of the Traits Classes for {CGAL} Arrangements of Curves},
AUTHOR = {Fogel, Efi and Halperin, Dan and Wein, Ron and Teillaud, Monique and Berberich, Eric and Eigenwillig, Arno and Hert, Susan and Kettner, Lutz},
LANGUAGE = {eng},
NUMBER = {ECG-TR-241200-01},
INSTITUTION = {INRIA},
ADDRESS = {Sophia-Antipolis},
YEAR = {2003},
DATE = {2003},
TYPE = {Technical Report},
}
Endnote
%0 Report
%A Fogel, Efi
%A Halperin, Dan
%A Wein, Ron
%A Teillaud, Monique
%A Berberich, Eric
%A Eigenwillig, Arno
%A Hert, Susan
%A Kettner, Lutz
%+ External Organizations
External Organizations
Algorithms and Complexity, MPI for Informatics, Max Planck Society
External Organizations
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Specification of the Traits Classes for CGAL Arrangements of Curves :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0027-B4C6-5
%Y INRIA
%C Sophia-Antipolis
%D 2003
%B Technical Report
The dimension of $C^1$ splines of arbitrary degree on a tetrahedral partition
T. Hangelbroek, G. Nürnberger, C. Rössl, H.-P. Seidel and F. Zeilfelder
Technical Report, 2003
T. Hangelbroek, G. Nürnberger, C. Rössl, H.-P. Seidel and F. Zeilfelder
Technical Report, 2003
Abstract
We consider the linear space of piecewise polynomials in three variables
which are globally smooth, i.e., trivariate $C^1$ splines. The splines are
defined on a uniform tetrahedral partition $\Delta$, which is a natural
generalization of the four-directional mesh. By using Bernstein-Bézier
techniques, we establish formulae for the dimension of the $C^1$ splines
of arbitrary degree.
Export
BibTeX
@techreport{HangelbroekNurnbergerRoesslSeidelZeilfelder2003,
TITLE = {The dimension of \$C{\textasciicircum}1\$ splines of arbitrary degree on a tetrahedral partition},
AUTHOR = {Hangelbroek, Thomas and N{\"u}rnberger, G{\"u}nther and R{\"o}ssl, Christian and Seidel, Hans-Peter and Zeilfelder, Frank},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-4-005},
NUMBER = {MPI-I-2003-4-005},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2003},
DATE = {2003},
ABSTRACT = {We consider the linear space of piecewise polynomials in three variables which are globally smooth, i.e., trivariate $C^1$ splines. The splines are defined on a uniform tetrahedral partition $\Delta$, which is a natural generalization of the four-directional mesh. By using Bernstein-B{\'e}zier techniques, we establish formulae for the dimension of the $C^1$ splines of arbitrary degree.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Hangelbroek, Thomas
%A Nürnberger, Günther
%A Rössl, Christian
%A Seidel, Hans-Peter
%A Zeilfelder, Frank
%+ Computer Graphics, MPI for Informatics, Max Planck Society
External Organizations
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T The dimension of $C^1$ splines of arbitrary degree on a tetrahedral partition :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6887-A
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-4-005
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2003
%P 39 p.
%X We consider the linear space of piecewise polynomials in three variables
which are globally smooth, i.e., trivariate $C^1$ splines. The splines are
defined on a uniform tetrahedral partition $\Delta$, which is a natural
generalization of the four-directional mesh. By using Bernstein-Bézier
techniques, we establish formulae for the dimension of the $C^1$ splines
of arbitrary degree.
%B Research Report / Max-Planck-Institut für Informatik
Fast bound consistency for the global cardinality constraint
I. Katriel and S. Thiel
Technical Report, 2003
I. Katriel and S. Thiel
Technical Report, 2003
Abstract
We show an algorithm for bound consistency of {\em global cardinality
constraints}, which runs in time $O(n+n')$ plus the time required to sort the
assignment variables by range endpoints, where $n$ is the number of assignment
variables and $n'$ is the number of values in the union of their ranges. We
thus offer a fast alternative to R\'egin's arc consistency
algorithm~\cite{Regin}, which runs in time $O(n^{3/2}n')$ and space
$O(n\cdot n')$. Our algorithm also achieves bound consistency for the number
of occurrences of each value, which has not been done before.
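The propagation algorithm itself is beyond a short sketch, but the constraint it enforces is simple to state. The following hypothetical Python check captures the semantics of a global cardinality constraint on a complete assignment (this is not the report's bound-consistency propagator; names are ours):

```python
from collections import Counter

def satisfies_gcc(assignment, bounds):
    """Check a complete assignment against a global cardinality
    constraint: for every value v with bounds (lo, hi), the number of
    variables assigned v must lie in [lo, hi]. Values without bounds
    are left unconstrained."""
    counts = Counter(assignment)
    return all(lo <= counts.get(v, 0) <= hi
               for v, (lo, hi) in bounds.items())

# Three variables; value 'a' must occur 1-2 times, 'b' at most once.
bounds = {'a': (1, 2), 'b': (0, 1)}
print(satisfies_gcc(['a', 'a', 'b'], bounds))  # True
print(satisfies_gcc(['b', 'b', 'a'], bounds))  # False: 'b' occurs twice
```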
Export
BibTeX
@techreport{MPI-I-2003-1-013,
TITLE = {Fast bound consistency for the global cardinality constraint},
AUTHOR = {Katriel, Irit and Thiel, Sven},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-1-013},
NUMBER = {MPI-I-2003-1-013},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2003},
DATE = {2003},
ABSTRACT = {We show an algorithm for bound consistency of {\em global cardinality constraints}, which runs in time $O(n+n')$ plus the time required to sort the assignment variables by range endpoints, where $n$ is the number of assignment variables and $n'$ is the number of values in the union of their ranges. We thus offer a fast alternative to R\'egin's arc consistency algorithm~\cite{Regin} which runs in time $O(n^{3/2}n')$ and space $O(n\cdot n')$. Our algorithm also achieves bound consistency for the number of occurrences of each value, which has not been done before.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Katriel, Irit
%A Thiel, Sven
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Fast bound consistency for the global cardinality constraint :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6B1F-D
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-1-013
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2003
%P 30 p.
%X We show an algorithm for bound consistency of {\em global cardinality
constraints}, which runs in time $O(n+n')$ plus the time required to sort the
assignment variables by range endpoints, where $n$ is the number of assignment
variables and $n'$ is the number of values in the union of their ranges. We
thus offer a fast alternative to R\'egin's arc consistency
algorithm~\cite{Regin}, which runs in time $O(n^{3/2}n')$ and space
$O(n\cdot n')$. Our algorithm also achieves bound consistency for the number
of occurrences of each value, which has not been done before.
%B Research Report / Max-Planck-Institut für Informatik
Sum-Multicoloring on paths
A. Kovács
Technical Report, 2003
A. Kovács
Technical Report, 2003
Abstract
The question whether the preemptive Sum Multicoloring (pSMC)
problem is hard on paths was raised by Halldorsson
et al. ["Multi-coloring trees", Information and Computation,
180(2):113-129,2002]. The pSMC problem is a scheduling problem where the
pairwise conflicting jobs are represented by a conflict graph, and the
time lengths of jobs by integer weights on the nodes. The goal is to
schedule the jobs so that the sum of their finishing times is
minimized. In the paper we give an O(n^3p) time algorithm
for the pSMC problem on paths, where n is the number of nodes and p is
the largest time length. The result easily carries over to cycles.
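For illustration, the pSMC objective is easy to evaluate for any candidate preemptive schedule on a path. A hypothetical Python sketch (the report's $O(n^3p)$ optimization algorithm is not reproduced here; representation and names are ours):

```python
def schedule_cost(schedule, lengths, conflicts):
    """Evaluate a preemptive multicoloring schedule. `schedule` maps a
    time step t = 1, 2, ... to the set of jobs run in that step;
    conflicting jobs may never share a step, and each job j must
    receive exactly lengths[j] steps. Returns the pSMC objective:
    the sum of the jobs' finishing times."""
    done = {j: 0 for j in lengths}
    finish = {}
    for t in sorted(schedule):
        running = schedule[t]
        for u, v in conflicts:
            if u in running and v in running:
                raise ValueError(f"jobs {u} and {v} conflict at step {t}")
        for j in running:
            done[j] += 1
            if done[j] == lengths[j]:
                finish[j] = t
    if done != lengths:
        raise ValueError("some job did not receive its full length")
    return sum(finish.values())

# Conflict graph is the path 0-1-2; jobs 0 and 2 may run together.
lengths = {0: 1, 1: 2, 2: 1}
conflicts = [(0, 1), (1, 2)]
schedule = {1: {0, 2}, 2: {1}, 3: {1}}
print(schedule_cost(schedule, lengths, conflicts))  # 1 + 3 + 1 = 5
```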
Export
BibTeX
@techreport{MPI-I-2003-1-015,
TITLE = {Sum-Multicoloring on paths},
AUTHOR = {Kov{\'a}cs, Annamaria},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-1-015},
NUMBER = {MPI-I-2003-1-015},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2003},
DATE = {2003},
ABSTRACT = {The question whether the preemptive Sum Multicoloring (pSMC) problem is hard on paths was raised by Halldorsson et al. ["Multi-coloring trees", Information and Computation, 180(2):113-129,2002]. The pSMC problem is a scheduling problem where the pairwise conflicting jobs are represented by a conflict graph, and the time lengths of jobs by integer weights on the nodes. The goal is to schedule the jobs so that the sum of their finishing times is minimized. In the paper we give an O(n^3p) time algorithm for the pSMC problem on paths, where n is the number of nodes and p is the largest time length. The result easily carries over to cycles.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Kovács, Annamaria
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Sum-Multicoloring on paths :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6B18-C
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-1-015
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2003
%P 20 p.
%X The question whether the preemptive Sum Multicoloring (pSMC)
problem is hard on paths was raised by Halldorsson
et al. ["Multi-coloring trees", Information and Computation,
180(2):113-129,2002]. The pSMC problem is a scheduling problem where the
pairwise conflicting jobs are represented by a conflict graph, and the
time lengths of jobs by integer weights on the nodes. The goal is to
schedule the jobs so that the sum of their finishing times is
minimized. In the paper we give an O(n^3p) time algorithm
for the pSMC problem on paths, where n is the number of nodes and p is
the largest time length. The result easily carries over to cycles.
%B Research Report / Max-Planck-Institut für Informatik
Selfish traffic allocation for server farms
P. Krysta, A. Czumaj and B. Vöcking
Technical Report, 2003
P. Krysta, A. Czumaj and B. Vöcking
Technical Report, 2003
Abstract
We study the price of selfish routing in non-cooperative
networks like the Internet. In particular, we investigate the
price of selfish routing using the coordination ratio and
other (e.g., bicriteria) measures in the recently introduced game
theoretic network model of Koutsoupias and Papadimitriou. We generalize
this model towards general, monotone families of cost functions and
cost functions from queueing theory. A summary of our main results
for general, monotone cost functions is as follows.
Export
BibTeX
@techreport{MPI-I-2003-1-011,
TITLE = {Selfish traffic allocation for server farms},
AUTHOR = {Krysta, Piotr and Czumaj, Artur and V{\"o}cking, Berthold},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-1-011},
NUMBER = {MPI-I-2003-1-011},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2003},
DATE = {2003},
ABSTRACT = {We study the price of selfish routing in non-cooperative networks like the Internet. In particular, we investigate the price of selfish routing using the coordination ratio and other (e.g., bicriteria) measures in the recently introduced game theoretic network model of Koutsoupias and Papadimitriou. We generalize this model towards general, monotone families of cost functions and cost functions from queueing theory. A summary of our main results for general, monotone cost functions is as follows.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Krysta, Piotr
%A Czumaj, Artur
%A Vöcking, Berthold
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Selfish traffic allocation for server farms :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6B33-E
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-1-011
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2003
%P 43 p.
%X We study the price of selfish routing in non-cooperative
networks like the Internet. In particular, we investigate the
price of selfish routing using the coordination ratio and
other (e.g., bicriteria) measures in the recently introduced game
theoretic network model of Koutsoupias and Papadimitriou. We generalize
this model towards general, monotone families of cost functions and
cost functions from queueing theory. A summary of our main results
for general, monotone cost functions is as follows.
%B Research Report / Max-Planck-Institut für Informatik
Scheduling and traffic allocation for tasks with bounded splittability
P. Krysta, P. Sanders and B. Vöcking
Technical Report, 2003
P. Krysta, P. Sanders and B. Vöcking
Technical Report, 2003
Abstract
We investigate variants of the well studied problem of scheduling
tasks on uniformly related machines to minimize the makespan.
In the $k$-splittable scheduling problem each task can be broken into
at most $k \ge 2$ pieces each of which has to be assigned to a different
machine. In the slightly more general SAC problem each task $j$ comes with
its own splittability parameter $k_j$, where we assume $k_j \ge 2$.
These problems are known to be $\npc$-hard and, hence, previous
research mainly focuses on approximation algorithms.
Our motivation to study these scheduling problems is traffic allocation
for server farms based on a variant of the Internet Domain Name Service
(DNS) that uses a stochastic splitting of request streams. Optimal
solutions for the $k$-splittable scheduling problem yield optimal
solutions for this traffic allocation problem. Approximation ratios,
however, do not translate from one problem to the other because of
non-linear latency functions. In fact, we can prove that the traffic
allocation problem with standard latency functions from Queueing Theory
cannot be approximated in polynomial time within any finite factor
because of the extreme behavior of these functions.
Because of the inapproximability, we turn our attention to fixed-parameter
tractable algorithms. Our main result is a polynomial time algorithm
computing an exact solution for the $k$-splittable scheduling problem as
well as the SAC problem for any fixed number of machines.
The running time of our algorithm increases exponentially with the
number of machines but is only linear in the number of tasks.
This result is the first proof that bounded splittability reduces
the complexity of scheduling as the unsplittable scheduling is known
to be $\npc$-hard already for two machines. Furthermore, since our
algorithm solves the scheduling problem exactly, it also solves the
traffic allocation problem that motivated our study.
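The feasibility conditions and the objective of the $k$-splittable problem are easy to state even though computing an optimal split is not. A hypothetical Python sketch (names and representation ours) that validates a split assignment and evaluates its makespan on uniformly related machines:

```python
def makespan(pieces, speeds, k):
    """pieces[j] is a list of (machine, amount) pairs for task j: each
    task may be broken into at most k pieces, each assigned to a
    different machine. Machines are uniformly related: machine i
    processes work at rate speeds[i]. Returns the makespan."""
    load = [0.0] * len(speeds)
    for j, parts in pieces.items():
        machines = [i for i, _ in parts]
        if len(parts) > k or len(set(machines)) != len(machines):
            raise ValueError(f"task {j} violates {k}-splittability")
        for i, amount in parts:
            load[i] += amount
    return max(load[i] / speeds[i] for i in range(len(speeds)))

# One task of size 3 split across two machines of speeds 1 and 2:
pieces = {0: [(0, 1.0), (1, 2.0)]}
print(makespan(pieces, speeds=[1.0, 2.0], k=2))  # both machines finish at 1.0
```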
Export
BibTeX
@techreport{MPI-I-2003-1-002,
TITLE = {Scheduling and traffic allocation for tasks with bounded splittability},
AUTHOR = {Krysta, Piotr and Sanders, Peter and V{\"o}cking, Berthold},
LANGUAGE = {eng},
NUMBER = {MPI-I-2003-1-002},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2003},
DATE = {2003},
ABSTRACT = {We investigate variants of the well studied problem of scheduling tasks on uniformly related machines to minimize the makespan. In the $k$-splittable scheduling problem each task can be broken into at most $k \ge 2$ pieces each of which has to be assigned to a different machine. In the slightly more general SAC problem each task $j$ comes with its own splittability parameter $k_j$, where we assume $k_j \ge 2$. These problems are known to be $\npc$-hard and, hence, previous research mainly focuses on approximation algorithms. Our motivation to study these scheduling problems is traffic allocation for server farms based on a variant of the Internet Domain Name Service (DNS) that uses a stochastic splitting of request streams. Optimal solutions for the $k$-splittable scheduling problem yield optimal solutions for this traffic allocation problem. Approximation ratios, however, do not translate from one problem to the other because of non-linear latency functions. In fact, we can prove that the traffic allocation problem with standard latency functions from Queueing Theory cannot be approximated in polynomial time within any finite factor because of the extreme behavior of these functions. Because of the inapproximability, we turn our attention to fixed-parameter tractable algorithms. Our main result is a polynomial time algorithm computing an exact solution for the $k$-splittable scheduling problem as well as the SAC problem for any fixed number of machines. The running time of our algorithm increases exponentially with the number of machines but is only linear in the number of tasks. This result is the first proof that bounded splittability reduces the complexity of scheduling as the unsplittable scheduling is known to be $\npc$-hard already for two machines. Furthermore, since our algorithm solves the scheduling problem exactly, it also solves the traffic allocation problem that motivated our study.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Krysta, Piotr
%A Sanders, Peter
%A Vöcking, Berthold
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Scheduling and traffic allocation for tasks with bounded splittability :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6BD1-8
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2003
%P 15 p.
%X We investigate variants of the well studied problem of scheduling
tasks on uniformly related machines to minimize the makespan.
In the $k$-splittable scheduling problem each task can be broken into
at most $k \ge 2$ pieces each of which has to be assigned to a different
machine. In the slightly more general SAC problem each task $j$ comes with
its own splittability parameter $k_j$, where we assume $k_j \ge 2$.
These problems are known to be $\npc$-hard and, hence, previous
research mainly focuses on approximation algorithms.
Our motivation to study these scheduling problems is traffic allocation
for server farms based on a variant of the Internet Domain Name Service
(DNS) that uses a stochastic splitting of request streams. Optimal
solutions for the $k$-splittable scheduling problem yield optimal
solutions for this traffic allocation problem. Approximation ratios,
however, do not translate from one problem to the other because of
non-linear latency functions. In fact, we can prove that the traffic
allocation problem with standard latency functions from Queueing Theory
cannot be approximated in polynomial time within any finite factor
because of the extreme behavior of these functions.
Because of the inapproximability, we turn our attention to fixed-parameter
tractable algorithms. Our main result is a polynomial time algorithm
computing an exact solution for the $k$-splittable scheduling problem as
well as the SAC problem for any fixed number of machines.
The running time of our algorithm increases exponentially with the
number of machines but is only linear in the number of tasks.
This result is the first proof that bounded splittability reduces
the complexity of scheduling as the unsplittable scheduling is known
to be $\npc$-hard already for two machines. Furthermore, since our
algorithm solves the scheduling problem exactly, it also solves the
traffic allocation problem that motivated our study.
%B Research Report / Max-Planck-Institut für Informatik
Visualization of volume data with quadratic super splines
C. Rössl, F. Zeilfelder, G. Nürnberger and H.-P. Seidel
Technical Report, 2003
C. Rössl, F. Zeilfelder, G. Nürnberger and H.-P. Seidel
Technical Report, 2003
Abstract
We develop a new approach to reconstruct non-discrete models from gridded
volume samples. As a model, we use quadratic, trivariate super splines on
a uniform tetrahedral partition $\Delta$. The approximating splines are
determined in a natural and completely symmetric way by averaging local
data samples such that appropriate smoothness conditions are automatically
satisfied. On each tetrahedron of $\Delta$, the spline is a polynomial of
total degree two which provides several advantages including the efficient
computation, evaluation and visualization of the model. We apply
Bernstein-Bézier techniques well-known in Computer Aided Geometric
Design to compute and evaluate the trivariate spline and its gradient.
With this approach the volume data can be visualized efficiently, e.g. with
isosurface ray-casting. Along an arbitrary ray the splines are univariate,
piecewise quadratics and thus the exact intersection for a prescribed
isovalue can be easily determined in an analytic and exact way. Our
results confirm the efficiency of the method and demonstrate a high visual
quality for rendered isosurfaces.
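The per-tetrahedron intersection step mentioned in the abstract reduces to solving a univariate quadratic in closed form. A hypothetical Python sketch of that root-finding step (notation is ours, not the report's):

```python
import math

def first_quadratic_crossing(a, b, c, iso, t0, t1, eps=1e-12):
    """Exact first intersection of the univariate quadratic
    s(t) = a*t^2 + b*t + c with the isovalue `iso` on the ray segment
    [t0, t1], or None if the segment does not reach it. This is the
    analytic step performed per tetrahedron during ray casting."""
    A, B, C = a, b, c - iso
    if abs(A) < eps:                 # segment is (near-)linear
        if abs(B) < eps:
            return None
        t = -C / B
        return t if t0 <= t <= t1 else None
    disc = B * B - 4 * A * C
    if disc < 0:
        return None
    r = math.sqrt(disc)
    roots = sorted(((-B - r) / (2 * A), (-B + r) / (2 * A)))
    for t in roots:                  # smallest root inside the segment
        if t0 <= t <= t1:
            return t
    return None

# s(t) = t^2 - 1 crosses isovalue 0 at t = 1 on [0, 2].
print(first_quadratic_crossing(1, 0, -1, iso=0, t0=0, t1=2))
```

Because the restriction of the spline to a ray is piecewise quadratic, this intersection is exact up to floating-point rounding; no iterative root bracketing is needed.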
Export
BibTeX
@techreport{RoesslZeilfelderNurnbergerSeidel2003,
TITLE = {Visualization of volume data with quadratic super splines},
AUTHOR = {R{\"o}ssl, Christian and Zeilfelder, Frank and N{\"u}rnberger, G{\"u}nther and Seidel, Hans-Peter},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2004-4-006},
NUMBER = {MPI-I-2004-4-006},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2003},
DATE = {2003},
ABSTRACT = {We develop a new approach to reconstruct non-discrete models from gridded volume samples. As a model, we use quadratic, trivariate super splines on a uniform tetrahedral partition $\Delta$. The approximating splines are determined in a natural and completely symmetric way by averaging local data samples such that appropriate smoothness conditions are automatically satisfied. On each tetrahedron of $\Delta$, the spline is a polynomial of total degree two which provides several advantages including the efficient computation, evaluation and visualization of the model. We apply Bernstein-B{\'e}zier techniques well-known in Computer Aided Geometric Design to compute and evaluate the trivariate spline and its gradient. With this approach the volume data can be visualized efficiently, e.g. with isosurface ray-casting. Along an arbitrary ray the splines are univariate, piecewise quadratics and thus the exact intersection for a prescribed isovalue can be easily determined in an analytic and exact way. Our results confirm the efficiency of the method and demonstrate a high visual quality for rendered isosurfaces.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Rössl, Christian
%A Zeilfelder, Frank
%A Nürnberger, Günther
%A Seidel, Hans-Peter
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
External Organizations
Computer Graphics, MPI for Informatics, Max Planck Society
%T Visualization of volume data with quadratic super splines :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6AE8-D
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2004-4-006
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2003
%P 15 p.
%X We develop a new approach to reconstruct non-discrete models from gridded
volume samples. As a model, we use quadratic, trivariate super splines on
a uniform tetrahedral partition $\Delta$. The approximating splines are
determined in a natural and completely symmetric way by averaging local
data samples such that appropriate smoothness conditions are automatically
satisfied. On each tetrahedron of $\Delta$, the spline is a polynomial of
total degree two which provides several advantages including the efficient
computation, evaluation and visualization of the model. We apply
Bernstein-Bézier techniques well-known in Computer Aided Geometric
Design to compute and evaluate the trivariate spline and its gradient.
With this approach the volume data can be visualized efficiently, e.g. with
isosurface ray-casting. Along an arbitrary ray the splines are univariate,
piecewise quadratics and thus the exact intersection for a prescribed
isovalue can be easily determined in an analytic and exact way. Our
results confirm the efficiency of the method and demonstrate a high visual
quality for rendered isosurfaces.
%B Research Report
Asynchronous parallel disk sorting
P. Sanders and R. Dementiev
Technical Report, 2003
P. Sanders and R. Dementiev
Technical Report, 2003
Abstract
We develop an algorithm for parallel disk sorting, whose I/O cost
approaches the lower bound and that guarantees almost perfect
overlap between I/O and computation. Previous algorithms have
either suboptimal I/O volume or cannot guarantee that I/O and
computations can always be overlapped. We give an efficient
implementation that can (at least) compete with the best practical
implementations but gives additional performance guarantees.
For the experiments we have configured a state of the art machine
that can sustain full bandwidth I/O with eight disks and is very cost
effective.
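The overlap of I/O and computation that the abstract above guarantees can be illustrated, in much simplified form, by a generic double-buffering pattern. This is an editorial sketch (function names and the queue depth are hypothetical), not the report's sorting algorithm:

```python
import threading
import queue

def overlapped_run(read_block, process_block, nblocks, depth=2):
    """Generic prefetching sketch: a reader thread fills a bounded
    queue while the main thread processes blocks, so I/O and
    computation overlap up to `depth` blocks of lookahead."""
    buf = queue.Queue(maxsize=depth)

    def reader():
        for i in range(nblocks):
            buf.put(read_block(i))  # blocks once `depth` reads are ahead
        buf.put(None)               # sentinel: no more blocks

    threading.Thread(target=reader, daemon=True).start()
    results = []
    while (block := buf.get()) is not None:
        results.append(process_block(block))
    return results
```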
Export
BibTeX
@techreport{MPI-I-2003-1-001,
TITLE = {Asynchronous parallel disk sorting},
AUTHOR = {Sanders, Peter and Dementiev, Roman},
LANGUAGE = {eng},
NUMBER = {MPI-I-2003-1-001},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2003},
DATE = {2003},
ABSTRACT = {We develop an algorithm for parallel disk sorting, whose I/O cost approaches the lower bound and that guarantees almost perfect overlap between I/O and computation. Previous algorithms have either suboptimal I/O volume or cannot guarantee that I/O and computations can always be overlapped. We give an efficient implementation that can (at least) compete with the best practical implementations but gives additional performance guarantees. For the experiments we have configured a state of the art machine that can sustain full bandwidth I/O with eight disks and is very cost effective.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Sanders, Peter
%A Dementiev, Roman
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Asynchronous parallel disk sorting :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6C80-5
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2003
%P 22 p.
%X We develop an algorithm for parallel disk sorting, whose I/O cost
approaches the lower bound and that guarantees almost perfect
overlap between I/O and computation. Previous algorithms have
either suboptimal I/O volume or cannot guarantee that I/O and
computations can always be overlapped. We give an efficient
implementation that can (at least) compete with the best practical
implementations but gives additional performance guarantees.
For the experiments we have configured a state of the art machine
that can sustain full bandwidth I/O with eight disks and is very cost
effective.
%B Research Report / Max-Planck-Institut für Informatik
Polynomial time algorithms for network information flow
P. Sanders
Technical Report, 2003
P. Sanders
Technical Report, 2003
Abstract
The famous max-flow min-cut theorem states that a source node $s$ can
send information through a network $(V,E)$ to a sink node $t$ at a
rate determined by the min-cut separating $s$ and $t$. Recently it
has been shown that this rate can also be achieved for multicasting to
several sinks provided that the intermediate nodes are allowed to
re-encode the information they receive. We give
polynomial-time algorithms for solving this problem. We additionally
underline the potential benefit of coding by showing that multicasting
without coding sometimes only allows a rate that is a factor
$\Omega(\log |V|)$ smaller.
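The max-flow min-cut theorem invoked in the abstract above can be illustrated by computing the unicast rate with the standard Edmonds-Karp procedure. This is an editorial sketch with a hypothetical capacity matrix, not the report's multicast coding algorithm:

```python
from collections import deque

def max_flow(capacity, s, t):
    """Edmonds-Karp: repeatedly augment along shortest residual paths.

    By the max-flow min-cut theorem the returned value equals the
    capacity of a minimum s-t cut, i.e. the achievable unicast rate."""
    n = len(capacity)
    residual = [row[:] for row in capacity]
    flow = 0
    while True:
        # BFS for an augmenting path in the residual network
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and residual[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return flow  # no augmenting path: flow is maximum
        # Find the bottleneck along the path, then augment
        bottleneck = float('inf')
        v = t
        while v != s:
            bottleneck = min(bottleneck, residual[parent[v]][v])
            v = parent[v]
        v = t
        while v != s:
            residual[parent[v]][v] -= bottleneck
            residual[v][parent[v]] += bottleneck
            v = parent[v]
        flow += bottleneck
```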
Export
BibTeX
@techreport{MPI-I-2003-1-008,
TITLE = {Polynomial time algorithms for network information flow},
AUTHOR = {Sanders, Peter},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-1-008},
NUMBER = {MPI-I-2003-1-008},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2003},
DATE = {2003},
ABSTRACT = {The famous max-flow min-cut theorem states that a source node $s$ can send information through a network $(V,E)$ to a sink node $t$ at a rate determined by the min-cut separating $s$ and $t$. Recently it has been shown that this rate can also be achieved for multicasting to several sinks provided that the intermediate nodes are allowed to re-encode the information they receive. We give polynomial-time algorithms for solving this problem. We additionally underline the potential benefit of coding by showing that multicasting without coding sometimes only allows a rate that is a factor $\Omega(\log |V|)$ smaller.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Sanders, Peter
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Polynomial time algorithms for network information flow :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6B4A-B
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-1-008
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2003
%P 18 p.
%X The famous max-flow min-cut theorem states that a source node $s$ can
send information through a network $(V,E)$ to a sink node $t$ at a
rate determined by the min-cut separating $s$ and $t$. Recently it
has been shown that this rate can also be achieved for multicasting to
several sinks provided that the intermediate nodes are allowed to
re-encode the information they receive. We give
polynomial-time algorithms for solving this problem. We additionally
underline the potential benefit of coding by showing that multicasting
without coding sometimes only allows a rate that is a factor
$\Omega(\log |V|)$ smaller.
%B Research Report / Max-Planck-Institut für Informatik
Cross-monotonic cost sharing methods for connected facility location games
G. Schäfer and S. Leonardi
Technical Report, 2003
G. Schäfer and S. Leonardi
Technical Report, 2003
Abstract
We present cost sharing methods for connected facility location
games that are cross-monotonic, competitive, and recover a constant
fraction of the cost of the constructed solution.
The novelty of this paper is that we use randomized algorithms, and
that we share the expected cost among the participating users.
As a consequence, our cost sharing methods are simple, and achieve
attractive approximation ratios for various NP-hard problems.
We also provide a primal-dual cost sharing method for the connected
facility location game with opening costs.
Export
BibTeX
@techreport{MPI-I-2003-1-017,
TITLE = {Cross-monotonic cost sharing methods for connected facility location games},
AUTHOR = {Sch{\"a}fer, Guido and Leonardi, Stefano},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-1-017},
NUMBER = {MPI-I-2003-1-017},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2003},
DATE = {2003},
ABSTRACT = {We present cost sharing methods for connected facility location games that are cross-monotonic, competitive, and recover a constant fraction of the cost of the constructed solution. The novelty of this paper is that we use randomized algorithms, and that we share the expected cost among the participating users. As a consequence, our cost sharing methods are simple, and achieve attractive approximation ratios for various NP-hard problems. We also provide a primal-dual cost sharing method for the connected facility location game with opening costs.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Schäfer, Guido
%A Leonardi, Stefano
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Cross-monotonic cost sharing methods for connected facility location games :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6B12-7
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-1-017
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2003
%P 10 p.
%X We present cost sharing methods for connected facility location
games that are cross-monotonic, competitive, and recover a constant
fraction of the cost of the constructed solution.
The novelty of this paper is that we use randomized algorithms, and
that we share the expected cost among the participating users.
As a consequence, our cost sharing methods are simple, and achieve
attractive approximation ratios for various NP-hard problems.
We also provide a primal-dual cost sharing method for the connected
facility location game with opening costs.
%B Research Report / Max-Planck-Institut für Informatik
Topology matters: smoothed competitive analysis of metrical task systems
G. Schäfer and N. Sivadasan
Technical Report, 2003
G. Schäfer and N. Sivadasan
Technical Report, 2003
Abstract
We consider online problems that can be modeled as \emph{metrical
task systems}:
An online algorithm resides in a graph $G$ of $n$ nodes and may move
in this graph at a cost equal to the distance.
The algorithm has to service a sequence of \emph{tasks} that arrive
online; each task specifies for each node a \emph{request cost} that
is incurred if the algorithm services the task in this particular node.
The objective is to minimize the total request cost plus the total
travel cost.
Several important online problems can be modeled as metrical task
systems.
Borodin, Linial and Saks \cite{BLS92} presented a deterministic
\emph{work function algorithm} (WFA) for metrical task systems
having a tight competitive ratio of $2n-1$.
However, the competitive ratio often is an over-pessimistic
estimation of the true performance of an online algorithm.
In this paper, we present a \emph{smoothed competitive analysis}
of WFA.
Given an adversarial task sequence, we smoothen the request costs
by means of a symmetric additive smoothing model and analyze the
competitive ratio of WFA on the smoothed task sequence.
Our analysis reveals that the smoothed competitive ratio of WFA
is much better than $O(n)$ and that it depends on several
topological parameters of the underlying graph $G$, such as
the minimum edge length $U_{\min}$, the maximum degree $D$,
and the edge diameter $diam$.
Assuming that the ratio between the maximum and the minimum edge length
of $G$ is bounded by a constant, the smoothed competitive ratio of WFA
becomes $O(diam (U_{\min}/\sigma + \log(D)))$ and
$O(\sqrt{n \cdot (U_{\min}/\sigma + \log(D))})$, where
$\sigma$ denotes the standard deviation of the smoothing distribution.
For example, already for perturbations with $\sigma = \Theta(U_{\min})$
the competitive ratio reduces to $O(\log n)$ on a clique and to
$O(\sqrt{n})$ on a line.
We also prove that for a large class of graphs these bounds are
asymptotically tight.
Furthermore, we provide two lower bounds for any arbitrary graph.
We obtain a better bound of
$O(\beta \cdot (U_{\min}/\sigma + \log(D)))$ on
the smoothed competitive ratio of WFA if each adversarial
task contains at most $\beta$ non-zero entries.
Our analysis holds for various probability distributions,
including the uniform and the normal distribution.
We also provide the first average case analysis of WFA.
We prove that WFA has $O(\log(D))$ expected competitive
ratio if the request costs are chosen randomly from an arbitrary
non-increasing distribution with standard deviation.
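The work function at the heart of WFA, analyzed in the abstract above, admits a compact dynamic-programming update: w'(x) = min over y of ( w(y) + r(y) + d(y, x) ). The following editorial sketch (function name hypothetical) shows one such update step:

```python
def wfa_step(w, request, dist):
    """One work-function update for a metrical task system.

    w[y] is the cheapest cost of having serviced all previous tasks and
    ending in node y; request[y] is the cost of serving the new task in
    y; dist is the metric. Returns the updated work function."""
    n = len(w)
    return [min(w[y] + request[y] + dist[y][x] for y in range(n))
            for x in range(n)]
```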
Export
BibTeX
@techreport{MPI-I-2003-1-016,
TITLE = {Topology matters: smoothed competitive analysis of metrical task systems},
AUTHOR = {Sch{\"a}fer, Guido and Sivadasan, Naveen},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-1-016},
NUMBER = {MPI-I-2003-1-016},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2003},
DATE = {2003},
ABSTRACT = {We consider online problems that can be modeled as \emph{metrical task systems}: An online algorithm resides in a graph $G$ of $n$ nodes and may move in this graph at a cost equal to the distance. The algorithm has to service a sequence of \emph{tasks} that arrive online; each task specifies for each node a \emph{request cost} that is incurred if the algorithm services the task in this particular node. The objective is to minimize the total request cost plus the total travel cost. Several important online problems can be modeled as metrical task systems. Borodin, Linial and Saks \cite{BLS92} presented a deterministic \emph{work function algorithm} (WFA) for metrical task systems having a tight competitive ratio of $2n-1$. However, the competitive ratio often is an over-pessimistic estimation of the true performance of an online algorithm. In this paper, we present a \emph{smoothed competitive analysis} of WFA. Given an adversarial task sequence, we smoothen the request costs by means of a symmetric additive smoothing model and analyze the competitive ratio of WFA on the smoothed task sequence. Our analysis reveals that the smoothed competitive ratio of WFA is much better than $O(n)$ and that it depends on several topological parameters of the underlying graph $G$, such as the minimum edge length $U_{\min}$, the maximum degree $D$, and the edge diameter $diam$. Assuming that the ratio between the maximum and the minimum edge length of $G$ is bounded by a constant, the smoothed competitive ratio of WFA becomes $O(diam (U_{\min}/\sigma + \log(D)))$ and $O(\sqrt{n \cdot (U_{\min}/\sigma + \log(D))})$, where $\sigma$ denotes the standard deviation of the smoothing distribution. For example, already for perturbations with $\sigma = \Theta(U_{\min})$ the competitive ratio reduces to $O(\log n)$ on a clique and to $O(\sqrt{n})$ on a line. We also prove that for a large class of graphs these bounds are asymptotically tight. 
Furthermore, we provide two lower bounds for any arbitrary graph. We obtain a better bound of $O(\beta \cdot (U_{\min}/\sigma + \log(D)))$ on the smoothed competitive ratio of WFA if each adversarial task contains at most $\beta$ non-zero entries. Our analysis holds for various probability distributions, including the uniform and the normal distribution. We also provide the first average case analysis of WFA. We prove that WFA has $O(\log(D))$ expected competitive ratio if the request costs are chosen randomly from an arbitrary non-increasing distribution with standard deviation.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Schäfer, Guido
%A Sivadasan, Naveen
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Topology matters: smoothed competitive analysis of metrical task systems :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6B15-1
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-1-016
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2003
%P 28 p.
%X We consider online problems that can be modeled as \emph{metrical
task systems}:
An online algorithm resides in a graph $G$ of $n$ nodes and may move
in this graph at a cost equal to the distance.
The algorithm has to service a sequence of \emph{tasks} that arrive
online; each task specifies for each node a \emph{request cost} that
is incurred if the algorithm services the task in this particular node.
The objective is to minimize the total request cost plus the total
travel cost.
Several important online problems can be modeled as metrical task
systems.
Borodin, Linial and Saks \cite{BLS92} presented a deterministic
\emph{work function algorithm} (WFA) for metrical task systems
having a tight competitive ratio of $2n-1$.
However, the competitive ratio often is an over-pessimistic
estimation of the true performance of an online algorithm.
In this paper, we present a \emph{smoothed competitive analysis}
of WFA.
Given an adversarial task sequence, we smoothen the request costs
by means of a symmetric additive smoothing model and analyze the
competitive ratio of WFA on the smoothed task sequence.
Our analysis reveals that the smoothed competitive ratio of WFA
is much better than $O(n)$ and that it depends on several
topological parameters of the underlying graph $G$, such as
the minimum edge length $U_{\min}$, the maximum degree $D$,
and the edge diameter $diam$.
Assuming that the ratio between the maximum and the minimum edge length
of $G$ is bounded by a constant, the smoothed competitive ratio of WFA
becomes $O(diam (U_{\min}/\sigma + \log(D)))$ and
$O(\sqrt{n \cdot (U_{\min}/\sigma + \log(D))})$, where
$\sigma$ denotes the standard deviation of the smoothing distribution.
For example, already for perturbations with $\sigma = \Theta(U_{\min})$
the competitive ratio reduces to $O(\log n)$ on a clique and to
$O(\sqrt{n})$ on a line.
We also prove that for a large class of graphs these bounds are
asymptotically tight.
Furthermore, we provide two lower bounds for any arbitrary graph.
We obtain a better bound of
$O(\beta \cdot (U_{\min}/\sigma + \log(D)))$ on
the smoothed competitive ratio of WFA if each adversarial
task contains at most $\beta$ non-zero entries.
Our analysis holds for various probability distributions,
including the uniform and the normal distribution.
We also provide the first average case analysis of WFA.
We prove that WFA has $O(\log(D))$ expected competitive
ratio if the request costs are chosen randomly from an arbitrary
non-increasing distribution with standard deviation.
%B Research Report / Max-Planck-Institut für Informatik
A note on the smoothed complexity of the single-source shortest path problem
G. Schäfer
Technical Report, 2003
G. Schäfer
Technical Report, 2003
Abstract
Banderier, Beier and Mehlhorn showed that the single-source shortest
path problem has smoothed complexity $O(m+n(K-k))$ if the edge costs are
$K$-bit integers and the last $k$ least significant bits are perturbed
randomly. Their analysis holds if each bit is set to $0$ or $1$ with
probability $\frac{1}{2}$.
We extend their result and show that the same analysis goes through for
a large class of probability distributions:
We prove a smoothed complexity of $O(m+n(K-k))$ if the last $k$ bits of
each edge cost are replaced by some random number chosen from $[0,
\dots, 2^k-1]$ according to some \emph{arbitrary} probability
distribution $\mathcal{D}$ whose expectation is not too close to zero.
We do not require that the edge costs are perturbed independently.
The same time bound holds even if the random perturbations are
heterogeneous.
If $k=K$ our analysis implies a linear average case running time for
various probability distributions.
We also show that the running time is $O(m+n(K-k))$ with high
probability if the random replacements are chosen independently.
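The partial bit randomization model described in the abstract above can be sketched as follows. This editorial illustration uses a uniform replacement distribution, whereas the report allows an arbitrary distribution whose expectation is not too close to zero; the function name is hypothetical:

```python
import random

def perturb_costs(costs, K, k, rng=random.Random(0)):
    """Partial bit randomization: keep the K-k most significant bits of
    each K-bit integer edge cost and replace the k least significant
    bits by a random value from [0, 2**k - 1] (uniform here for
    simplicity)."""
    mask = ~((1 << k) - 1)  # clears the k low bits
    return [(c & mask) | rng.randrange(1 << k) for c in costs]
```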
Export
BibTeX
@techreport{MPI-I-2003-1-018,
TITLE = {A note on the smoothed complexity of the single-source shortest path problem},
AUTHOR = {Sch{\"a}fer, Guido},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-1-018},
NUMBER = {MPI-I-2003-1-018},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2003},
DATE = {2003},
ABSTRACT = {Banderier, Beier and Mehlhorn showed that the single-source shortest path problem has smoothed complexity $O(m+n(K-k))$ if the edge costs are $K$-bit integers and the last $k$ least significant bits are perturbed randomly. Their analysis holds if each bit is set to $0$ or $1$ with probability $\frac{1}{2}$. We extend their result and show that the same analysis goes through for a large class of probability distributions: We prove a smoothed complexity of $O(m+n(K-k))$ if the last $k$ bits of each edge cost are replaced by some random number chosen from $[0, \dots, 2^k-1]$ according to some \emph{arbitrary} probability distribution $\mathcal{D}$ whose expectation is not too close to zero. We do not require that the edge costs are perturbed independently. The same time bound holds even if the random perturbations are heterogeneous. If $k=K$ our analysis implies a linear average case running time for various probability distributions. We also show that the running time is $O(m+n(K-k))$ with high probability if the random replacements are chosen independently.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Schäfer, Guido
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T A note on the smoothed complexity of the single-source shortest path problem :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6B0D-6
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-1-018
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2003
%P 8 p.
%X Banderier, Beier and Mehlhorn showed that the single-source shortest
path problem has smoothed complexity $O(m+n(K-k))$ if the edge costs are
$K$-bit integers and the last $k$ least significant bits are perturbed
randomly. Their analysis holds if each bit is set to $0$ or $1$ with
probability $\frac{1}{2}$.
We extend their result and show that the same analysis goes through for
a large class of probability distributions:
We prove a smoothed complexity of $O(m+n(K-k))$ if the last $k$ bits of
each edge cost are replaced by some random number chosen from $[0,
\dots, 2^k-1]$ according to some \emph{arbitrary} probability
distribution $\mathcal{D}$ whose expectation is not too close to zero.
We do not require that the edge costs are perturbed independently.
The same time bound holds even if the random perturbations are
heterogeneous.
If $k=K$ our analysis implies a linear average case running time for
various probability distributions.
We also show that the running time is $O(m+n(K-k))$ with high
probability if the random replacements are chosen independently.
%B Research Report / Max-Planck-Institut für Informatik
Average case and smoothed competitive analysis of the multi-level feedback algorithm
G. Schäfer, L. Becchetti, S. Leonardi, A. Marchetti-Spaccamela and T. Vredeveld
Technical Report, 2003
G. Schäfer, L. Becchetti, S. Leonardi, A. Marchetti-Spaccamela and T. Vredeveld
Technical Report, 2003
Abstract
In this paper we introduce the notion of smoothed competitive analysis
of online algorithms. Smoothed analysis has been proposed by Spielman
and Teng [\emph{Smoothed analysis of algorithms: Why the simplex
algorithm usually takes polynomial time}, STOC, 2001] to explain the behaviour
of algorithms that work well in practice while performing very poorly
from a worst case analysis point of view.
We apply this notion to analyze the Multi-Level Feedback (MLF)
algorithm to minimize the total flow time on a sequence of jobs
released over time when the processing time of a job is only known at time of
completion.
The initial processing times are integers in the range $[1,2^K]$.
We use a partial bit randomization model, where the initial processing
times are smoothened by changing the $k$ least significant bits under
a quite general class of probability distributions.
We show that MLF admits a smoothed competitive ratio of
$O((2^k/\sigma)^3 + (2^k/\sigma)^2 2^{K-k})$, where $\sigma$ denotes
the standard deviation of the distribution.
In particular, we obtain a competitive ratio of $O(2^{K-k})$ if
$\sigma = \Theta(2^k)$.
We also prove an $\Omega(2^{K-k})$ lower bound for any deterministic
algorithm that is run on processing times smoothened according to the
partial bit randomization model.
For various other smoothening models, including the additive symmetric
smoothening model used by Spielman and Teng, we give a higher lower
bound of $\Omega(2^K)$.
A direct consequence of our result is also the first average case
analysis of MLF. We show a constant expected ratio of the total flow time of
MLF to the optimum under several distributions including the uniform
distribution.
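The Multi-Level Feedback policy analyzed in the abstract above can be sketched for the special case of jobs all released at time 0: a job at level i runs for a quantum of 2**i and, if unfinished, is demoted one level. This editorial toy simulation (names hypothetical) omits the online release dates and the range parameter K that the report handles:

```python
from collections import deque

def mlf_total_flow_time(proc):
    """Toy single-machine MLF, all jobs released at time 0.

    Returns the total flow time (here: sum of completion times)."""
    queues = [deque((p, j) for j, p in enumerate(proc))]
    t, total, done, level = 0, 0, 0, 0
    while done < len(proc) and level < len(queues):
        q = queues[level]
        if not q:
            level += 1  # lowest level drained; move up
            continue
        rem, j = q.popleft()
        run = min(rem, 2 ** level)  # quantum of level i is 2**i
        t += run
        rem -= run
        if rem == 0:
            total += t  # job j completes at time t
            done += 1
        else:
            if level + 1 >= len(queues):
                queues.append(deque())
            queues[level + 1].append((rem, j))  # demote unfinished job
    return total
```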
Export
BibTeX
@techreport{MPI-I-2003-1-014,
TITLE = {Average case and smoothed competitive analysis of the multi-level feedback algorithm},
AUTHOR = {Sch{\"a}fer, Guido and Becchetti, Luca and Leonardi, Stefano and Marchetti-Spaccamela, Alberto and Vredeveld, Tjark},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-1-014},
NUMBER = {MPI-I-2003-1-014},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2003},
DATE = {2003},
ABSTRACT = {In this paper we introduce the notion of smoothed competitive analysis of online algorithms. Smoothed analysis has been proposed by Spielman and Teng [\emph{Smoothed analysis of algorithms: Why the simplex algorithm usually takes polynomial time}, STOC, 2001] to explain the behaviour of algorithms that work well in practice while performing very poorly from a worst case analysis point of view. We apply this notion to analyze the Multi-Level Feedback (MLF) algorithm to minimize the total flow time on a sequence of jobs released over time when the processing time of a job is only known at time of completion. The initial processing times are integers in the range $[1,2^K]$. We use a partial bit randomization model, where the initial processing times are smoothened by changing the $k$ least significant bits under a quite general class of probability distributions. We show that MLF admits a smoothed competitive ratio of $O((2^k/\sigma)^3 + (2^k/\sigma)^2 2^{K-k})$, where $\sigma$ denotes the standard deviation of the distribution. In particular, we obtain a competitive ratio of $O(2^{K-k})$ if $\sigma = \Theta(2^k)$. We also prove an $\Omega(2^{K-k})$ lower bound for any deterministic algorithm that is run on processing times smoothened according to the partial bit randomization model. For various other smoothening models, including the additive symmetric smoothening model used by Spielman and Teng, we give a higher lower bound of $\Omega(2^K)$. A direct consequence of our result is also the first average case analysis of MLF. We show a constant expected ratio of the total flow time of MLF to the optimum under several distributions including the uniform distribution.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Schäfer, Guido
%A Becchetti, Luca
%A Leonardi, Stefano
%A Marchetti-Spaccamela, Alberto
%A Vredeveld, Tjark
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
External Organizations
Algorithms and Complexity, MPI for Informatics, Max Planck Society
External Organizations
External Organizations
%T Average case and smoothed competitive analysis of the multi-level feedback algorithm :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6B1C-4
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-1-014
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2003
%P 31 p.
%X In this paper we introduce the notion of smoothed competitive analysis
of online algorithms. Smoothed analysis has been proposed by Spielman
and Teng [\emph{Smoothed analysis of algorithms: Why the simplex
algorithm usually takes polynomial time}, STOC, 2001] to explain the behaviour
of algorithms that work well in practice while performing very poorly
from a worst case analysis point of view.
We apply this notion to analyze the Multi-Level Feedback (MLF)
algorithm to minimize the total flow time on a sequence of jobs
released over time when the processing time of a job is only known at time of
completion.
The initial processing times are integers in the range $[1,2^K]$.
We use a partial bit randomization model, where the initial processing
times are smoothened by changing the $k$ least significant bits under
a quite general class of probability distributions.
We show that MLF admits a smoothed competitive ratio of
$O((2^k/\sigma)^3 + (2^k/\sigma)^2 2^{K-k})$, where $\sigma$ denotes
the standard deviation of the distribution.
In particular, we obtain a competitive ratio of $O(2^{K-k})$ if
$\sigma = \Theta(2^k)$.
We also prove an $\Omega(2^{K-k})$ lower bound for any deterministic
algorithm that is run on processing times smoothened according to the
partial bit randomization model.
For various other smoothening models, including the additive symmetric
smoothening model used by Spielman and Teng, we give a higher lower
bound of $\Omega(2^K)$.
A direct consequence of our result is also the first average case
analysis of MLF. We show a constant expected ratio of the total flow time of
MLF to the optimum under several distributions including the uniform
distribution.
%B Research Report / Max-Planck-Institut für Informatik
The Diamond Operator for Real Algebraic Numbers
S. Schmitt
Technical Report, 2003
S. Schmitt
Technical Report, 2003
Abstract
Real algebraic numbers are real roots of polynomials with integral
coefficients. They can be represented as expressions whose
leaves are integers and whose internal nodes are additions, subtractions,
multiplications, divisions, k-th root operations for integral k,
or taking roots of polynomials whose coefficients are given by the value
of subexpressions. This last operator is called the diamond operator.
I explain the implementation of the diamond operator in a LEDA extension
package.
Export
BibTeX
@techreport{s-doran-03,
TITLE = {The Diamond Operator for Real Algebraic Numbers},
AUTHOR = {Schmitt, Susanne},
LANGUAGE = {eng},
NUMBER = {ECG-TR-243107-01},
INSTITUTION = {Effective Computational Geometry for Curves and Surfaces},
ADDRESS = {Sophia Antipolis, FRANCE},
YEAR = {2003},
DATE = {2003},
ABSTRACT = {Real algebraic numbers are real roots of polynomials with integral coefficients. They can be represented as expressions whose leaves are integers and whose internal nodes are additions, subtractions, multiplications, divisions, k-th root operations for integral k, or taking roots of polynomials whose coefficients are given by the value of subexpressions. This last operator is called the diamond operator. I explain the implementation of the diamond operator in a LEDA extension package.},
}
Endnote
%0 Report
%A Schmitt, Susanne
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T The Diamond Operator for Real Algebraic Numbers :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0019-EBB1-B
%Y Effective Computational Geometry for Curves and Surfaces
%C Sophia Antipolis, FRANCE
%D 2003
%X Real algebraic numbers are real roots of polynomials with integral
coefficients. They can be represented as expressions whose
leaves are integers and whose internal nodes are additions, subtractions,
multiplications, divisions, k-th root operations for integral k,
or taking roots of polynomials whose coefficients are given by the value
of subexpressions. This last operator is called the diamond operator.
I explain the implementation of the diamond operator in a LEDA extension
package.
A linear time heuristic for the branch-decomposition of planar graphs
H. Tamaki
Technical Report, 2003a
H. Tamaki
Technical Report, 2003a
Abstract
Let $G$ be a biconnected planar graph given together with its planar drawing.
A {\em face-vertex walk} in $G$ of length $k$
is an alternating sequence $x_0, \ldots x_k$ of
vertices and faces (i.e., if $x_{i - 1}$ is a face then $x_i$ is
a vertex and vice versa) such that $x_{i - 1}$ and $x_i$ are incident
with each other for $1 \leq i \leq k$.
For each vertex or face $x$ of $G$, let $\alpha_x$ denote
the length of the shortest face-vertex walk from the outer face of $G$ to $x$.
Let $\alpha_G$ denote the maximum of $\alpha_x$ over all vertices/faces $x$.
We show that there always exists a branch-decomposition of $G$ with width
$\alpha_G$ and that such a decomposition
can be constructed in linear time. We also give experimental results,
in which we compare the width of our decomposition with the optimal
width and with the width obtained by some heuristics for general
graphs proposed by previous researchers, on test instances used
by those researchers.
On 56 out of the total 59 test instances, our
method gives a decomposition within additive 2 of the optimum width and
on 33 instances it achieves the optimum width.
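The quantity alpha_G from the abstract above is a shortest-path computation in the radial (face-vertex incidence) graph. The following editorial sketch (the adjacency encoding is hypothetical) computes all alpha_x by breadth-first search from the outer face:

```python
from collections import deque

def alpha_values(incidence, outer_face):
    """BFS in the radial graph of a plane graph.

    `incidence` maps each face or vertex x to the vertices/faces
    incident with it; the returned dict gives alpha_x, the length of
    the shortest face-vertex walk from the outer face to x. The width
    bound alpha_G is then max(alpha.values())."""
    alpha = {outer_face: 0}
    q = deque([outer_face])
    while q:
        x = q.popleft()
        for y in incidence[x]:
            if y not in alpha:
                alpha[y] = alpha[x] + 1
                q.append(y)
    return alpha
```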
Export
BibTeX
@techreport{MPI-I-2003-1-010,
TITLE = {A linear time heuristic for the branch-decomposition of planar graphs},
AUTHOR = {Tamaki, Hisao},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-1-010},
NUMBER = {MPI-I-2003-1-010},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2003},
DATE = {2003},
ABSTRACT = {Let $G$ be a biconnected planar graph given together with its planar drawing. A {\em face-vertex walk} in $G$ of length $k$ is an alternating sequence $x_0, \ldots x_k$ of vertices and faces (i.e., if $x_{i - 1}$ is a face then $x_i$ is a vertex and vice versa) such that $x_{i - 1}$ and $x_i$ are incident with each other for $1 \leq i \leq k$. For each vertex or face $x$ of $G$, let $\alpha_x$ denote the length of the shortest face-vertex walk from the outer face of $G$ to $x$. Let $\alpha_G$ denote the maximum of $\alpha_x$ over all vertices/faces $x$. We show that there always exists a branch-decomposition of $G$ with width $\alpha_G$ and that such a decomposition can be constructed in linear time. We also give experimental results, in which we compare the width of our decomposition with the optimal width and with the width obtained by some heuristics for general graphs proposed by previous researchers, on test instances used by those researchers. On 56 out of the total 59 test instances, our method gives a decomposition within additive 2 of the optimum width and on 33 instances it achieves the optimum width.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Tamaki, Hisao
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T A linear time heuristic for the branch-decomposition of planar graphs :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6B37-6
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-1-010
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2003
%P 18 p.
%X Let $G$ be a biconnected planar graph given together with its planar drawing.
A {\em face-vertex walk} in $G$ of length $k$
is an alternating sequence $x_0, \ldots x_k$ of
vertices and faces (i.e., if $x_{i - 1}$ is a face then $x_i$ is
a vertex and vice versa) such that $x_{i - 1}$ and $x_i$ are incident
with each other for $1 \leq i \leq k$.
For each vertex or face $x$ of $G$, let $\alpha_x$ denote
the length of the shortest face-vertex walk from the outer face of $G$ to $x$.
Let $\alpha_G$ denote the maximum of $\alpha_x$ over all vertices/faces $x$.
We show that there always exists a branch-decomposition of $G$ with width
$\alpha_G$ and that such a decomposition
can be constructed in linear time. We also give experimental results,
in which we compare the width of our decomposition with the optimal
width and with the width obtained by some heuristics for general
graphs proposed by previous researchers, on test instances used
by those researchers.
On 56 out of the total 59 test instances, our
method gives a decomposition within additive 2 of the optimum width and
on 33 instances it achieves the optimum width.
%B Research Report / Max-Planck-Institut für Informatik
Alternating cycles contribution: a strategy of tour-merging for the traveling salesman problem
H. Tamaki
Technical Report, 2003b
H. Tamaki
Technical Report, 2003b
Abstract
A strategy of merging several traveling salesman tours
into a better tour, called ACC (Alternating Cycles Contribution),
is introduced. Two algorithms embodying this strategy for
geometric instances are
implemented and used to enhance Helsgaun's implementation
of his variant of the Lin-Kernighan heuristic. Experiments
on the large instances in TSPLIB show that a significant
improvement of performance is obtained.
These algorithms were used in September 2002 to find a
new best tour for the largest instance pla85900
in TSPLIB.
Export
BibTeX
@techreport{Tamaki2003b,
TITLE = {Alternating cycles contribution: a strategy of tour-merging for the traveling salesman problem},
AUTHOR = {Tamaki, Hisao},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-1-007},
NUMBER = {MPI-I-2003-1-007},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2003},
DATE = {2003},
ABSTRACT = {A strategy of merging several traveling salesman tours into a better tour, called ACC (Alternating Cycles Contribution), is introduced. Two algorithms embodying this strategy for geometric instances are implemented and used to enhance Helsgaun's implementation of his variant of the Lin-Kernighan heuristic. Experiments on the large instances in TSPLIB show that a significant improvement of performance is obtained. These algorithms were used in September 2002 to find a new best tour for the largest instance pla85900 in TSPLIB.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Tamaki, Hisao
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Alternating cycles contribution: a strategy of tour-merging for the traveling salesman problem :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6B66-B
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-1-007
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2003
%P 22 p.
%X A strategy of merging several traveling salesman tours
into a better tour, called ACC (Alternating Cycles Contribution),
is introduced. Two algorithms embodying this strategy for
geometric instances are
implemented and used to enhance Helsgaun's implementation
of his variant of the Lin-Kernighan heuristic. Experiments
on the large instances in TSPLIB show that a significant
improvement of performance is obtained.
These algorithms were used in September 2002 to find a
new best tour for the largest instance pla85900
in TSPLIB.
%B Research Report / Max-Planck-Institut für Informatik
3D acquisition of mirroring objects
M. Tarini, H. P. A. Lensch, M. Gösele and H.-P. Seidel
Technical Report, 2003
M. Tarini, H. P. A. Lensch, M. Gösele and H.-P. Seidel
Technical Report, 2003
Abstract
Objects with mirroring optical characteristics are left out of the
scope of most 3D scanning methods. We present here a new automatic
acquisition approach, shape-from-distortion, that focuses on that
category of objects, requires only a still camera and a color
monitor, and produces range scans (plus a normal and a reflectance
map) of the target.
Our technique consists of two steps: first, an improved
environment matte is captured for the mirroring object, using the
interference of patterns with different frequencies in order to
obtain sub-pixel accuracy. Then, the matte is converted into a
normal and a depth map by exploiting the self-coherence of a
surface when integrating the normal map along different paths.
The results show very high accuracy, capturing even the smallest
surface details. The acquired depth maps can be further processed
using standard techniques to produce a complete 3D mesh of the
object.
Export
BibTeX
@techreport{TariniLenschGoeseleSeidel2003,
TITLE = {{3D} acquisition of mirroring objects},
AUTHOR = {Tarini, Marco and Lensch, Hendrik P. A. and G{\"o}sele, Michael and Seidel, Hans-Peter},
LANGUAGE = {eng},
NUMBER = {MPI-I-2003-4-001},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2003},
DATE = {2003},
ABSTRACT = {Objects with mirroring optical characteristics are left out of the scope of most 3D scanning methods. We present here a new automatic acquisition approach, shape-from-distortion, that focuses on that category of objects, requires only a still camera and a color monitor, and produces range scans (plus a normal and a reflectance map) of the target. Our technique consists of two steps: first, an improved environment matte is captured for the mirroring object, using the interference of patterns with different frequencies in order to obtain sub-pixel accuracy. Then, the matte is converted into a normal and a depth map by exploiting the self-coherence of a surface when integrating the normal map along different paths. The results show very high accuracy, capturing even the smallest surface details. The acquired depth maps can be further processed using standard techniques to produce a complete 3D mesh of the object.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Tarini, Marco
%A Lensch, Hendrik P. A.
%A Gösele, Michael
%A Seidel, Hans-Peter
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T 3D acquisition of mirroring objects :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6AF5-F
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2003
%P 37 p.
%X Objects with mirroring optical characteristics are left out of the
scope of most 3D scanning methods. We present here a new automatic
acquisition approach, shape-from-distortion, that focuses on that
category of objects, requires only a still camera and a color
monitor, and produces range scans (plus a normal and a reflectance
map) of the target.
Our technique consists of two steps: first, an improved
environment matte is captured for the mirroring object, using the
interference of patterns with different frequencies in order to
obtain sub-pixel accuracy. Then, the matte is converted into a
normal and a depth map by exploiting the self-coherence of a
surface when integrating the normal map along different paths.
The results show very high accuracy, capturing even the smallest
surface details. The acquired depth maps can be further processed
using standard techniques to produce a complete 3D mesh of the
object.
%B Research Report / Max-Planck-Institut für Informatik
A flexible and versatile studio for synchronized multi-view video recording
C. Theobalt, M. Li, M. A. Magnor and H.-P. Seidel
Technical Report, 2003
C. Theobalt, M. Li, M. A. Magnor and H.-P. Seidel
Technical Report, 2003
Abstract
In recent years, the convergence of Computer Vision and Computer Graphics has put forth
new research areas that work on scene reconstruction from and analysis of multi-view video
footage. In free-viewpoint video, for example, new views of a scene are generated from an arbitrary viewpoint
in real-time from a set of real multi-view input video streams.
The analysis of real-world scenes from multi-view video
to extract motion information or reflection models is another field of research that
greatly benefits from high-quality input data.
Building a recording setup for multi-view video involves a great effort on the hardware
as well as the software side. The amount of image data to be processed is huge,
a decent lighting and camera setup is essential for a naturalistic scene appearance and
robust background subtraction, and the computing infrastructure has to enable
real-time processing of the recorded material.
This paper describes the recording setup for multi-view video acquisition that enables the
synchronized recording
of dynamic scenes from multiple camera positions under controlled conditions. The requirements
for the room and their implementation in the separate components of the studio are described in detail.
The efficiency and flexibility of the studio are demonstrated by the results
that we obtain with a real-time 3D scene reconstruction system, a system for non-intrusive optical
motion capture and a model-based free-viewpoint video system for human actors.
Export
BibTeX
@techreport{TheobaltMingMagnorSeidel2003,
TITLE = {A flexible and versatile studio for synchronized multi-view video recording},
AUTHOR = {Theobalt, Christian and Li, Ming and Magnor, Marcus A. and Seidel, Hans-Peter},
LANGUAGE = {eng},
NUMBER = {MPI-I-2003-4-002},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2003},
DATE = {2003},
ABSTRACT = {In recent years, the convergence of Computer Vision and Computer Graphics has put forth new research areas that work on scene reconstruction from and analysis of multi-view video footage. In free-viewpoint video, for example, new views of a scene are generated from an arbitrary viewpoint in real-time from a set of real multi-view input video streams. The analysis of real-world scenes from multi-view video to extract motion information or reflection models is another field of research that greatly benefits from high-quality input data. Building a recording setup for multi-view video involves a great effort on the hardware as well as the software side. The amount of image data to be processed is huge, a decent lighting and camera setup is essential for a naturalistic scene appearance and robust background subtraction, and the computing infrastructure has to enable real-time processing of the recorded material. This paper describes the recording setup for multi-view video acquisition that enables the synchronized recording of dynamic scenes from multiple camera positions under controlled conditions. The requirements for the room and their implementation in the separate components of the studio are described in detail. The efficiency and flexibility of the studio are demonstrated by the results that we obtain with a real-time 3D scene reconstruction system, a system for non-intrusive optical motion capture and a model-based free-viewpoint video system for human actors.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Theobalt, Christian
%A Li, Ming
%A Magnor, Marcus A.
%A Seidel, Hans-Peter
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Graphics - Optics - Vision, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T A flexible and versatile studio for synchronized multi-view video recording :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6AF2-6
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2003
%P 18 p.
%X In recent years, the convergence of Computer Vision and Computer Graphics has put forth
new research areas that work on scene reconstruction from and analysis of multi-view video
footage. In free-viewpoint video, for example, new views of a scene are generated from an arbitrary viewpoint
in real-time from a set of real multi-view input video streams.
The analysis of real-world scenes from multi-view video
to extract motion information or reflection models is another field of research that
greatly benefits from high-quality input data.
Building a recording setup for multi-view video involves a great effort on the hardware
as well as the software side. The amount of image data to be processed is huge,
a decent lighting and camera setup is essential for a naturalistic scene appearance and
robust background subtraction, and the computing infrastructure has to enable
real-time processing of the recorded material.
This paper describes the recording setup for multi-view video acquisition that enables the
synchronized recording
of dynamic scenes from multiple camera positions under controlled conditions. The requirements
for the room and their implementation in the separate components of the studio are described in detail.
The efficiency and flexibility of the studio are demonstrated by the results
that we obtain with a real-time 3D scene reconstruction system, a system for non-intrusive optical
motion capture and a model-based free-viewpoint video system for human actors.
%B Research Report / Max-Planck-Institut für Informatik
FaceSketch: an interface for sketching and coloring cartoon faces
N. Zakaria
Technical Report, 2003
N. Zakaria
Technical Report, 2003
Abstract
We discuss FaceSketch, an interface for 2D facial human-like cartoon
sketching. The basic paradigm in FaceSketch is to offer a 2D interaction
style and feel while employing 3D techniques to facilitate various
tasks involved in drawing and redrawing faces from different views.
The system works by accepting freeform strokes denoting head, eyes, nose,
and other facial features, constructing an internal 3D model that
conforms to the input silhouettes, and redisplaying the result in
simple sketchy style from any user-specified viewing direction. In a
manner similar to the conventional 2D drawing process, the displayed shape may
be changed by oversketching silhouettes, and hatches and strokes may be
added within its boundary. Implementation-wise, we demonstrate the
feasibility of using simple point primitives as a fundamental 3D
modeling primitive in a sketch-based system. We discuss relatively
simple but robust and efficient point-based algorithms for shape
inflation, modification and display in 3D view. We discuss the
feasibility of our ideas using a number of example interactions and
facial sketches.
Export
BibTeX
@techreport{Zakaria2003,
TITLE = {{FaceSketch}: an interface for sketching and coloring cartoon faces},
AUTHOR = {Zakaria, Nordin},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-4-009},
NUMBER = {MPI-I-2003-4-009},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2003},
DATE = {2003},
ABSTRACT = {We discuss FaceSketch, an interface for 2D facial human-like cartoon sketching. The basic paradigm in FaceSketch is to offer a 2D interaction style and feel while employing 3D techniques to facilitate various tasks involved in drawing and redrawing faces from different views. The system works by accepting freeform strokes denoting head, eyes, nose, and other facial features, constructing an internal 3D model that conforms to the input silhouettes, and redisplaying the result in simple sketchy style from any user-specified viewing direction. In a manner similar to the conventional 2D drawing process, the displayed shape may be changed by oversketching silhouettes, and hatches and strokes may be added within its boundary. Implementation-wise, we demonstrate the feasibility of using simple point primitives as a fundamental 3D modeling primitive in a sketch-based system. We discuss relatively simple but robust and efficient point-based algorithms for shape inflation, modification and display in 3D view. We discuss the feasibility of our ideas using a number of example interactions and facial sketches.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Zakaria, Nordin
%+ Computer Graphics, MPI for Informatics, Max Planck Society
%T FaceSketch: an interface for sketching and coloring cartoon faces :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-686B-A
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-4-009
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2003
%P 30 p.
%X We discuss FaceSketch, an interface for 2D facial human-like cartoon
sketching. The basic paradigm in FaceSketch is to offer a 2D interaction
style and feel while employing 3D techniques to facilitate various
tasks involved in drawing and redrawing faces from different views.
The system works by accepting freeform strokes denoting head, eyes, nose,
and other facial features, constructing an internal 3D model that
conforms to the input silhouettes, and redisplaying the result in
simple sketchy style from any user-specified viewing direction. In a
manner similar to the conventional 2D drawing process, the displayed shape may
be changed by oversketching silhouettes, and hatches and strokes may be
added within its boundary. Implementation-wise, we demonstrate the
feasibility of using simple point primitives as a fundamental 3D
modeling primitive in a sketch-based system. We discuss relatively
simple but robust and efficient point-based algorithms for shape
inflation, modification and display in 3D view. We discuss the
feasibility of our ideas using a number of example interactions and
facial sketches.
%B Research Report / Max-Planck-Institut für Informatik
Convex boundary angle based flattening
R. Zayer, C. Rössl and H.-P. Seidel
Technical Report, 2003
R. Zayer, C. Rössl and H.-P. Seidel
Technical Report, 2003
Abstract
Angle Based Flattening is a robust parameterization method that finds a
quasi-conformal mapping by solving a non-linear optimization problem. We
take advantage of a characterization of convex planar drawings of
triconnected graphs to introduce new boundary constraints. This prevents
boundary intersections and avoids post-processing of the parameterized
mesh. We present a simple transformation to effectively relax the
constrained minimization problem, which improves the convergence of the
optimization method. As a natural extension, we discuss the construction
of Delaunay flat meshes. This may further enhance the quality of the
resulting parameterization.
Export
BibTeX
@techreport{ZayerRoesslSeidel2003,
TITLE = {Convex boundary angle based flattening},
AUTHOR = {Zayer, Rhaleb and R{\"o}ssl, Christian and Seidel, Hans-Peter},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-4-003},
NUMBER = {MPI-I-2003-4-003},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2003},
DATE = {2003},
ABSTRACT = {Angle Based Flattening is a robust parameterization method that finds a quasi-conformal mapping by solving a non-linear optimization problem. We take advantage of a characterization of convex planar drawings of triconnected graphs to introduce new boundary constraints. This prevents boundary intersections and avoids post-processing of the parameterized mesh. We present a simple transformation to effectively relax the constrained minimization problem, which improves the convergence of the optimization method. As a natural extension, we discuss the construction of Delaunay flat meshes. This may further enhance the quality of the resulting parameterization.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Zayer, Rhaleb
%A Rössl, Christian
%A Seidel, Hans-Peter
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T Convex boundary angle based flattening :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6AED-3
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2003-4-003
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2003
%P 16 p.
%X Angle Based Flattening is a robust parameterization method that finds a
quasi-conformal mapping by solving a non-linear optimization problem. We
take advantage of a characterization of convex planar drawings of
triconnected graphs to introduce new boundary constraints. This prevents
boundary intersections and avoids post-processing of the parameterized
mesh. We present a simple transformation to effectively relax the
constrained minimization problem, which improves the convergence of the
optimization method. As a natural extension, we discuss the construction
of Delaunay flat meshes. This may further enhance the quality of the
resulting parameterization.
%B Research Report / Max-Planck-Institut für Informatik
2002
Cost-filtering Algorithms for the two Sides of the Sum of Weights of Distinct Values Constraint
N. Beldiceanu, M. Carlsson and S. Thiel
Technical Report, 2002
N. Beldiceanu, M. Carlsson and S. Thiel
Technical Report, 2002
Abstract
This article introduces the sum of weights of distinct values
constraint, which can be seen as a generalization of the number of
distinct values as well as of the alldifferent, and the relaxed
alldifferent constraints. This constraint holds if a cost variable is
equal to the sum of the weights associated with the distinct values taken
by a given set of variables. For the first aspect, which is related to
domination, we present four filtering algorithms. Two of them lead to
perfect pruning when each domain variable consists of one set of
consecutive values, while the two others take advantage of holes in the
domains. For the second aspect, which is connected to maximum matching
in a bipartite graph, we provide a complete filtering algorithm for the
general case. Finally we introduce several generic deduction rules,
which link both aspects of the constraint. These rules can be applied to
other optimization constraints such as the minimum weight alldifferent
constraint or the global cardinality constraint with costs. They also
allow taking into account external constraints for getting enhanced
bounds for the cost variable. In practice, the sum of weights of
distinct values constraint occurs in assignment problems where using a
resource once or several times costs the same. It also captures
domination problems where one has to select a set of vertices in order
to control every vertex of a graph.
Export
BibTeX
@techreport{BCT2002:SumOfWeights,
TITLE = {Cost-filtering Algorithms for the two Sides of the Sum of Weights of Distinct Values Constraint},
AUTHOR = {Beldiceanu, Nicolas and Carlsson, Mats and Thiel, Sven},
LANGUAGE = {eng},
ISSN = {1100-3154},
NUMBER = {SICS-T-2002:14-SE},
INSTITUTION = {Swedish Institute of Computer Science},
ADDRESS = {Kista},
YEAR = {2002},
DATE = {2002},
ABSTRACT = {This article introduces the sum of weights of distinct values constraint, which can be seen as a generalization of the number of distinct values as well as of the alldifferent, and the relaxed alldifferent constraints. This constraint holds if a cost variable is equal to the sum of the weights associated with the distinct values taken by a given set of variables. For the first aspect, which is related to domination, we present four filtering algorithms. Two of them lead to perfect pruning when each domain variable consists of one set of consecutive values, while the two others take advantage of holes in the domains. For the second aspect, which is connected to maximum matching in a bipartite graph, we provide a complete filtering algorithm for the general case. Finally we introduce several generic deduction rules, which link both aspects of the constraint. These rules can be applied to other optimization constraints such as the minimum weight alldifferent constraint or the global cardinality constraint with costs. They also allow taking into account external constraints for getting enhanced bounds for the cost variable. In practice, the sum of weights of distinct values constraint occurs in assignment problems where using a resource once or several times costs the same. It also captures domination problems where one has to select a set of vertices in order to control every vertex of a graph.},
TYPE = {SICS Technical Report},
}
Endnote
%0 Report
%A Beldiceanu, Nicolas
%A Carlsson, Mats
%A Thiel, Sven
%+ External Organizations
External Organizations
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Cost-filtering Algorithms for the two Sides of the Sum of Weights of Distinct Values Constraint :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0019-EBAD-A
%Y Swedish Institute of Computer Science
%C Uppsala, Sweden
%D 2002
%X This article introduces the sum of weights of distinct values
constraint, which can be seen as a generalization of the number of
distinct values as well as of the alldifferent, and the relaxed
alldifferent constraints. This constraint holds if a cost variable is
equal to the sum of the weights associated with the distinct values taken
by a given set of variables. For the first aspect, which is related to
domination, we present four filtering algorithms. Two of them lead to
perfect pruning when each domain variable consists of one set of
consecutive values, while the two others take advantage of holes in the
domains. For the second aspect, which is connected to maximum matching
in a bipartite graph, we provide a complete filtering algorithm for the
general case. Finally we introduce several generic deduction rules,
which link both aspects of the constraint. These rules can be applied to
other optimization constraints such as the minimum weight alldifferent
constraint or the global cardinality constraint with costs. They also
allow taking into account external constraints for getting enhanced
bounds for the cost variable. In practice, the sum of weights of
distinct values constraint occurs in assignment problems where using a
resource once or several times costs the same. It also captures
domination problems where one has to select a set of vertices in order
to control every vertex of a graph.
%B SICS Technical Report
%@ false
Perceptual evaluation of tone mapping operators with regard to similarity and preference
F. Drago, W. Martens, K. Myszkowski and H.-P. Seidel
Technical Report, 2002
F. Drago, W. Martens, K. Myszkowski and H.-P. Seidel
Technical Report, 2002
Abstract
Seven tone mapping methods currently available to display high
dynamic range images were submitted to perceptual evaluation in
order to find the attributes most predictive of the success of
a robust all-around tone mapping algorithm.
The two most salient Stimulus Space dimensions underlying the perception of
a set of images produced by six of the tone mappings were revealed
using INdividual Differences SCALing (INDSCAL) analysis;
and an ideal preference point
within the INDSCAL-derived Stimulus Space was determined for a group of
11 observers using PREFerence MAPping (PREFMAP) analysis.
Interpretation of the INDSCAL results was aided by
pairwise comparisons of images that led to an ordering of the images according
to which were more or less natural looking.
Export
BibTeX
@techreport{DragoMartensMyszkowskiSeidel2002,
TITLE = {Perceptual evaluation of tone mapping operators with regard to similarity and preference},
AUTHOR = {Drago, Frederic and Martens, William and Myszkowski, Karol and Seidel, Hans-Peter},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2002-4-002},
NUMBER = {MPI-I-2002-4-002},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2002},
DATE = {2002},
ABSTRACT = {Seven tone mapping methods currently available to display high dynamic range images were submitted to perceptual evaluation in order to find the attributes most predictive of the success of a robust all-around tone mapping algorithm. The two most salient Stimulus Space dimensions underlying the perception of a set of images produced by six of the tone mappings were revealed using INdividual Differences SCALing (INDSCAL) analysis; and an ideal preference point within the INDSCAL-derived Stimulus Space was determined for a group of 11 observers using PREFerence MAPping (PREFMAP) analysis. Interpretation of the INDSCAL results was aided by pairwise comparisons of images that led to an ordering of the images according to which were more or less natural looking.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Drago, Frederic
%A Martens, William
%A Myszkowski, Karol
%A Seidel, Hans-Peter
%+ Computer Graphics, MPI for Informatics, Max Planck Society
External Organizations
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T Perceptual evaluation of tone mapping operators with regard to similarity and preference :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6C83-0
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2002-4-002
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2002
%P 30 p.
%X Seven tone mapping methods currently available to display high
dynamic range images were submitted to perceptual evaluation in
order to find the attributes most predictive of the success of
a robust all-around tone mapping algorithm.
The two most salient Stimulus Space dimensions underlying the perception of
a set of images produced by six of the tone mappings were revealed
using INdividual Differences SCALing (INDSCAL) analysis;
and an ideal preference point
within the INDSCAL-derived Stimulus Space was determined for a group of
11 observers using PREFerence MAPping (PREFMAP) analysis.
Interpretation of the INDSCAL results was aided by
pairwise comparisons of images that led to an ordering of the images according
to which were more or less natural looking.
%B Research Report / Max-Planck-Institut für Informatik
Sweeping Arrangements of Cubic Segments Exactly and Efficiently
A. Eigenwillig, E. Schömer and N. Wolpert
Technical Report, 2002
A. Eigenwillig, E. Schömer and N. Wolpert
Technical Report, 2002
Abstract
A method is presented to compute the planar arrangement
induced by segments of algebraic curves of degree three
(or less), using an improved Bentley-Ottmann sweep-line
algorithm. Our method is exact (it provides the
mathematically correct result), complete (it handles
all possible geometric degeneracies), and efficient
(the implementation can handle hundreds of segments).
The range of possible input segments comprises conic
arcs and cubic splines as special cases of particular
practical importance.
Export
BibTeX
@techreport{esw-sacsee-02,
TITLE = {Sweeping Arrangements of Cubic Segments Exactly and Efficiently},
AUTHOR = {Eigenwillig, Arno and Sch{\"o}mer, Elmar and Wolpert, Nicola},
LANGUAGE = {eng},
NUMBER = {ECG-TR-182202-01},
INSTITUTION = {Effective Computational Geometry for Curves and Surfaces},
ADDRESS = {Sophia Antipolis},
YEAR = {2002},
DATE = {2002},
ABSTRACT = {A method is presented to compute the planar arrangement induced by segments of algebraic curves of degree three (or less), using an improved Bentley-Ottmann sweep-line algorithm. Our method is exact (it provides the mathematically correct result), complete (it handles all possible geometric degeneracies), and efficient (the implementation can handle hundreds of segments). The range of possible input segments comprises conic arcs and cubic splines as special cases of particular practical importance.},
}
Endnote
%0 Report
%A Eigenwillig, Arno
%A Schömer, Elmar
%A Wolpert, Nicola
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Sweeping Arrangements of Cubic Segments Exactly and Efficiently :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0019-EB42-7
%Y Effective Computational Geometry for Curves and Surfaces
%C Sophia Antipolis
%D 2002
%X A method is presented to compute the planar arrangement
induced by segments of algebraic curves of degree three
(or less), using an improved Bentley-Ottmann sweep-line
algorithm. Our method is exact (it provides the
mathematically correct result), complete (it handles
all possible geometric degeneracies), and efficient
(the implementation can handle hundreds of segments).
The range of possible input segments comprises conic
arcs and cubic splines as special cases of particular
practical importance.
Tutorial notes ACM SM 02: a framework for the acquisition, processing and interactive display of high quality 3D models
M. Gösele, J. Kautz, J. Lang, H. P. A. Lensch and H.-P. Seidel
Technical Report, 2002
M. Gösele, J. Kautz, J. Lang, H. P. A. Lensch and H.-P. Seidel
Technical Report, 2002
Abstract
This tutorial highlights some recent results on the acquisition and
interactive display of high quality 3D models. For further use in
photorealistic rendering or interactive display, a high quality
representation must capture two different things: the shape of the
model represented as a geometric description of its surface and on the
other hand the physical properties of the object. The physics of the
material which an object is made of determine its appearance, e.g. the
object's color, texture, deformation or reflection properties.
The tutorial shows how computer vision and computer graphics
techniques can be seamlessly integrated into a single framework for
the acquisition, processing, and interactive display of high quality
3D models.
Export
BibTeX
@techreport{GoeseleKautzLangLenschSeidel2002,
TITLE = {Tutorial notes {ACM} {SM} 02: a framework for the acquisition, processing and interactive display of high quality {3D} models},
AUTHOR = {G{\"o}sele, Michael and Kautz, Jan and Lang, Jochen and Lensch, Hendrik P. A. and Seidel, Hans-Peter},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2002-4-001},
NUMBER = {MPI-I-2002-4-001},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2002},
DATE = {2002},
ABSTRACT = {This tutorial highlights some recent results on the acquisition and interactive display of high quality 3D models. For further use in photorealistic rendering or interactive display, a high quality representation must capture two different things: the shape of the model represented as a geometric description of its surface and on the other hand the physical properties of the object. The physics of the material which an object is made of determine its appearance, e.g. the object's color, texture, deformation or reflection properties. The tutorial shows how computer vision and computer graphics techniques can be seamlessly integrated into a single framework for the acquisition, processing, and interactive display of high quality 3D models.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Gösele, Michael
%A Kautz, Jan
%A Lang, Jochen
%A Lensch, Hendrik P. A.
%A Seidel, Hans-Peter
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T Tutorial notes ACM SM 02: a framework for the acquisition, processing and interactive display of high quality 3D models :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6C86-A
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2002-4-001
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2002
%P 50 p.
%X This tutorial highlights some recent results on the acquisition and
interactive display of high quality 3D models. For further use in
photorealistic rendering or interactive display, a high quality
representation must capture two different things: the shape of the
model represented as a geometric description of its surface and on the
other hand the physical properties of the object. The physics of the
material which an object is made of determine its appearance, e.g. the
object's color, texture, deformation or reflection properties.
The tutorial shows how computer vision and computer graphics
techniques can be seamlessly integrated into a single framework for
the acquisition, processing, and interactive display of high quality
3D models.
%B Research Report / Max-Planck-Institut für Informatik
Exp Lab: a tool set for computational experiments
S. Hert, T. Polzin, L. Kettner and G. Schäfer
Technical Report, 2002
S. Hert, T. Polzin, L. Kettner and G. Schäfer
Technical Report, 2002
Abstract
We describe a set of tools that support the running, documentation,
and evaluation of computational experiments. The tool set is designed
not only to make computational experimentation easier but also to support
good scientific practice by making results reproducible and more easily
comparable to others' results by automatically documenting the experimental
environment. The tools can be used separately or in concert and support
all manner of experiments (\textit{i.e.}, any executable can be an experiment).
The tools capitalize on the rich functionality available in Python
to provide extreme flexibility and ease of use, but one need know nothing
of Python to use the tools.
Export
BibTeX
@techreport{MPI-I-2002-1-004,
TITLE = {Exp Lab: a tool set for computational experiments},
AUTHOR = {Hert, Susan and Polzin, Tobias and Kettner, Lutz and Sch{\"a}fer, Guido},
LANGUAGE = {eng},
NUMBER = {MPI-I-2002-1-004},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2002},
DATE = {2002},
ABSTRACT = {We describe a set of tools that support the running, documentation, and evaluation of computational experiments. The tool set is designed not only to make computational experimentation easier but also to support good scientific practice by making results reproducible and more easily comparable to others' results by automatically documenting the experimental environment. The tools can be used separately or in concert and support all manner of experiments (\textit{i.e.}, any executable can be an experiment). The tools capitalize on the rich functionality available in Python to provide extreme flexibility and ease of use, but one need know nothing of Python to use the tools.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Hert, Susan
%A Polzin, Tobias
%A Kettner, Lutz
%A Schäfer, Guido
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Exp Lab: a tool set for computational experiments :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6C95-8
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2002
%P 59 p.
%X We describe a set of tools that support the running, documentation,
and evaluation of computational experiments. The tool set is designed
not only to make computational experimentation easier but also to support
good scientific practice by making results reproducible and more easily
comparable to others' results by automatically documenting the experimental
environment. The tools can be used separately or in concert and support
all manner of experiments (\textit{i.e.}, any executable can be an experiment).
The tools capitalize on the rich functionality available in Python
to provide extreme flexibility and ease of use, but one need know nothing
of Python to use the tools.
%B Research Report
Performance of heuristic and approximation algorithms for the uncapacitated facility location problem
M. Hoefer
Technical Report, 2002
M. Hoefer
Technical Report, 2002
Abstract
The uncapacitated facility location problem (UFLP) is a problem that has been studied intensively in operational
research. Recently a variety of new deterministic and heuristic approximation algorithms have evolved. In this paper,
we compare five new approaches to this problem - the JMS- and the MYZ-approximation algorithms, a version of local
search, a Tabu Search algorithm as well as a version of the Volume algorithm with randomized rounding. We compare
solution quality and running times on different standard benchmark instances. With these instances and additional
material a web page was set up, where the material used in this study is accessible.
Export
BibTeX
@techreport{MPI-I-2002-1-005,
TITLE = {Performance of heuristic and approximation algorithms for the uncapacitated facility location problem},
AUTHOR = {Hoefer, Martin},
LANGUAGE = {eng},
NUMBER = {MPI-I-2002-1-005},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2002},
DATE = {2002},
ABSTRACT = {The uncapacitated facility location problem (UFLP) is a problem that has been studied intensively in operational research. Recently a variety of new deterministic and heuristic approximation algorithms have evolved. In this paper, we compare five new approaches to this problem -- the JMS- and the MYZ-approximation algorithms, a version of local search, a Tabu Search algorithm as well as a version of the Volume algorithm with randomized rounding. We compare solution quality and running times on different standard benchmark instances. With these instances and additional material a web page was set up, where the material used in this study is accessible.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Hoefer, Martin
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Performance of heuristic and approximation algorithms for the uncapacitated facility location problem :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6C92-E
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2002
%P 27 p.
%X The uncapacitated facility location problem (UFLP) is a problem that has been studied intensively in operational
research. Recently a variety of new deterministic and heuristic approximation algorithms have evolved. In this paper,
we compare five new approaches to this problem - the JMS- and the MYZ-approximation algorithms, a version of local
search, a Tabu Search algorithm as well as a version of the Volume algorithm with randomized rounding. We compare
solution quality and running times on different standard benchmark instances. With these instances and additional
material a web page was set up, where the material used in this study is accessible.
%B Research Report / Max-Planck-Institut für Informatik
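Of the five approaches compared in this report, local search is the simplest to sketch. The fragment below is an illustrative flip-based local search for UFLP, not the report's implementation; the instance data and function names are invented for the example.

```python
# Illustrative sketch only (not the report's code): a plain flip-based
# local search for the uncapacitated facility location problem (UFLP).
# f[i] is the opening cost of facility i; c[i][j] is the cost of serving
# client j from facility i.

def uflp_cost(open_set, f, c):
    if not open_set:
        return float("inf")  # at least one facility must stay open
    serving = sum(min(c[i][j] for i in open_set) for j in range(len(c[0])))
    return sum(f[i] for i in open_set) + serving

def local_search(f, c):
    open_set = set(range(len(f)))        # start with every facility open
    improved = True
    while improved:                      # repeat until no flip helps
        improved = False
        for i in range(len(f)):          # try opening/closing facility i
            candidate = open_set ^ {i}   # symmetric difference = flip
            if uflp_cost(candidate, f, c) < uflp_cost(open_set, f, c):
                open_set, improved = candidate, True
    return open_set, uflp_cost(open_set, f, c)

f = [3, 10]                  # opening costs (toy instance)
c = [[1, 1], [5, 5]]         # c[facility][client]
solution, cost = local_search(f, c)
```

On this toy instance the search closes the expensive facility and serves both clients from facility 0.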
A practical minimum spanning tree algorithm using the cycle property
I. Katriel, P. Sanders and J. L. Träff
Technical Report, 2002
I. Katriel, P. Sanders and J. L. Träff
Technical Report, 2002
Abstract
We present a simple new algorithm for computing minimum spanning trees
that is more than two times faster than the best previously known
algorithms (for dense, ``difficult'' inputs). It is of conceptual interest
that the algorithm uses the property that the heaviest edge in a cycle can
be discarded. Previously this has only been exploited in asymptotically
optimal algorithms that are considered to be impractical. An additional
advantage is that the algorithm can greatly profit from pipelined memory
access. Hence, an implementation on a vector machine is up to 13 times
faster than previous algorithms. We outline additional refinements for
MSTs of implicitly defined graphs and the use of the central data
structure for querying the heaviest edge between two nodes in the MST.
The latter result is also interesting for sparse graphs.
Export
BibTeX
@techreport{MPI-I-2002-1-003,
TITLE = {A practical minimum spanning tree algorithm using the cycle property},
AUTHOR = {Katriel, Irit and Sanders, Peter and Tr{\"a}ff, Jesper Larsson},
LANGUAGE = {eng},
NUMBER = {MPI-I-2002-1-003},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2002},
DATE = {2002},
ABSTRACT = {We present a simple new algorithm for computing minimum spanning trees that is more than two times faster than the best previously known algorithms (for dense, ``difficult'' inputs). It is of conceptual interest that the algorithm uses the property that the heaviest edge in a cycle can be discarded. Previously this has only been exploited in asymptotically optimal algorithms that are considered to be impractical. An additional advantage is that the algorithm can greatly profit from pipelined memory access. Hence, an implementation on a vector machine is up to 13 times faster than previous algorithms. We outline additional refinements for MSTs of implicitly defined graphs and the use of the central data structure for querying the heaviest edge between two nodes in the MST. The latter result is also interesting for sparse graphs.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Katriel, Irit
%A Sanders, Peter
%A Träff, Jesper Larsson
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T A practical minimum spanning tree algorithm using the cycle property :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6C98-2
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2002
%P 21 p.
%X We present a simple new algorithm for computing minimum spanning trees
that is more than two times faster than the best previously known
algorithms (for dense, ``difficult'' inputs). It is of conceptual interest
that the algorithm uses the property that the heaviest edge in a cycle can
be discarded. Previously this has only been exploited in asymptotically
optimal algorithms that are considered to be impractical. An additional
advantage is that the algorithm can greatly profit from pipelined memory
access. Hence, an implementation on a vector machine is up to 13 times
faster than previous algorithms. We outline additional refinements for
MSTs of implicitly defined graphs and the use of the central data
structure for querying the heaviest edge between two nodes in the MST.
The latter result is also interesting for sparse graphs.
%B Research Report / Max-Planck-Institut für Informatik
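The cycle property this report builds on states that the heaviest edge on any cycle cannot belong to the MST. A minimal way to see it in action (this is the classic reverse-delete scheme with invented example data, not the report's pipelined algorithm) is:

```python
# Illustrative sketch: reverse-delete exploits the cycle property directly.
# Scan edges from heaviest to lightest; if removing an edge keeps the graph
# connected, the edge lies on a cycle where it is heaviest, so discard it.
from collections import defaultdict

def reverse_delete_mst(n, edges):
    """edges: list of (weight, u, v) tuples; returns the MST edge set."""
    keep = set(edges)

    def connected_without(excluded):
        adj = defaultdict(list)
        for w, u, v in keep:
            if (w, u, v) != excluded:
                adj[u].append(v)
                adj[v].append(u)
        seen, stack = {0}, [0]           # DFS reachability from node 0
        while stack:
            for nb in adj[stack.pop()]:
                if nb not in seen:
                    seen.add(nb)
                    stack.append(nb)
        return len(seen) == n

    for e in sorted(keep, reverse=True):  # heaviest edge first
        if connected_without(e):          # e is heaviest on some cycle
            keep.remove(e)
    return keep

edges = [(1, 0, 1), (2, 1, 2), (3, 0, 2), (4, 2, 3)]
mst = reverse_delete_mst(4, edges)
```

Here the edge of weight 3 closes the cycle 0-1-2-0 as its heaviest edge and is the only one discarded.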
Using (sub)graphs of small width for solving the Steiner problem
T. Polzin and S. Vahdati
Technical Report, 2002
T. Polzin and S. Vahdati
Technical Report, 2002
Abstract
For the Steiner tree problem in networks, we present a practical algorithm
that uses the fixed-parameter tractability of
the problem with respect to a certain width parameter closely
related to pathwidth. The running time of the algorithm is linear in
the number of vertices when the pathwidth is constant. Combining this
algorithm with our previous techniques, we can already profit from
small width in subgraphs of an instance. Integrating this
algorithm into our program package for the Steiner problem
accelerates the solution process on some groups of instances and
leads to a fast solution of
some previously unsolved benchmark instances.
Export
BibTeX
@techreport{MPI-I-2002-1-001,
TITLE = {Using (sub)graphs of small width for solving the Steiner problem},
AUTHOR = {Polzin, Tobias and Vahdati, Siavash},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2002-1-001},
NUMBER = {MPI-I-2002-1-001},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2002},
DATE = {2002},
ABSTRACT = {For the Steiner tree problem in networks, we present a practical algorithm that uses the fixed-parameter tractability of the problem with respect to a certain width parameter closely related to pathwidth. The running time of the algorithm is linear in the number of vertices when the pathwidth is constant. Combining this algorithm with our previous techniques, we can already profit from small width in subgraphs of an instance. Integrating this algorithm into our program package for the Steiner problem accelerates the solution process on some groups of instances and leads to a fast solution of some previously unsolved benchmark instances.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Polzin, Tobias
%A Vahdati, Siavash
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
External Organizations
%T Using (sub)graphs of small width for solving the Steiner problem :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6C9E-5
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2002-1-001
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2002
%P 9 p.
%X For the Steiner tree problem in networks, we present a practical algorithm
that uses the fixed-parameter tractability of
the problem with respect to a certain width parameter closely
related to pathwidth. The running time of the algorithm is linear in
the number of vertices when the pathwidth is constant. Combining this
algorithm with our previous techniques, we can already profit from
small width in subgraphs of an instance. Integrating this
algorithm into our program package for the Steiner problem
accelerates the solution process on some groups of instances and
leads to a fast solution of
some previously unsolved benchmark instances.
%B Research Report / Max-Planck-Institut für Informatik
The factor algorithm for all-to-all communication on clusters of SMP nodes
P. Sanders and J. L. Träff
Technical Report, 2002
P. Sanders and J. L. Träff
Technical Report, 2002
Abstract
We present an algorithm for all-to-all personalized
communication, in which every processor has an individual message to
deliver to every other processor. The machine model we consider is a
cluster of processing nodes where each node, possibly consisting of
several processors, can participate in only one communication
operation with another node at a time. The nodes may have different
numbers of processors. This general model is important for the
implementation of all-to-all communication in libraries such as MPI
where collective communication may take place over arbitrary subsets
of processors. The algorithm is simple and optimal up to an additive
term that is small if the total number of processors is large compared
to the maximal number of processors in a node.
Export
BibTeX
@techreport{MPI-I-2002-1-008,
TITLE = {The factor algorithm for all-to-all communication on clusters of {SMP} nodes},
AUTHOR = {Sanders, Peter and Tr{\"a}ff, Jesper Larsson},
LANGUAGE = {eng},
NUMBER = {MPI-I-2002-1-008},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2002},
DATE = {2002},
ABSTRACT = {We present an algorithm for all-to-all personalized communication, in which every processor has an individual message to deliver to every other processor. The machine model we consider is a cluster of processing nodes where each node, possibly consisting of several processors, can participate in only one communication operation with another node at a time. The nodes may have different numbers of processors. This general model is important for the implementation of all-to-all communication in libraries such as MPI where collective communication may take place over arbitrary subsets of processors. The algorithm is simple and optimal up to an additive term that is small if the total number of processors is large compared to the maximal number of processors in a node.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Sanders, Peter
%A Träff, Jesper Larsson
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T The factor algorithm for all-to-all communication on clusters of SMP nodes :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6C8F-7
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2002
%P 8 p.
%X We present an algorithm for all-to-all personalized
communication, in which every processor has an individual message to
deliver to every other processor. The machine model we consider is a
cluster of processing nodes where each node, possibly consisting of
several processors, can participate in only one communication
operation with another node at a time. The nodes may have different
numbers of processors. This general model is important for the
implementation of all-to-all communication in libraries such as MPI
where collective communication may take place over arbitrary subsets
of processors. The algorithm is simple and optimal up to an additive
term that is small if the total number of processors is large compared
to the maximal number of processors in a node.
%B Research Report / Max-Planck-Institut für Informatik
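The "factor" in the title refers to factorizations of the complete graph: a 1-factorization pairs up the nodes in rounds so that every pair communicates exactly once, which is the skeleton of such all-to-all schedules. Below is a hedged sketch of the standard round-robin construction for an even number of nodes; the report's algorithm additionally handles nodes with different processor counts.

```python
# Illustrative sketch: 1-factorization of the complete graph on n nodes
# (n even) into n-1 perfect matchings, the classic round-robin schedule.
# In round r, node n-1 pairs with r; the rest pair so that i + j = 2r mod n-1.

def one_factorization(n):
    """Return n-1 rounds of node pairs; every pair meets exactly once."""
    rounds = []
    for r in range(n - 1):
        pairs = [(r, n - 1)]             # fixed node n-1 meets node r
        for k in range(1, n // 2):
            i = (r + k) % (n - 1)
            j = (r - k) % (n - 1)
            pairs.append((i, j))
        rounds.append(pairs)
    return rounds

rounds = one_factorization(6)            # 5 rounds of 3 pairs each
```

Each round is a perfect matching, so all 15 pairs of 6 nodes are covered in 5 communication steps.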
2001
Linear one-sided stability of MAT for weakly injective domain
S. W. Choi and H.-P. Seidel
Technical Report, 2001
S. W. Choi and H.-P. Seidel
Technical Report, 2001
Abstract
Medial axis transform (MAT) is very sensitive to the noise,
in the sense that, even if a shape is perturbed only slightly,
the Hausdorff distance between the MATs of the original shape and
the perturbed one may be large. But it turns out that MAT is stable,
if we view this phenomenon with the one-sided Hausdorff distance,
rather than with the two-sided Hausdorff distance. In this paper,
we show that, if the original domain is weakly injective,
which means that the MAT of the domain has no end point which
is the center of an inscribed circle osculating the boundary at
only one point, the one-sided Hausdorff distance of the original
domain's MAT with respect to that of the perturbed one is bounded
linearly with the Hausdorff distance of the perturbation.
We also show by example that the linearity of this bound cannot be
achieved for the domains which are not weakly injective. In particular,
these results apply to the domains with the sharp corners, which
were excluded in the past. One consequence of these results is that
we can clarify theoretically the notion of extracting ``the essential
part of the MAT'', which is the heart of the existing pruning methods.
Export
BibTeX
@techreport{ChoiSeidel2001,
TITLE = {Linear one-sided stability of {MAT} for weakly injective domain},
AUTHOR = {Choi, Sung Woo and Seidel, Hans-Peter},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2001-4-004},
NUMBER = {MPI-I-2001-4-004},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2001},
DATE = {2001},
ABSTRACT = {Medial axis transform (MAT) is very sensitive to the noise, in the sense that, even if a shape is perturbed only slightly, the Hausdorff distance between the MATs of the original shape and the perturbed one may be large. But it turns out that MAT is stable, if we view this phenomenon with the one-sided Hausdorff distance, rather than with the two-sided Hausdorff distance. In this paper, we show that, if the original domain is weakly injective, which means that the MAT of the domain has no end point which is the center of an inscribed circle osculating the boundary at only one point, the one-sided Hausdorff distance of the original domain's MAT with respect to that of the perturbed one is bounded linearly with the Hausdorff distance of the perturbation. We also show by example that the linearity of this bound cannot be achieved for the domains which are not weakly injective. In particular, these results apply to the domains with the sharp corners, which were excluded in the past. One consequence of these results is that we can clarify theoretically the notion of extracting ``the essential part of the MAT'', which is the heart of the existing pruning methods.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Choi, Sung Woo
%A Seidel, Hans-Peter
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T Linear one-sided stability of MAT for weakly injective domain :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6CA4-6
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2001-4-004
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2001
%P 18 p.
%X Medial axis transform (MAT) is very sensitive to the noise,
in the sense that, even if a shape is perturbed only slightly,
the Hausdorff distance between the MATs of the original shape and
the perturbed one may be large. But it turns out that MAT is stable,
if we view this phenomenon with the one-sided Hausdorff distance,
rather than with the two-sided Hausdorff distance. In this paper,
we show that, if the original domain is weakly injective,
which means that the MAT of the domain has no end point which
is the center of an inscribed circle osculating the boundary at
only one point, the one-sided Hausdorff distance of the original
domain's MAT with respect to that of the perturbed one is bounded
linearly with the Hausdorff distance of the perturbation.
We also show by example that the linearity of this bound cannot be
achieved for the domains which are not weakly injective. In particular,
these results apply to the domains with the sharp corners, which
were excluded in the past. One consequence of these results is that
we can clarify theoretically the notion of extracting ``the essential
part of the MAT'', which is the heart of the existing pruning methods.
%B Research Report / Max-Planck-Institut für Informatik
A Randomized On-line Algorithm for the k-Server Problem on a Line
B. Csaba and S. Lodha
Technical Report, 2001
B. Csaba and S. Lodha
Technical Report, 2001
Abstract
We give an O(n^{2/3} \log n)-competitive randomized k-server
algorithm when the underlying metric space is given by n equally spaced
points on a line. For n = k + o(k^{3/2}/\log k), this algorithm is
o(k)-competitive.
Export
BibTeX
@techreport{Csaba2001,
TITLE = {A Randomized On-line Algorithm for the k-Server Problem on a Line},
AUTHOR = {Csaba, Bela and Lodha, Sachin},
LANGUAGE = {eng},
NUMBER = {DIMACS TechReport 2001-34},
INSTITUTION = {DIMACS-Center for Discrete Mathematics \& Theoretical Computer Science},
ADDRESS = {Piscataway, NJ},
YEAR = {2001},
DATE = {2001},
ABSTRACT = {We give an $O(n^{2/3}\log n)$-competitive randomized $k$-server algorithm when the underlying metric space is given by $n$ equally spaced points on a line. For $n = k + o(k^{3/2}/\log k)$, this algorithm is $o(k)$-competitive.},
}
Endnote
%0 Report
%A Csaba, Bela
%A Lodha, Sachin
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
External Organizations
%T A Randomized On-line Algorithm for the k-Server Problem on a Line :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0019-EBC5-0
%Y DIMACS-Center for Discrete Mathematics & Theoretical Computer Science
%C Piscataway, NJ
%D 2001
%X We give an O(n^{2/3} \log n)-competitive randomized k-server
algorithm when the underlying metric space is given by n equally spaced
points on a line. For n = k + o(k^{3/2}/\log k), this algorithm is
o(k)-competitive.
Efficient light transport using precomputed visibility
K. Daubert, W. Heidrich, J. Kautz, J.-M. Dischler and H.-P. Seidel
Technical Report, 2001
K. Daubert, W. Heidrich, J. Kautz, J.-M. Dischler and H.-P. Seidel
Technical Report, 2001
Abstract
Visibility computations are the most time-consuming part of global
illumination algorithms. The cost is amplified by the fact that
quite often identical or similar information is recomputed multiple
times. In particular this is the case when multiple images of the
same scene are to be generated under varying lighting conditions
and/or viewpoints. But even for a single image with static
illumination, the computations could be accelerated by reusing
visibility information for many different light paths.
In this report we describe a general method of precomputing, storing,
and reusing visibility information for light transport in a number
of different types of scenes. In particular, we consider general
parametric surfaces, triangle meshes without a global
parameterization, and participating media.
We also reorder the light transport in such a way that the
visibility information is accessed in structured memory access
patterns. This yields a method that is well suited for SIMD-style
parallelization of the light transport, and can efficiently be
implemented both in software and using graphics hardware. We finally
demonstrate applications of the method to highly efficient
precomputation of BRDFs, bidirectional texture functions, light
fields, as well as near-interactive volume lighting.
Export
BibTeX
@techreport{DaubertHeidrichKautzDischlerSeidel2001,
TITLE = {Efficient light transport using precomputed visibility},
AUTHOR = {Daubert, Katja and Heidrich, Wolfgang and Kautz, Jan and Dischler, Jean-Michel and Seidel, Hans-Peter},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2001-4-003},
NUMBER = {MPI-I-2001-4-003},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2001},
DATE = {2001},
ABSTRACT = {Visibility computations are the most time-consuming part of global illumination algorithms. The cost is amplified by the fact that quite often identical or similar information is recomputed multiple times. In particular this is the case when multiple images of the same scene are to be generated under varying lighting conditions and/or viewpoints. But even for a single image with static illumination, the computations could be accelerated by reusing visibility information for many different light paths. In this report we describe a general method of precomputing, storing, and reusing visibility information for light transport in a number of different types of scenes. In particular, we consider general parametric surfaces, triangle meshes without a global parameterization, and participating media. We also reorder the light transport in such a way that the visibility information is accessed in structured memory access patterns. This yields a method that is well suited for SIMD-style parallelization of the light transport, and can efficiently be implemented both in software and using graphics hardware. We finally demonstrate applications of the method to highly efficient precomputation of BRDFs, bidirectional texture functions, light fields, as well as near-interactive volume lighting.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Daubert, Katja
%A Heidrich, Wolfgang
%A Kautz, Jan
%A Dischler, Jean-Michel
%A Seidel, Hans-Peter
%+ Computer Graphics, MPI for Informatics, Max Planck Society
External Organizations
Computer Graphics, MPI for Informatics, Max Planck Society
External Organizations
Computer Graphics, MPI for Informatics, Max Planck Society
%T Efficient light transport using precomputed visibility :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6CA7-F
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2001-4-003
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2001
%P 32 p.
%X Visibility computations are the most time-consuming part of global
illumination algorithms. The cost is amplified by the fact that
quite often identical or similar information is recomputed multiple
times. In particular this is the case when multiple images of the
same scene are to be generated under varying lighting conditions
and/or viewpoints. But even for a single image with static
illumination, the computations could be accelerated by reusing
visibility information for many different light paths.
In this report we describe a general method of precomputing, storing,
and reusing visibility information for light transport in a number
of different types of scenes. In particular, we consider general
parametric surfaces, triangle meshes without a global
parameterization, and participating media.
We also reorder the light transport in such a way that the
visibility information is accessed in structured memory access
patterns. This yields a method that is well suited for SIMD-style
parallelization of the light transport, and can efficiently be
implemented both in software and using graphics hardware. We finally
demonstrate applications of the method to highly efficient
precomputation of BRDFs, bidirectional texture functions, light
fields, as well as near-interactive volume lighting.
%B Research Report / Max-Planck-Institut für Informatik
An adaptable and extensible geometry kernel
S. Hert, M. Hoffmann, L. Kettner, S. Pion and M. Seel
Technical Report, 2001
S. Hert, M. Hoffmann, L. Kettner, S. Pion and M. Seel
Technical Report, 2001
Abstract
Geometric algorithms are based on geometric objects such as points,
lines and circles. The term \textit{kernel\/} refers to a collection
of representations for constant-size geometric objects and
operations on these representations. This paper describes how such a
geometry kernel can be designed and implemented in C++, having
special emphasis on adaptability, extensibility and efficiency. We
achieve these goals following the generic programming paradigm and
using templates as our tools. These ideas are realized and tested in
\cgal~\cite{svy-cgal}, the Computational Geometry Algorithms
Library.
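The kernel design described in the abstract, constant-size geometric objects and their operations bundled in a class template parameterized over a number type, can be illustrated with a minimal C++ sketch. The names here (`SimpleCartesian`, `Point_2`, `squared_distance`) echo CGAL's style but are illustrative assumptions, not CGAL's actual interface.

```cpp
#include <cassert>

// Minimal sketch of a kernel in the generic-programming style the abstract
// describes: a class template parameterized on a number type FT, bundling
// object representations and operations on them. Names are illustrative only.
template <typename FT>
struct SimpleCartesian {
    // A constant-size geometric object: a point with Cartesian coordinates.
    struct Point_2 {
        FT x, y;
    };
    // A kernel operation on the representation: squared distance avoids sqrt,
    // so it stays exact when FT is an exact number type (e.g. a rational type).
    static FT squared_distance(const Point_2& p, const Point_2& q) {
        const FT dx = p.x - q.x, dy = p.y - q.y;
        return dx * dx + dy * dy;
    }
};
```

Swapping `double` for an exact rational type changes the arithmetic of every kernel operation without touching algorithm code, which is the kind of adaptability the report aims for.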
Export
BibTeX
@techreport{MPI-I-2001-1-004,
TITLE = {An adaptable and extensible geometry kernel},
AUTHOR = {Hert, Susan and Hoffmann, Michael and Kettner, Lutz and Pion, Sylvain and Seel, Michael},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2001-1-004},
NUMBER = {MPI-I-2001-1-004},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2001},
DATE = {2001},
ABSTRACT = {Geometric algorithms are based on geometric objects such as points, lines and circles. The term \textit{kernel\/} refers to a collection of representations for constant-size geometric objects and operations on these representations. This paper describes how such a geometry kernel can be designed and implemented in C++, having special emphasis on adaptability, extensibility and efficiency. We achieve these goals following the generic programming paradigm and using templates as our tools. These ideas are realized and tested in \cgal~\cite{svy-cgal}, the Computational Geometry Algorithms Library.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Hert, Susan
%A Hoffmann, Michael
%A Kettner, Lutz
%A Pion, Sylvain
%A Seel, Michael
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
External Organizations
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T An adaptable and extensible geometry kernel :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6D22-4
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2001-1-004
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2001
%P 27 p.
%X Geometric algorithms are based on geometric objects such as points,
lines and circles. The term \textit{kernel\/} refers to a collection
of representations for constant-size geometric objects and
operations on these representations. This paper describes how such a
geometry kernel can be designed and implemented in C++, having
special emphasis on adaptability, extensibility and efficiency. We
achieve these goals following the generic programming paradigm and
using templates as our tools. These ideas are realized and tested in
\cgal~\cite{svy-cgal}, the Computational Geometry Algorithms
Library.
%B Research Report / Max-Planck-Institut für Informatik
Approximating minimum size 1,2-connected networks
P. Krysta
Technical Report, 2001
P. Krysta
Technical Report, 2001
Abstract
The problem of finding the minimum size 2-connected subgraph is
a classical problem in network design. It is known to be NP-hard even on
cubic planar graphs and Max-SNP hard.
We study the generalization of this problem, where requirements of 1 or 2
edge or vertex disjoint paths are specified between every pair of vertices,
and the aim is to find a minimum subgraph satisfying these requirements.
For both problems we give $3/2$-approximation algorithms. This improves on
the straightforward $2$-approximation algorithms for these problems, and
generalizes earlier results for 2-connectivity.
We also give analyses of the classical local optimization heuristics for
these two network design problems.
Export
BibTeX
@techreport{MPI-I-2001-1-001,
TITLE = {Approximating minimum size 1,2-connected networks},
AUTHOR = {Krysta, Piotr},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2001-1-001},
NUMBER = {MPI-I-2001-1-001},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2001},
DATE = {2001},
ABSTRACT = {The problem of finding the minimum size 2-connected subgraph is a classical problem in network design. It is known to be NP-hard even on cubic planar graphs and Max-SNP hard. We study the generalization of this problem, where requirements of 1 or 2 edge or vertex disjoint paths are specified between every pair of vertices, and the aim is to find a minimum subgraph satisfying these requirements. For both problems we give $3/2$-approximation algorithms. This improves on the straightforward $2$-approximation algorithms for these problems, and generalizes earlier results for 2-connectivity. We also give analyses of the classical local optimization heuristics for these two network design problems.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Krysta, Piotr
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Approximating minimum size 1,2-connected networks :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6D47-0
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2001-1-001
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2001
%P 22 p.
%X The problem of finding the minimum size 2-connected subgraph is
a classical problem in network design. It is known to be NP-hard even on
cubic planar graphs and Max-SNP hard.
We study the generalization of this problem, where requirements of 1 or 2
edge or vertex disjoint paths are specified between every pair of vertices,
and the aim is to find a minimum subgraph satisfying these requirements.
For both problems we give $3/2$-approximation algorithms. This improves on
the straightforward $2$-approximation algorithms for these problems, and
generalizes earlier results for 2-connectivity.
We also give analyses of the classical local optimization heuristics for
these two network design problems.
%B Research Report / Max-Planck-Institut für Informatik
Image-based reconstruction of spatially varying materials
H. P. A. Lensch, J. Kautz, M. Gösele, W. Heidrich and H.-P. Seidel
Technical Report, 2001
H. P. A. Lensch, J. Kautz, M. Gösele, W. Heidrich and H.-P. Seidel
Technical Report, 2001
Abstract
The measurement of accurate material properties is an important step
towards photorealistic rendering. Many real-world objects are composed
of a number of materials that often show subtle changes even within a
single material. Thus, for photorealistic rendering both the general
surface properties as well as the spatially varying effects of the
object are needed.
We present an image-based measuring method that robustly detects the
different materials of real objects and fits an average bidirectional
reflectance distribution function (BRDF) to each of them. In order to
model the local changes as well, we project the measured data for each
surface point into a basis formed by the recovered BRDFs leading to a
truly spatially varying BRDF representation.
A high quality model of a real object can be generated with relatively
few input data. The generated model allows for rendering under
arbitrary viewing and lighting conditions and realistically reproduces
the appearance of the original object.
Export
BibTeX
@techreport{LenschKautzGoeseleHeidrichSeidel2001,
TITLE = {Image-based reconstruction of spatially varying materials},
AUTHOR = {Lensch, Hendrik P. A. and Kautz, Jan and G{\"o}sele, Michael and Heidrich, Wolfgang and Seidel, Hans-Peter},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2001-4-001},
NUMBER = {MPI-I-2001-4-001},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2001},
DATE = {2001},
ABSTRACT = {The measurement of accurate material properties is an important step towards photorealistic rendering. Many real-world objects are composed of a number of materials that often show subtle changes even within a single material. Thus, for photorealistic rendering both the general surface properties as well as the spatially varying effects of the object are needed. We present an image-based measuring method that robustly detects the different materials of real objects and fits an average bidirectional reflectance distribution function (BRDF) to each of them. In order to model the local changes as well, we project the measured data for each surface point into a basis formed by the recovered BRDFs leading to a truly spatially varying BRDF representation. A high quality model of a real object can be generated with relatively few input data. The generated model allows for rendering under arbitrary viewing and lighting conditions and realistically reproduces the appearance of the original object.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Lensch, Hendrik P. A.
%A Kautz, Jan
%A Gösele, Michael
%A Heidrich, Wolfgang
%A Seidel, Hans-Peter
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T Image-based reconstruction of spatially varying materials :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6CAD-3
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2001-4-001
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2001
%P 20 p.
%X The measurement of accurate material properties is an important step
towards photorealistic rendering. Many real-world objects are composed
of a number of materials that often show subtle changes even within a
single material. Thus, for photorealistic rendering both the general
surface properties as well as the spatially varying effects of the
object are needed.
We present an image-based measuring method that robustly detects the
different materials of real objects and fits an average bidirectional
reflectance distribution function (BRDF) to each of them. In order to
model the local changes as well, we project the measured data for each
surface point into a basis formed by the recovered BRDFs leading to a
truly spatially varying BRDF representation.
A high quality model of a real object can be generated with relatively
few input data. The generated model allows for rendering under
arbitrary viewing and lighting conditions and realistically reproduces
the appearance of the original object.
%B Research Report / Max-Planck-Institut für Informatik
A framework for the acquisition, processing, transmission, and interactive display of high quality 3D models on the Web
H. P. A. Lensch, J. Kautz, M. Gösele and H.-P. Seidel
Technical Report, 2001
H. P. A. Lensch, J. Kautz, M. Gösele and H.-P. Seidel
Technical Report, 2001
Abstract
Digital documents often require highly detailed representations of
real world objects. This is especially true for advanced e-commerce
applications and other multimedia data bases like online
encyclopaedias or virtual museums. Their further success will
strongly depend on advances in the field of high quality object
representation, distribution and rendering.
This tutorial highlights some recent results on the acquisition and
interactive display of high quality 3D models and shows how these
results can be seamlessly integrated with previous work into a
single framework for the acquisition, processing, transmission, and
interactive display of high quality 3D models on the Web.
Export
BibTeX
@techreport{LenschKautzGoeseleSeidel2001,
TITLE = {A framework for the acquisition, processing, transmission, and interactive display of high quality {3D} models on the Web},
AUTHOR = {Lensch, Hendrik P. A. and Kautz, Jan and G{\"o}sele, Michael and Seidel, Hans-Peter},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2001-4-002},
NUMBER = {MPI-I-2001-4-002},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2001},
DATE = {2001},
ABSTRACT = {Digital documents often require highly detailed representations of real world objects. This is especially true for advanced e-commerce applications and other multimedia data bases like online encyclopaedias or virtual museums. Their further success will strongly depend on advances in the field of high quality object representation, distribution and rendering. This tutorial highlights some recent results on the acquisition and interactive display of high quality 3D models and shows how these results can be seamlessly integrated with previous work into a single framework for the acquisition, processing, transmission, and interactive display of high quality 3D models on the Web.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Lensch, Hendrik P. A.
%A Kautz, Jan
%A Gösele, Michael
%A Seidel, Hans-Peter
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T A framework for the acquisition, processing, transmission, and interactive display of high quality 3D models on the Web :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6CAA-9
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2001-4-002
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2001
%P 20 p.
%X Digital documents often require highly detailed representations of
real world objects. This is especially true for advanced e-commerce
applications and other multimedia data bases like online
encyclopaedias or virtual museums. Their further success will
strongly depend on advances in the field of high quality object
representation, distribution and rendering.
This tutorial highlights some recent results on the acquisition and
interactive display of high quality 3D models and shows how these
results can be seamlessly integrated with previous work into a
single framework for the acquisition, processing, transmission, and
interactive display of high quality 3D models on the Web.
%B Research Report / Max-Planck-Institut für Informatik
A framework for the acquisition, processing and interactive display of high quality 3D models
H. P. A. Lensch, M. Gösele and H.-P. Seidel
Technical Report, 2001
H. P. A. Lensch, M. Gösele and H.-P. Seidel
Technical Report, 2001
Abstract
This tutorial highlights some recent results on the acquisition and
interactive display of high quality 3D models. For further use in
photorealistic rendering or object recognition, a high quality
representation must capture two different things: the shape of the
model represented as a geometric description of its surface and on the
other hand the appearance of the material or materials it is made of,
e.g. the object's color, texture, or reflection properties.
The tutorial shows how computer vision and computer graphics
techniques can be seamlessly integrated into a single framework for
the acquisition, processing, and interactive display of high quality
3D models.
Export
BibTeX
@techreport{LenschGoeseleSeidel2001,
TITLE = {A framework for the acquisition, processing and interactive display of high quality {3D} models},
AUTHOR = {Lensch, Hendrik P. A. and G{\"o}sele, Michael and Seidel, Hans-Peter},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2001-4-005},
NUMBER = {MPI-I-2001-4-005},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2001},
DATE = {2001},
ABSTRACT = {This tutorial highlights some recent results on the acquisition and interactive display of high quality 3D models. For further use in photorealistic rendering or object recognition, a high quality representation must capture two different things: the shape of the model represented as a geometric description of its surface and on the other hand the appearance of the material or materials it is made of, e.g. the object's color, texture, or reflection properties. The tutorial shows how computer vision and computer graphics techniques can be seamlessly integrated into a single framework for the acquisition, processing, and interactive display of high quality 3D models.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Lensch, Hendrik P. A.
%A Gösele, Michael
%A Seidel, Hans-Peter
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T A framework for the acquisition, processing and interactive display of high quality 3D models :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6CA1-C
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2001-4-005
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2001
%P 39 p.
%X This tutorial highlights some recent results on the acquisition and
interactive display of high quality 3D models. For further use in
photorealistic rendering or object recognition, a high quality
representation must capture two different things: the shape of the
model represented as a geometric description of its surface and on the
other hand the appearance of the material or materials it is made of,
e.g. the object's color, texture, or reflection properties.
The tutorial shows how computer vision and computer graphics
techniques can be seamlessly integrated into a single framework for
the acquisition, processing, and interactive display of high quality
3D models.
%B Research Report / Max-Planck-Institut für Informatik
Directed single-source shortest-paths in linear average-case time
U. Meyer
Technical Report, 2001
U. Meyer
Technical Report, 2001
Abstract
The quest for a linear-time single-source shortest-path (SSSP) algorithm
on
directed graphs with positive edge weights is an ongoing hot research
topic. While Thorup recently found an ${\cal O}(n+m)$ time RAM algorithm
for undirected graphs with $n$ nodes, $m$ edges and integer edge weights
in
$\{0,\ldots, 2^w-1\}$ where $w$ denotes the word length, the currently
best time bound for directed sparse graphs on a RAM is
${\cal O}(n+m \cdot \log\log n)$.
In the present paper we study the average-case complexity of SSSP.
We give simple label-setting and label-correcting algorithms for
arbitrary directed graphs with random real edge weights
uniformly distributed in $\left[0,1\right]$ and show that they
need linear time ${\cal O}(n+m)$ with high probability.
A variant of the label-correcting approach also supports
parallelization.
Furthermore, we propose a general method to construct graphs with
random edge weights which incur large non-linear expected running times on
many traditional shortest-path algorithms.
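For context, the classical label-setting scheme that the report's average-case results improve on can be sketched as a standard binary-heap Dijkstra. This is a baseline sketch only, not the report's linear-average-time algorithm, which relies on bucket structures and the random-weight model and is not reproduced here.

```cpp
#include <functional>
#include <limits>
#include <queue>
#include <utility>
#include <vector>

// Baseline label-setting SSSP (Dijkstra with a binary heap), the classical
// O((n+m) log n) scheme. The graph is an adjacency list of (target, weight)
// pairs with non-negative real weights, matching the report's model.
std::vector<double> sssp(
    const std::vector<std::vector<std::pair<int, double>>>& g, int s) {
    const double INF = std::numeric_limits<double>::infinity();
    std::vector<double> dist(g.size(), INF);
    using QE = std::pair<double, int>;  // (tentative distance, node)
    std::priority_queue<QE, std::vector<QE>, std::greater<QE>> pq;
    dist[s] = 0.0;
    pq.push({0.0, s});
    while (!pq.empty()) {
        auto [d, u] = pq.top();
        pq.pop();
        if (d > dist[u]) continue;        // stale queue entry: u already settled
        for (auto [v, w] : g[u]) {
            if (dist[u] + w < dist[v]) {  // relax edge (u, v)
                dist[v] = dist[u] + w;
                pq.push({dist[v], v});
            }
        }
    }
    return dist;
}
```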
Export
BibTeX
@techreport{MPI-I-2001-1-002,
TITLE = {Directed single-source shortest-paths in linear average-case time},
AUTHOR = {Meyer, Ulrich},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2001-1-002},
NUMBER = {MPI-I-2001-1-002},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2001},
DATE = {2001},
ABSTRACT = {The quest for a linear-time single-source shortest-path (SSSP) algorithm on directed graphs with positive edge weights is an ongoing hot research topic. While Thorup recently found an ${\cal O}(n+m)$ time RAM algorithm for undirected graphs with $n$ nodes, $m$ edges and integer edge weights in $\{0,\ldots, 2^w-1\}$ where $w$ denotes the word length, the currently best time bound for directed sparse graphs on a RAM is ${\cal O}(n+m \cdot \log\log n)$. In the present paper we study the average-case complexity of SSSP. We give simple label-setting and label-correcting algorithms for arbitrary directed graphs with random real edge weights uniformly distributed in $\left[0,1\right]$ and show that they need linear time ${\cal O}(n+m)$ with high probability. A variant of the label-correcting approach also supports parallelization. Furthermore, we propose a general method to construct graphs with random edge weights which incur large non-linear expected running times on many traditional shortest-path algorithms.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Meyer, Ulrich
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Directed single-source shortest-paths in linear average-case time :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6D44-5
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2001-1-002
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2001
%P 32 p.
%X The quest for a linear-time single-source shortest-path (SSSP) algorithm
on
directed graphs with positive edge weights is an ongoing hot research
topic. While Thorup recently found an ${\cal O}(n+m)$ time RAM algorithm
for undirected graphs with $n$ nodes, $m$ edges and integer edge weights
in
$\{0,\ldots, 2^w-1\}$ where $w$ denotes the word length, the currently
best time bound for directed sparse graphs on a RAM is
${\cal O}(n+m \cdot \log\log n)$.
In the present paper we study the average-case complexity of SSSP.
We give simple label-setting and label-correcting algorithms for
arbitrary directed graphs with random real edge weights
uniformly distributed in $\left[0,1\right]$ and show that they
need linear time ${\cal O}(n+m)$ with high probability.
A variant of the label-correcting approach also supports
parallelization.
Furthermore, we propose a general method to construct graphs with
random edge weights which incur large non-linear expected running times on
many traditional shortest-path algorithms.
%B Research Report / Max-Planck-Institut für Informatik
Extending reduction techniques for the Steiner tree problem: a combination of alternative- and bound-based approaches
T. Polzin and S. Vahdati
Technical Report, 2001a
T. Polzin and S. Vahdati
Technical Report, 2001a
Abstract
A key ingredient of the most successful algorithms for the Steiner problem are reduction methods, i.e. methods to reduce the size of a given instance while preserving at least one optimal solution (or the
ability to efficiently reconstruct one). While classical reduction tests just inspected simple patterns (vertices or edges), recent and more sophisticated tests extend the scope of inspection to more general patterns (like
trees). In this paper, we present such an extended reduction test, which generalizes different tests in the literature. We use the new approach of combining alternative- and bound-based methods, which
substantially improves the impact of the tests. We also present several algorithmic improvements, especially for the computation of the needed information. The experimental results show a substantial improvement over previous methods using the idea of extension.
Export
BibTeX
@techreport{MPI-I-2001-1-007,
TITLE = {Extending reduction techniques for the Steiner tree problem: a combination of alternative- and bound-based approaches},
AUTHOR = {Polzin, Tobias and Vahdati, Siavash},
LANGUAGE = {eng},
NUMBER = {MPI-I-2001-1-007},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2001},
DATE = {2001},
ABSTRACT = {A key ingredient of the most successful algorithms for the Steiner problem are reduction methods, i.e. methods to reduce the size of a given instance while preserving at least one optimal solution (or the ability to efficiently reconstruct one). While classical reduction tests just inspected simple patterns (vertices or edges), recent and more sophisticated tests extend the scope of inspection to more general patterns (like trees). In this paper, we present such an extended reduction test, which generalizes different tests in the literature. We use the new approach of combining alternative- and bound-based methods, which substantially improves the impact of the tests. We also present several algorithmic improvements, especially for the computation of the needed information. The experimental results show a substantial improvement over previous methods using the idea of extension.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Polzin, Tobias
%A Vahdati, Siavash
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
External Organizations
%T Extending reduction techniques for the Steiner tree problem: a combination of alternative- and bound-based approaches :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6D16-F
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2001
%P 24 p.
%X A key ingredient of the most successful algorithms for the Steiner problem are reduction methods, i.e. methods to reduce the size of a given instance while preserving at least one optimal solution (or the
ability to efficiently reconstruct one). While classical reduction tests just inspected simple patterns (vertices or edges), recent and more sophisticated tests extend the scope of inspection to more general patterns (like
trees). In this paper, we present such an extended reduction test, which generalizes different tests in the literature. We use the new approach of combining alternative- and bound-based methods, which
substantially improves the impact of the tests. We also present several algorithmic improvements, especially for the computation of the needed information. The experimental results show a substantial improvement over previous methods using the idea of extension.
%B Research Report / Max-Planck-Institut für Informatik
Partitioning techniques for the Steiner problem
T. Polzin and S. Vahdati
Technical Report, 2001b
T. Polzin and S. Vahdati
Technical Report, 2001b
Abstract
Partitioning is one of the basic ideas for designing efficient
algorithms, but on \NP-hard problems like the Steiner problem
straightforward application of the classical paradigms
for exploiting this idea rarely leads to empirically successful
algorithms. In this paper, we present a new approach which is based on
vertex separators. We show several contexts in which this
approach can be used profitably. Our approach is new in the sense that it
uses partitioning to design reduction methods. We introduce two such
methods; and show their impact empirically.
Export
BibTeX
@techreport{MPI-I-2001-1-006,
TITLE = {Partitioning techniques for the Steiner problem},
AUTHOR = {Polzin, Tobias and Vahdati, Siavash},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2001-1-006},
NUMBER = {MPI-I-2001-1-006},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2001},
DATE = {2001},
ABSTRACT = {Partitioning is one of the basic ideas for designing efficient algorithms, but on \NP-hard problems like the Steiner problem straightforward application of the classical paradigms for exploiting this idea rarely leads to empirically successful algorithms. In this paper, we present a new approach which is based on vertex separators. We show several contexts in which this approach can be used profitably. Our approach is new in the sense that it uses partitioning to design reduction methods. We introduce two such methods; and show their impact empirically.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Polzin, Tobias
%A Vahdati, Siavash
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
External Organizations
%T Partitioning techniques for the Steiner problem :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6D19-9
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2001-1-006
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2001
%P 21 p.
%X Partitioning is one of the basic ideas for designing efficient
algorithms, but on \NP-hard problems like the Steiner problem
straightforward application of the classical paradigms
for exploiting this idea rarely leads to empirically successful
algorithms. In this paper, we present a new approach which is based on
vertex separators. We show several contexts in which this
approach can be used profitably. Our approach is new in the sense that it
uses partitioning to design reduction methods. We introduce two such
methods; and show their impact empirically.
%B Research Report / Max-Planck-Institut für Informatik
On Steiner trees and minimum spanning trees in hypergraphs
T. Polzin and S. Vahdati
Technical Report, 2001c
T. Polzin and S. Vahdati
Technical Report, 2001c
Abstract
The state-of-the-art algorithms for geometric Steiner
problems use a two-phase approach based on full Steiner trees
(FSTs). The bottleneck of this approach is the second phase (FST
concatenation phase), in which an optimum Steiner tree is constructed
out of the FSTs generated in the first phase. The hitherto most
successful algorithm for this phase considers the FSTs as edges
of a hypergraph and is based on an LP-relaxation of the minimum spanning
tree in hypergraph (MSTH) problem. In this paper, we compare this
original and some new relaxations of this problem and show their
equivalence, and thereby refute a conjecture in the literature.
Since the second phase can also be formulated as a Steiner
problem in graphs, we clarify the relation of this MSTH-relaxation to
all classical relaxations of the Steiner problem.
Finally, we perform some experiments, both on the quality of the
relaxations and on FST-concatenation methods based on them,
leading to the surprising result that an algorithm of ours
which is designed for general
graphs is superior to the MSTH-approach.
Export
BibTeX
@techreport{MPI-I-2001-1-005,
TITLE = {On Steiner trees and minimum spanning trees in hypergraphs},
AUTHOR = {Polzin, Tobias and Vahdati, Siavash},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2001-1-005},
NUMBER = {MPI-I-2001-1-005},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2001},
DATE = {2001},
ABSTRACT = {The state-of-the-art algorithms for geometric Steiner problems use a two-phase approach based on full Steiner trees (FSTs). The bottleneck of this approach is the second phase (FST concatenation phase), in which an optimum Steiner tree is constructed out of the FSTs generated in the first phase. The hitherto most successful algorithm for this phase considers the FSTs as edges of a hypergraph and is based on an LP-relaxation of the minimum spanning tree in hypergraph (MSTH) problem. In this paper, we compare this original and some new relaxations of this problem and show their equivalence, and thereby refute a conjecture in the literature. Since the second phase can also be formulated as a Steiner problem in graphs, we clarify the relation of this MSTH-relaxation to all classical relaxations of the Steiner problem. Finally, we perform some experiments, both on the quality of the relaxations and on FST-concatenation methods based on them, leading to the surprising result that an algorithm of ours which is designed for general graphs is superior to the MSTH-approach.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Polzin, Tobias
%A Vahdati, Siavash
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
External Organizations
%T On Steiner trees and minimum spanning trees in hypergraphs :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6D1F-E
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2001-1-005
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2001
%P 15 p.
%X The state-of-the-art algorithms for geometric Steiner
problems use a two-phase approach based on full Steiner trees
(FSTs). The bottleneck of this approach is the second phase (FST
concatenation phase), in which an optimum Steiner tree is constructed
out of the FSTs generated in the first phase. The hitherto most
successful algorithm for this phase considers the FSTs as edges
of a hypergraph and is based on an LP-relaxation of the minimum spanning
tree in hypergraph (MSTH) problem. In this paper, we compare this
original and some new relaxations of this problem and show their
equivalence, and thereby refute a conjecture in the literature.
Since the second phase can also be formulated as a Steiner
problem in graphs, we clarify the relation of this MSTH-relaxation to
all classical relaxations of the Steiner problem.
Finally, we perform some experiments, both on the quality of the
relaxations and on FST-concatenation methods based on them,
leading to the surprising result that an algorithm of ours
which is designed for general
graphs is superior to the MSTH-approach.
%B Research Report / Max-Planck-Institut für Informatik
Implementation of planar Nef polyhedra
M. Seel
Technical Report, 2001
M. Seel
Technical Report, 2001
Abstract
A planar Nef polyhedron is any set that can be obtained from the open half-space by a finite number of set complement and set intersection operations. The set of Nef polyhedra is closed under the Boolean set operations. We describe a data structure that realizes two-dimensional Nef polyhedra and offers a large set of binary and unary set operations. The underlying set operations are realized by an efficient and complete algorithm for the overlay of two Nef polyhedra. The algorithm is efficient in the sense that its running time is bounded by the size of the inputs plus the size of the output times a logarithmic factor. The algorithm is complete in the sense that it can handle all inputs and requires no general position assumption. The second part of the algorithmic interface considers point location and ray shooting in planar subdivisions.
The implementation follows the generic programming paradigm in C++ and CGAL. Several concept interfaces are defined that allow the adaptation of the software by means of traits classes. The described project is part of the CGAL library version 2.3.
Export
BibTeX
@techreport{MPI-I-2001-1-003,
TITLE = {Implementation of planar Nef polyhedra},
AUTHOR = {Seel, Michael},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2001-1-003},
NUMBER = {MPI-I-2001-1-003},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2001},
DATE = {2001},
ABSTRACT = {A planar Nef polyhedron is any set that can be obtained from the open half-space by a finite number of set complement and set intersection operations. The set of Nef polyhedra is closed under the Boolean set operations. We describe a data structure that realizes two-dimensional Nef polyhedra and offers a large set of binary and unary set operations. The underlying set operations are realized by an efficient and complete algorithm for the overlay of two Nef polyhedra. The algorithm is efficient in the sense that its running time is bounded by the size of the inputs plus the size of the output times a logarithmic factor. The algorithm is complete in the sense that it can handle all inputs and requires no general position assumption. The second part of the algorithmic interface considers point location and ray shooting in planar subdivisions. The implementation follows the generic programming paradigm in C++ and CGAL. Several concept interfaces are defined that allow the adaptation of the software by means of traits classes. The described project is part of the CGAL library version 2.3.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Seel, Michael
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Implementation of planar Nef polyhedra :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6D25-D
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2001-1-003
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2001
%P 345 p.
%X A planar Nef polyhedron is any set that can be obtained from the open half-space by a finite number of set complement and set intersection operations. The set of Nef polyhedra is closed under the Boolean set operations. We describe a data structure that realizes two-dimensional Nef polyhedra and offers a large set of binary and unary set operations. The underlying set operations are realized by an efficient and complete algorithm for the overlay of two Nef polyhedra. The algorithm is efficient in the sense that its running time is bounded by the size of the inputs plus the size of the output times a logarithmic factor. The algorithm is complete in the sense that it can handle all inputs and requires no general position assumption. The second part of the algorithmic interface considers point location and ray shooting in planar subdivisions.
The implementation follows the generic programming paradigm in C++ and CGAL. Several concept interfaces are defined that allow the adaptation of the software by means of traits classes. The described project is part of the CGAL library version 2.3.
%B Research Report / Max-Planck-Institut für Informatik
Resolution-based decision procedures for the universal theory of some classes of distributive lattices with operators
V. Sofronie-Stokkermans
Technical Report, 2001
V. Sofronie-Stokkermans
Technical Report, 2001
Abstract
In this paper we establish a link between satisfiability of universal
sentences with respect to classes of distributive lattices with operators and
satisfiability with respect to certain classes of relational structures.
This justifies a method for structure-preserving translation to clause form
of universal (Horn) sentences in such classes of algebras.
We show that refinements of resolution yield decision procedures
for the universal (Horn) theory of some such classes.
In particular, we obtain exponential decision procedures
for the universal Horn theory of
(i) the class of all bounded distributive lattices with operators,
(ii) the class of all bounded distributive lattices with operators
satisfying a set of (generalized) residuation conditions,
and a doubly-exponential decision procedure for the universal Horn theory of
the class of all Heyting algebras.
Export
BibTeX
@techreport{Sofronie-Stokkermans2001,
TITLE = {Resolution-based decision procedures for the universal theory of some classes of distributive lattices with operators},
AUTHOR = {Sofronie-Stokkermans, Viorica},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2001-2-005},
NUMBER = {MPI-I-2001-2-005},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2001},
DATE = {2001},
ABSTRACT = {In this paper we establish a link between satisfiability of universal sentences with respect to classes of distributive lattices with operators and satisfiability with respect to certain classes of relational structures. This justifies a method for structure-preserving translation to clause form of universal (Horn) sentences in such classes of algebras. We show that refinements of resolution yield decision procedures for the universal (Horn) theory of some such classes. In particular, we obtain exponential decision procedures for the universal Horn theory of (i) the class of all bounded distributive lattices with operators, (ii) the class of all bounded distributive lattices with operators satisfying a set of (generalized) residuation conditions, and a doubly-exponential decision procedure for the universal Horn theory of the class of all Heyting algebras.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Sofronie-Stokkermans, Viorica
%+ Automation of Logic, MPI for Informatics, Max Planck Society
%T Resolution-based decision procedures for the universal theory of some classes of distributive lattices with operators :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6CB3-4
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2001-2-005
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2001
%P 41 p.
%X In this paper we establish a link between satisfiability of universal
sentences with respect to classes of distributive lattices with operators and
satisfiability with respect to certain classes of relational structures.
This justifies a method for structure-preserving translation to clause form
of universal (Horn) sentences in such classes of algebras.
We show that refinements of resolution yield decision procedures
for the universal (Horn) theory of some such classes.
In particular, we obtain exponential decision procedures
for the universal Horn theory of
(i) the class of all bounded distributive lattices with operators,
(ii) the class of all bounded distributive lattices with operators
satisfying a set of (generalized) residuation conditions,
and a doubly-exponential decision procedure for the universal Horn theory of
the class of all Heyting algebras.
%B Research Report / Max-Planck-Institut für Informatik
Superposition and chaining for totally ordered divisible abelian groups
U. Waldmann
Technical Report, 2001
U. Waldmann
Technical Report, 2001
Abstract
We present a calculus for first-order theorem proving in the
presence of the axioms of totally ordered divisible abelian groups.
The calculus extends previous superposition or chaining calculi for
divisible torsion-free abelian groups and dense total orderings
without endpoints. As its predecessors, it is refutationally
complete and requires neither explicit inferences with the theory
axioms nor variable overlaps. It thus offers an efficient way of
treating equalities and inequalities between additive terms over,
e.g., the rational numbers within a first-order theorem prover.
Export
BibTeX
@techreport{WaldmannMPI-I-2001-2-001,
TITLE = {Superposition and chaining for totally ordered divisible abelian groups},
AUTHOR = {Waldmann, Uwe},
LANGUAGE = {eng},
NUMBER = {MPI-I-2001-2-001},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2001},
DATE = {2001},
ABSTRACT = {We present a calculus for first-order theorem proving in the presence of the axioms of totally ordered divisible abelian groups. The calculus extends previous superposition or chaining calculi for divisible torsion-free abelian groups and dense total orderings without endpoints. As its predecessors, it is refutationally complete and requires neither explicit inferences with the theory axioms nor variable overlaps. It thus offers an efficient way of treating equalities and inequalities between additive terms over, e.g., the rational numbers within a first-order theorem prover.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Waldmann, Uwe
%+ Automation of Logic, MPI for Informatics, Max Planck Society
%T Superposition and chaining for totally ordered divisible abelian groups :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6CFF-D
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2001
%P 40 p.
%X We present a calculus for first-order theorem proving in the
presence of the axioms of totally ordered divisible abelian groups.
The calculus extends previous superposition or chaining calculi for
divisible torsion-free abelian groups and dense total orderings
without endpoints. As its predecessors, it is refutationally
complete and requires neither explicit inferences with the theory
axioms nor variable overlaps. It thus offers an efficient way of
treating equalities and inequalities between additive terms over,
e.g., the rational numbers within a first-order theorem prover.
%B Research Report / Max-Planck-Institut für Informatik
2000
A branch and cut algorithm for the optimal solution of the side-chain placement problem
E. Althaus, O. Kohlbacher, H.-P. Lenhof and P. Müller
Technical Report, 2000
E. Althaus, O. Kohlbacher, H.-P. Lenhof and P. Müller
Technical Report, 2000
Abstract
Rigid-body docking approaches are not sufficient to predict the structure of a
protein complex from the unbound (native) structures of the two proteins.
Accounting for side chain flexibility is an important step towards fully
flexible protein docking. This work describes an approach that allows
conformational flexibility for the side chains while keeping the protein
backbone rigid. Starting from candidates created by a rigid-docking
algorithm, we demangle the side chains of the docking site, thus creating
reasonable approximations of the true complex structure. These structures are
ranked with respect to the binding free energy. We present two new techniques
for side chain demangling. Both approaches are based on a discrete
representation of the side chain conformational space by the use of a rotamer
library. This leads to a combinatorial optimization problem. For the solution
of this problem we propose a fast heuristic approach and an exact, albeit
slower, method that uses branch-and-cut techniques. As a test set we use the
unbound structures of three proteases and the corresponding protein
inhibitors. For each of the examples, the highest-ranking conformation
produced was a good approximation of the true complex structure.
Export
BibTeX
@techreport{AlthausKohlbacherLenhofMuller2000,
TITLE = {A branch and cut algorithm for the optimal solution of the side-chain placement problem},
AUTHOR = {Althaus, Ernst and Kohlbacher, Oliver and Lenhof, Hans-Peter and M{\"u}ller, Peter},
LANGUAGE = {eng},
NUMBER = {MPI-I-2000-1-001},
YEAR = {2000},
DATE = {2000},
ABSTRACT = {Rigid-body docking approaches are not sufficient to predict the structure of a protein complex from the unbound (native) structures of the two proteins. Accounting for side chain flexibility is an important step towards fully flexible protein docking. This work describes an approach that allows conformational flexibility for the side chains while keeping the protein backbone rigid. Starting from candidates created by a rigid-docking algorithm, we demangle the side chains of the docking site, thus creating reasonable approximations of the true complex structure. These structures are ranked with respect to the binding free energy. We present two new techniques for side chain demangling. Both approaches are based on a discrete representation of the side chain conformational space by the use of a rotamer library. This leads to a combinatorial optimization problem. For the solution of this problem we propose a fast heuristic approach and an exact, albeit slower, method that uses branch-and-cut techniques. As a test set we use the unbound structures of three proteases and the corresponding protein inhibitors. For each of the examples, the highest-ranking conformation produced was a good approximation of the true complex structure.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Althaus, Ernst
%A Kohlbacher, Oliver
%A Lenhof, Hans-Peter
%A Müller, Peter
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T A branch and cut algorithm for the optimal solution of the side-chain placement problem :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-A866-0
%D 2000
%P 26 p.
%X Rigid-body docking approaches are not sufficient to predict the structure of a
protein complex from the unbound (native) structures of the two proteins.
Accounting for side chain flexibility is an important step towards fully
flexible protein docking. This work describes an approach that allows
conformational flexibility for the side chains while keeping the protein
backbone rigid. Starting from candidates created by a rigid-docking
algorithm, we demangle the side chains of the docking site, thus creating
reasonable approximations of the true complex structure. These structures are
ranked with respect to the binding free energy. We present two new techniques
for side chain demangling. Both approaches are based on a discrete
representation of the side chain conformational space by the use of a rotamer
library. This leads to a combinatorial optimization problem. For the solution
of this problem we propose a fast heuristic approach and an exact, albeit
slower, method that uses branch-and-cut techniques. As a test set we use the
unbound structures of three proteases and the corresponding protein
inhibitors. For each of the examples, the highest-ranking conformation
produced was a good approximation of the true complex structure.
%B Research Report / Max-Planck-Institut für Informatik
A powerful heuristic for telephone gossiping
R. Beier and J. Sibeyn
Technical Report, 2000
R. Beier and J. Sibeyn
Technical Report, 2000
Abstract
A refined heuristic for computing schedules for gossiping in the
telephone model is presented. The heuristic is fast: for a network
with n nodes and m edges, requiring R rounds for gossiping, the
running time is O(R n log(n) m) for all tested
classes of graphs. This moderate time consumption allows us to compute
gossiping schedules for networks with more than 10,000 PUs and
100,000 connections. The heuristic is good: in practice the computed
schedules never exceed the optimum by more than a few rounds. The
heuristic is versatile: it can also be used for broadcasting and
more general information dispersion patterns. It can handle both the
unit-cost and the linear-cost model.
Actually, the heuristic is so good that for CCC, shuffle-exchange,
butterfly, de Bruijn, star and pancake networks the constructed
gossiping schedules are better than the best theoretically derived
ones. For example, for gossiping on a shuffle-exchange network with
2^{13} PUs, the former upper bound was 49 rounds, while our
heuristic finds a schedule requiring 31 rounds. Also for broadcasting
the heuristic improves on many formerly known results.
A second heuristic works even better for CCC, butterfly, star and
pancake networks. For example, with this heuristic we found that
gossiping on a pancake network with 7! PUs can be performed in 15
rounds, 2 fewer than achieved by the best theoretical construction.
This second heuristic is less versatile than the first, but by
refined search techniques it can tackle even larger problems, the
main limitation being the storage capacity. Another advantage is that
the constructed schedules can be represented concisely.
Export
BibTeX
@techreport{MPI-I-2000-1-002,
TITLE = {A powerful heuristic for telephone gossiping},
AUTHOR = {Beier, Ren{\'e} and Sibeyn, Jop},
LANGUAGE = {eng},
NUMBER = {MPI-I-2000-1-002},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2000},
DATE = {2000},
ABSTRACT = {A refined heuristic for computing schedules for gossiping in the telephone model is presented. The heuristic is fast: for a network with n nodes and m edges, requiring R rounds for gossiping, the running time is O(R n log(n) m) for all tested classes of graphs. This moderate time consumption allows us to compute gossiping schedules for networks with more than 10,000 PUs and 100,000 connections. The heuristic is good: in practice the computed schedules never exceed the optimum by more than a few rounds. The heuristic is versatile: it can also be used for broadcasting and more general information dispersion patterns. It can handle both the unit-cost and the linear-cost model. Actually, the heuristic is so good that for CCC, shuffle-exchange, butterfly, de Bruijn, star and pancake networks the constructed gossiping schedules are better than the best theoretically derived ones. For example, for gossiping on a shuffle-exchange network with 2^{13} PUs, the former upper bound was 49 rounds, while our heuristic finds a schedule requiring 31 rounds. Also for broadcasting the heuristic improves on many formerly known results. A second heuristic works even better for CCC, butterfly, star and pancake networks. For example, with this heuristic we found that gossiping on a pancake network with 7! PUs can be performed in 15 rounds, 2 fewer than achieved by the best theoretical construction. This second heuristic is less versatile than the first, but by refined search techniques it can tackle even larger problems, the main limitation being the storage capacity. Another advantage is that the constructed schedules can be represented concisely.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Beier, René
%A Sibeyn, Jop
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T A powerful heuristic for telephone gossiping :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6F2E-5
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2000
%P 23 p.
%X A refined heuristic for computing schedules for gossiping in the
telephone model is presented. The heuristic is fast: for a network
with n nodes and m edges, requiring R rounds for gossiping, the
running time is O(R n log(n) m) for all tested
classes of graphs. This moderate time consumption allows us to compute
gossiping schedules for networks with more than 10,000 PUs and
100,000 connections. The heuristic is good: in practice the computed
schedules never exceed the optimum by more than a few rounds. The
heuristic is versatile: it can also be used for broadcasting and
more general information dispersion patterns. It can handle both the
unit-cost and the linear-cost model.
Actually, the heuristic is so good that for CCC, shuffle-exchange,
butterfly, de Bruijn, star and pancake networks the constructed
gossiping schedules are better than the best theoretically derived
ones. For example, for gossiping on a shuffle-exchange network with
2^{13} PUs, the former upper bound was 49 rounds, while our
heuristic finds a schedule requiring 31 rounds. Also for broadcasting
the heuristic improves on many formerly known results.
A second heuristic works even better for CCC, butterfly, star and
pancake networks. For example, with this heuristic we found that
gossiping on a pancake network with 7! PUs can be performed in 15
rounds, 2 fewer than achieved by the best theoretical construction.
This second heuristic is less versatile than the first, but by
refined search techniques it can tackle even larger problems, the
main limitation being the storage capacity. Another advantage is that
the constructed schedules can be represented concisely.
%B Research Report / Max-Planck-Institut für Informatik
Hyperbolic Hausdorff distance for medial axis transform
S. W. Choi and H.-P. Seidel
Technical Report, 2000
S. W. Choi and H.-P. Seidel
Technical Report, 2000
Abstract
Although the Hausdorff distance is a popular device
to measure the differences between sets,
it is not natural for some specific classes of sets,
especially for the medial axis transform
which is defined as the set of all pairs
of the centers and the radii of the maximal balls
contained in another set.
In spite of its many advantages and possible applications,
the medial axis transform has one great weakness,
namely its instability under the Hausdorff distance
when the boundary of the original set is perturbed.
Though many attempts have been made for the resolution of this phenomenon,
most of them are heuristic in nature
and lack precise error analysis.
Export
BibTeX
@techreport{ChoiSeidel2000,
TITLE = {Hyperbolic Hausdorff distance for medial axis transform},
AUTHOR = {Choi, Sung Woo and Seidel, Hans-Peter},
LANGUAGE = {eng},
NUMBER = {MPI-I-2000-4-003},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2000},
DATE = {2000},
ABSTRACT = {Although the Hausdorff distance is a popular device to measure the differences between sets, it is not natural for some specific classes of sets, especially for the medial axis transform which is defined as the set of all pairs of the centers and the radii of the maximal balls contained in another set. In spite of its many advantages and possible applications, the medial axis transform has one great weakness, namely its instability under the Hausdorff distance when the boundary of the original set is perturbed. Though many attempts have been made for the resolution of this phenomenon, most of them are heuristic in nature and lack precise error analysis.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Choi, Sung Woo
%A Seidel, Hans-Peter
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T Hyperbolic Hausdorff distance for medial axis transform :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6D4A-A
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2000
%P 30 p.
%X Although the Hausdorff distance is a popular device
to measure the differences between sets,
it is not natural for some specific classes of sets,
especially for the medial axis transform
which is defined as the set of all pairs
of the centers and the radii of the maximal balls
contained in another set.
In spite of its many advantages and possible applications,
the medial axis transform has one great weakness,
namely its instability under the Hausdorff distance
when the boundary of the original set is perturbed.
Though many attempts have been made for the resolution of this phenomenon,
most of them are heuristic in nature
and lack precise error analysis.
%B Research Report / Max-Planck-Institut für Informatik
Low-contention depth-first scheduling of parallel computations with synchronization variables
P. Fatourou
Technical Report, 2000
P. Fatourou
Technical Report, 2000
Abstract
In this paper, we present a randomized, online, space-efficient
algorithm for the general class of programs with synchronization
variables (such programs are produced by parallel programming
languages, like, e.g., Cool, ID, Sisal, Mul-T, OLDEN and Jade).
The algorithm achieves good locality and low scheduling overheads
for this general class of computations, by combining work-stealing
and depth-first scheduling.
More specifically, given a computation with work $T_1$,
depth $T_\infty$ and $\sigma$ synchronizations, whose
execution requires space $S_1$ on a single-processor
computer, our algorithm achieves expected space
complexity at most $S_1 + O(PT_\infty \log (PT_\infty))$
and runs in an expected number of
$O(T_1/P + \sigma \log (PT_\infty)/P + T_\infty \log (PT_\infty))$
timesteps on a shared-memory, parallel machine with $P$ processors.
Moreover, for any $\varepsilon > 0$, the space complexity of our
algorithm is at most $S_1 + O(P(T_\infty + \ln (1/\varepsilon))
\log (P(T_\infty + \ln(P(T_\infty + \ln (1/\varepsilon))/\varepsilon))))$
with probability at least $1-\varepsilon$. Thus, even for values
of $\varepsilon$ as small as $e^{-T_\infty}$, the space complexity
of our algorithm is at most $S_1 + O(PT_\infty \log(PT_\infty))$,
with probability at least $1-e^{-T_\infty}$. The algorithm achieves
good locality and low scheduling overheads by automatically
increasing the granularity of the work scheduled on each
processor.
Our results combine and extend previous algorithms and
analysis techniques (published by Blelloch et al. [6]
and by Narlikar [26]). Our algorithm not only exhibits the
same good space complexity for the general class of programs
with synchronization variables as its deterministic analog
presented in [6], but it also achieves good locality and
low scheduling overhead as the algorithm presented in [26],
which however performs well only for the more restricted class
of nested parallel computations.
Export
BibTeX
@techreport{MPI-I-2000-1-003,
TITLE = {Low-contention depth-first scheduling of parallel computations with synchronization variables},
AUTHOR = {Fatourou, Panagiota},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2000-1-003},
NUMBER = {MPI-I-2000-1-003},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2000},
DATE = {2000},
ABSTRACT = {In this paper, we present a randomized, online, space-efficient algorithm for the general class of programs with synchronization variables (such programs are produced by parallel programming languages, like, e.g., Cool, ID, Sisal, Mul-T, OLDEN and Jade). The algorithm achieves good locality and low scheduling overheads for this general class of computations, by combining work-stealing and depth-first scheduling. More specifically, given a computation with work $T_1$, depth $T_\infty$ and $\sigma$ synchronizations, whose execution requires space $S_1$ on a single-processor computer, our algorithm achieves expected space complexity at most $S_1 + O(PT_\infty \log (PT_\infty))$ and runs in an expected number of $O(T_1/P + \sigma \log (PT_\infty)/P + T_\infty \log (PT_\infty))$ timesteps on a shared-memory, parallel machine with $P$ processors. Moreover, for any $\varepsilon > 0$, the space complexity of our algorithm is at most $S_1 + O(P(T_\infty + \ln (1/\varepsilon)) \log (P(T_\infty + \ln(P(T_\infty + \ln (1/\varepsilon))/\varepsilon))))$ with probability at least $1-\varepsilon$. Thus, even for values of $\varepsilon$ as small as $e^{-T_\infty}$, the space complexity of our algorithm is at most $S_1 + O(PT_\infty \log(PT_\infty))$, with probability at least $1-e^{-T_\infty}$. The algorithm achieves good locality and low scheduling overheads by automatically increasing the granularity of the work scheduled on each processor. Our results combine and extend previous algorithms and analysis techniques (published by Blelloch et al. [6] and by Narlikar [26]). Our algorithm not only exhibits the same good space complexity for the general class of programs with synchronization variables as its deterministic analog presented in [6], but it also achieves good locality and low scheduling overhead as the algorithm presented in [26], which however performs well only for the more restricted class of nested parallel computations.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Fatourou, Panagiota
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Low-contention depth-first scheduling of parallel computations with synchronization variables :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6F2B-B
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2000-1-003
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2000
%P 56 p.
%X In this paper, we present a randomized, online, space-efficient
algorithm for the general class of programs with synchronization
variables (such programs are produced by parallel programming
languages, like, e.g., Cool, ID, Sisal, Mul-T, OLDEN and Jade).
The algorithm achieves good locality and low scheduling overheads
for this general class of computations, by combining work-stealing
and depth-first scheduling.
More specifically, given a computation with work $T_1$,
depth $T_\infty$ and $\sigma$ synchronizations, whose
execution requires space $S_1$ on a single-processor
computer, our algorithm achieves expected space
complexity at most $S_1 + O(PT_\infty \log (PT_\infty))$
and runs in an expected number of
$O(T_1/P + \sigma \log (PT_\infty)/P + T_\infty \log (PT_\infty))$
timesteps on a shared-memory, parallel machine with $P$ processors.
Moreover, for any $\varepsilon > 0$, the space complexity of our
algorithm is at most $S_1 + O(P(T_\infty + \ln (1/\varepsilon))
\log (P(T_\infty + \ln(P(T_\infty + \ln (1/\varepsilon))/\varepsilon))))$
with probability at least $1-\varepsilon$. Thus, even for values
of $\varepsilon$ as small as $e^{-T_\infty}$, the space complexity
of our algorithm is at most $S_1 + O(PT_\infty \log(PT_\infty))$,
with probability at least $1-e^{-T_\infty}$. The algorithm achieves
good locality and low scheduling overheads by automatically
increasing the granularity of the work scheduled on each
processor.
Our results combine and extend previous algorithms and
analysis techniques (published by Blelloch et al. [6]
and by Narlikar [26]). Our algorithm not only exhibits the
same good space complexity for the general class of programs
with synchronization variables as its deterministic analog
presented in [6], but it also achieves good locality and
low scheduling overhead as the algorithm presented in [26],
which however performs well only for the more restricted class
of nested parallel computations.
%B Research Report / Max-Planck-Institut für Informatik
Bump map shadows for OpenGL rendering
J. Kautz, W. Heidrich and K. Daubert
Technical Report, 2000
J. Kautz, W. Heidrich and K. Daubert
Technical Report, 2000
Export
BibTeX
@techreport{KautzHeidrichDaubert2000,
TITLE = {Bump map shadows for {OpenGL} rendering},
AUTHOR = {Kautz, Jan and Heidrich, Wolfgang and Daubert, Katja},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2000-4-001},
NUMBER = {MPI-I-2000-4-001},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2000},
DATE = {2000},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Kautz, Jan
%A Heidrich, Wolfgang
%A Daubert, Katja
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T Bump map shadows for OpenGL rendering :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6D50-9
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/2000-4-001
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2000
%P 18 p.
%B Research Report / Max-Planck-Institut für Informatik
Geometric modeling based on polygonal meshes
L. P. Kobbelt, S. Bischoff, K. Kähler, R. Schneider, M. Botsch, C. Rössl and J. Vorsatz
Technical Report, 2000
L. P. Kobbelt, S. Bischoff, K. Kähler, R. Schneider, M. Botsch, C. Rössl and J. Vorsatz
Technical Report, 2000
Abstract
While traditional computer aided design (CAD) is mainly based on
piecewise polynomial surface representations, the recent advances in
the efficient handling of polygonal meshes have made available a set
of powerful techniques which enable sophisticated modeling operations
on freeform shapes. In this tutorial we are going to give a detailed
introduction to the various techniques that have been proposed over
the last years. Those techniques address important issues such as
surface generation from discrete samples (e.g. laser scans) or from
control meshes (ab initio design); complexity control by adjusting the
level of detail of a given 3D-model to the current application or to
the available hardware resources; advanced mesh optimization
techniques that are based on the numerical simulation of physical
material (e.g. membranes or thin plates) and finally the generation
and modification of hierarchical representations which enable
sophisticated multiresolution modeling functionality.
Export
BibTeX
@techreport{BischoffKahlerSchneiderBotschRosslVorsatz2000,
TITLE = {Geometric modeling based on polygonal meshes},
AUTHOR = {Kobbelt, Leif P. and Bischoff, Stephan and K{\"a}hler, Kolja and Schneider, Robert and Botsch, Mario and R{\"o}ssl, Christian and Vorsatz, Jens},
LANGUAGE = {eng},
NUMBER = {MPI-I-2000-4-002},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2000},
DATE = {2000},
ABSTRACT = {While traditional computer aided design (CAD) is mainly based on piecewise polynomial surface representations, the recent advances in the efficient handling of polygonal meshes have made available a set of powerful techniques which enable sophisticated modeling operations on freeform shapes. In this tutorial we are going to give a detailed introduction to the various techniques that have been proposed over the last years. Those techniques address important issues such as surface generation from discrete samples (e.g. laser scans) or from control meshes (ab initio design); complexity control by adjusting the level of detail of a given 3D-model to the current application or to the available hardware resources; advanced mesh optimization techniques that are based on the numerical simulation of physical material (e.g. membranes or thin plates) and finally the generation and modification of hierarchical representations which enable sophisticated multiresolution modeling functionality.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Kobbelt, Leif P.
%A Bischoff, Stephan
%A Kähler, Kolja
%A Schneider, Robert
%A Botsch, Mario
%A Rössl, Christian
%A Vorsatz, Jens
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T Geometric modeling based on polygonal meshes :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6D4D-4
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2000
%P 52 p.
%X While traditional computer aided design (CAD) is mainly based on
piecewise polynomial surface representations, the recent advances in
the efficient handling of polygonal meshes have made available a set
of powerful techniques which enable sophisticated modeling operations
on freeform shapes. In this tutorial we are going to give a detailed
introduction to the various techniques that have been proposed over
the last years. Those techniques address important issues such as
surface generation from discrete samples (e.g. laser scans) or from
control meshes (ab initio design); complexity control by adjusting the
level of detail of a given 3D-model to the current application or to
the available hardware resources; advanced mesh optimization
techniques that are based on the numerical simulation of physical
material (e.g. membranes or thin plates) and finally the generation
and modification of hierarchical representations which enable
sophisticated multiresolution modeling functionality.
%B Research Report / Max-Planck-Institut für Informatik
A generalized and improved constructive separation bound for real algebraic expressions
K. Mehlhorn and S. Schirra
Technical Report, 2000
K. Mehlhorn and S. Schirra
Technical Report, 2000
Abstract
We prove a separation bound for a large class of algebraic expressions
specified by expression dags.
The bound applies to expressions whose leaves are integers
and whose internal nodes are additions, subtractions, multiplications,
divisions, $k$-th root operations for integral $k$, and taking roots of
polynomials whose coefficients are given by the values of subexpressions.
The (logarithm of the)
new bound depends linearly on the algebraic degree of the expression.
Previous bounds applied to a smaller class of expressions and did not
guarantee linear dependency.
Export
BibTeX
@techreport{MPI-I-2000-1-004,
TITLE = {A generalized and improved constructive separation bound for real algebraic expressions},
AUTHOR = {Mehlhorn, Kurt and Schirra, Stefan},
LANGUAGE = {eng},
NUMBER = {MPI-I-2000-1-004},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2000},
DATE = {2000},
ABSTRACT = {We prove a separation bound for a large class of algebraic expressions specified by expression dags. The bound applies to expressions whose leaves are integers and whose internal nodes are additions, subtractions, multiplications, divisions, $k$-th root operations for integral $k$, and taking roots of polynomials whose coefficients are given by the values of subexpressions. The (logarithm of the) new bound depends linearly on the algebraic degree of the expression. Previous bounds applied to a smaller class of expressions and did not guarantee linear dependency.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Mehlhorn, Kurt
%A Schirra, Stefan
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T A generalized and improved constructive separation bound for real algebraic expressions :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6D56-E
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2000
%P 12 p.
%X We prove a separation bound for a large class of algebraic expressions
specified by expression dags.
The bound applies to expressions whose leaves are integers
and whose internal nodes are additions, subtractions, multiplications,
divisions, $k$-th root operations for integral $k$, and taking roots of
polynomials whose coefficients are given by the values of subexpressions.
The (logarithm of the)
new bound depends linearly on the algebraic degree of the expression.
Previous bounds applied to a smaller class of expressions and did not
guarantee linear dependency.
%B Research Report / Max-Planck-Institut für Informatik
Infimaximal frames: a technique for making lines look like segments
M. Seel and K. Mehlhorn
Technical Report, 2000
M. Seel and K. Mehlhorn
Technical Report, 2000
Abstract
Many geometric algorithms that are usually formulated for points and
segments generalize easily to inputs also containing rays and lines.
The sweep algorithm for segment intersection is a prototypical
example. Implementations of such algorithms do not, in general,
extend easily. For example, segment endpoints cause events in sweep
line algorithms, but lines have no endpoints. We describe a general
technique, which we call infimaximal frames, for extending
implementations to inputs also containing rays and lines. The
technique can also be used to extend implementations of planar
subdivisions to subdivisions with many unbounded faces. We have used
the technique successfully in generalizing a sweep algorithm designed
for segments to rays and lines and also in an implementation of planar
Nef polyhedra.
Our implementation is based on concepts of generic programming in C++
and the geometric data types provided by the C++ Computational
Geometry Algorithms Library (CGAL).
Export
BibTeX
@techreport{MPI-I-2000-1-005,
TITLE = {Infimaximal frames: a technique for making lines look like segments},
AUTHOR = {Seel, Michael and Mehlhorn, Kurt},
LANGUAGE = {eng},
NUMBER = {MPI-I-2000-1-005},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2000},
DATE = {2000},
ABSTRACT = {Many geometric algorithms that are usually formulated for points and segments generalize easily to inputs also containing rays and lines. The sweep algorithm for segment intersection is a prototypical example. Implementations of such algorithms do not, in general, extend easily. For example, segment endpoints cause events in sweep line algorithms, but lines have no endpoints. We describe a general technique, which we call infimaximal frames, for extending implementations to inputs also containing rays and lines. The technique can also be used to extend implementations of planar subdivisions to subdivisions with many unbounded faces. We have used the technique successfully in generalizing a sweep algorithm designed for segments to rays and lines and also in an implementation of planar Nef polyhedra. Our implementation is based on concepts of generic programming in C++ and the geometric data types provided by the C++ Computational Geometry Algorithms Library (CGAL).},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Seel, Michael
%A Mehlhorn, Kurt
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Infimaximal frames: a technique for making lines look like segments :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6D53-3
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 2000
%P 16 p.
%X Many geometric algorithms that are usually formulated for points and
segments generalize easily to inputs also containing rays and lines.
The sweep algorithm for segment intersection is a prototypical
example. Implementations of such algorithms do not, in general,
extend easily. For example, segment endpoints cause events in sweep
line algorithms, but lines have no endpoints. We describe a general
technique, which we call infimaximal frames, for extending
implementations to inputs also containing rays and lines. The
technique can also be used to extend implementations of planar
subdivisions to subdivisions with many unbounded faces. We have used
the technique successfully in generalizing a sweep algorithm designed
for segments to rays and lines and also in an implementation of planar
Nef polyhedra.
Our implementation is based on concepts of generic programming in C++
and the geometric data types provided by the C++ Computational
Geometry Algorithms Library (CGAL).
%B Research Report / Max-Planck-Institut für Informatik
1999
BALL: Biochemical Algorithms Library
N. Boghossian, O. Kohlbacher and H.-P. Lenhof
Technical Report, 1999
N. Boghossian, O. Kohlbacher and H.-P. Lenhof
Technical Report, 1999
Abstract
In the next century, virtual laboratories will play a key role in
biotechnology. Computer experiments will not only replace
time-consuming and expensive real-world experiments, but they will also
provide insights that cannot be obtained using ``wet'' experiments.
The field that deals with the modeling of atoms, molecules, and their
reactions is called Molecular Modeling. The advent of
Life Sciences gave rise to numerous new developments in this
area. However, the implementation of new simulation tools is extremely
time-consuming. This is mainly due to the large amount of
supporting code ({\eg} for data import/export, visualization, and so on)
that is required in addition to the code necessary to implement the new idea. The
only way to reduce the development time is to reuse reliable code,
preferably using object-oriented approaches. We have designed and
implemented {\Ball}, the first object-oriented application framework for rapid
prototyping in Molecular Modeling. By the use
of the composite design pattern and polymorphism we were able to model
the multitude of complex biochemical concepts in a well-structured and
comprehensible class hierarchy, the {\Ball} kernel classes. The
isomorphism between the biochemical structures and the kernel classes
leads to an intuitive interface. Since {\Ball} was designed for rapid software
prototyping, ease of use and flexibility were our principal design
goals. Besides the kernel classes, {\Ball} provides fundamental
components for import/export of data in various file formats,
Molecular Mechanics simulations, three-dimensional visualization, and
more complex ones like a numerical solver for the Poisson-Boltzmann
equation. The usefulness of {\Ball} was shown by the
implementation of an algorithm that checks proteins for
similarity. Instead of the five months that an earlier implementation
took, we were able to implement it within a day using {\Ball}.
Export
BibTeX
@techreport{BoghossianKohlbacherLenhof1999,
TITLE = {{BALL}: Biochemical Algorithms Library},
AUTHOR = {Boghossian, Nicolas and Kohlbacher, Oliver and Lenhof, Hans-Peter},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1999-1-002},
NUMBER = {MPI-I-1999-1-002},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1999},
DATE = {1999},
ABSTRACT = {In the next century, virtual laboratories will play a key role in biotechnology. Computer experiments will not only replace time-consuming and expensive real-world experiments, but they will also provide insights that cannot be obtained using ``wet'' experiments. The field that deals with the modeling of atoms, molecules, and their reactions is called Molecular Modeling. The advent of Life Sciences gave rise to numerous new developments in this area. However, the implementation of new simulation tools is extremely time-consuming. This is mainly due to the large amount of supporting code ({\eg} for data import/export, visualization, and so on) that is required in addition to the code necessary to implement the new idea. The only way to reduce the development time is to reuse reliable code, preferably using object-oriented approaches. We have designed and implemented {\Ball}, the first object-oriented application framework for rapid prototyping in Molecular Modeling. By the use of the composite design pattern and polymorphism we were able to model the multitude of complex biochemical concepts in a well-structured and comprehensible class hierarchy, the {\Ball} kernel classes. The isomorphism between the biochemical structures and the kernel classes leads to an intuitive interface. Since {\Ball} was designed for rapid software prototyping, ease of use and flexibility were our principal design goals. Besides the kernel classes, {\Ball} provides fundamental components for import/export of data in various file formats, Molecular Mechanics simulations, three-dimensional visualization, and more complex ones like a numerical solver for the Poisson-Boltzmann equation. The usefulness of {\Ball} was shown by the implementation of an algorithm that checks proteins for similarity. Instead of the five months that an earlier implementation took, we were able to implement it within a day using {\Ball}.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Boghossian, Nicolas
%A Kohlbacher, Oliver
%A Lenhof, Hans-Peter
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T BALL: Biochemical Algorithms Library :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6F98-8
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1999-1-002
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1999
%P 20 p.
%X In the next century, virtual laboratories will play a key role in
biotechnology. Computer experiments will not only replace
time-consuming and expensive real-world experiments, but they will also
provide insights that cannot be obtained using ``wet'' experiments.
The field that deals with the modeling of atoms, molecules, and their
reactions is called Molecular Modeling. The advent of
Life Sciences gave rise to numerous new developments in this
area. However, the implementation of new simulation tools is extremely
time-consuming. This is mainly due to the large amount of
supporting code ({\eg} for data import/export, visualization, and so on)
that is required in addition to the code necessary to implement the new idea. The
only way to reduce the development time is to reuse reliable code,
preferably using object-oriented approaches. We have designed and
implemented {\Ball}, the first object-oriented application framework for rapid
prototyping in Molecular Modeling. By the use
of the composite design pattern and polymorphism we were able to model
the multitude of complex biochemical concepts in a well-structured and
comprehensible class hierarchy, the {\Ball} kernel classes. The
isomorphism between the biochemical structures and the kernel classes
leads to an intuitive interface. Since {\Ball} was designed for rapid software
prototyping, ease of use and flexibility were our principal design
goals. Besides the kernel classes, {\Ball} provides fundamental
components for import/export of data in various file formats,
Molecular Mechanics simulations, three-dimensional visualization, and
more complex ones like a numerical solver for the Poisson-Boltzmann
equation. The usefulness of {\Ball} was shown by the
implementation of an algorithm that checks proteins for
similarity. Instead of the five months that an earlier implementation
took, we were able to implement it within a day using {\Ball}.
%B Research Report / Max-Planck-Institut für Informatik
A simple way to recognize a correct Voronoi diagram of line segments
C. Burnikel, K. Mehlhorn and M. Seel
Technical Report, 1999
C. Burnikel, K. Mehlhorn and M. Seel
Technical Report, 1999
Abstract
Writing a program for computing the Voronoi diagram of line segments
is a complex task. Not only is there an abundance of geometric cases
that have to be considered, but the problem is also numerically
difficult. Therefore it is very easy to make subtle programming errors.
In this paper we present a procedure that for a given set of sites $S$
and a candidate graph $G$ rigorously checks that $G$ is the correct
Voronoi diagram of line segments for $S$. Our procedure is particularly
efficient and simple to implement.
Export
BibTeX
@techreport{MPI-I-1999-1-007,
TITLE = {A simple way to recognize a correct Voronoi diagram of line segments},
AUTHOR = {Burnikel, Christoph and Mehlhorn, Kurt and Seel, Michael},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1999-1-007},
NUMBER = {MPI-I-1999-1-007},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1999},
DATE = {1999},
ABSTRACT = {Writing a program for computing the Voronoi diagram of line segments is a complex task. Not only is there an abundance of geometric cases that have to be considered, but the problem is also numerically difficult. Therefore it is very easy to make subtle programming errors. In this paper we present a procedure that for a given set of sites $S$ and a candidate graph $G$ rigorously checks that $G$ is the correct Voronoi diagram of line segments for $S$. Our procedure is particularly efficient and simple to implement.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Burnikel, Christoph
%A Mehlhorn, Kurt
%A Seel, Michael
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T A simple way to recognize a correct Voronoi diagram of line segments :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6F7E-4
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1999-1-007
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1999
%P 11 p.
%X Writing a program for computing the Voronoi diagram of line segments
is a complex task. Not only is there an abundance of geometric cases
that have to be considered, but the problem is also numerically
difficult. Therefore it is very easy to make subtle programming errors.
In this paper we present a procedure that for a given set of sites $S$
and a candidate graph $G$ rigorously checks that $G$ is the correct
Voronoi diagram of line segments for $S$. Our procedure is particularly
efficient and simple to implement.
%B Research Report / Max-Planck-Institut für Informatik
A theoretical and experimental study on the construction of suffix arrays in external memory
A. Crauser and P. Ferragina
Technical Report, 1999
A. Crauser and P. Ferragina
Technical Report, 1999
Abstract
The construction of full-text indexes on very large text collections
is nowadays a hot problem. The suffix array [Manber-Myers,~1993] is
one of the most attractive full-text indexing data structures due to
its simplicity, space efficiency and powerful/fast search operations
supported. In this paper we analyze, both theoretically and
experimentally, the I/O-complexity and the working space of six
algorithms for constructing large suffix arrays. Three of them are
the state-of-the-art, the other three algorithms are our new
proposals. We perform a set of experiments based on three different
data sets (English texts, Amino-acid sequences and random texts) and
give a precise hierarchy of these algorithms according to their
working-space vs. construction-time tradeoff. Given the current
trends in model design~\cite{Farach-et-al,Vitter} and disk
technology~\cite{dahlin,Ruemmler-Wilkes}, we pay particular
attention to differentiating between ``random'' and ``contiguous''
disk accesses, in order to reasonably explain some practical
I/O-phenomena which are related to the experimental behavior of
these algorithms and that would be otherwise meaningless in the
light of other simpler external-memory models.
To the best of our knowledge, this is the first study that provides
a wide spectrum of possible approaches to the construction of suffix
arrays in external memory, and thus it should be helpful to anyone
who is interested in building full-text indexes on very large text
collections.
Finally, we conclude our paper by addressing two other issues. The
former concerns the problem of building word-indexes; we show
that our results can be successfully applied to this case too,
without any loss in efficiency and without compromising the
simplicity of programming, so as to achieve a uniform, simple and
efficient approach to both indexing models. The latter issue
is related to the intriguing and apparently counterintuitive
``contradiction'' between the effective practical performance of the
well-known Baeza-Yates-Gonnet-Snider algorithm~\cite{book-info},
verified in our experiments, and its unappealing (i.e., cubic)
worst-case behavior. We devise a new external-memory algorithm that
follows the basic philosophy underlying that algorithm but in a
significantly different manner, thus resulting in a novel approach
which combines good worst-case bounds with efficient practical
performance.
Export
BibTeX
@techreport{CrauserFerragina99,
TITLE = {A theoretical and experimental study on the construction of suffix arrays in external memory},
AUTHOR = {Crauser, Andreas and Ferragina, Paolo},
LANGUAGE = {eng},
NUMBER = {MPI-I-1999-1-001},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1999},
DATE = {1999},
ABSTRACT = {The construction of full-text indexes on very large text collections is nowadays a hot problem. The suffix array [Manber-Myers,~1993] is one of the most attractive full-text indexing data structures due to its simplicity, space efficiency and powerful/fast search operations supported. In this paper we analyze, both theoretically and experimentally, the I/O-complexity and the working space of six algorithms for constructing large suffix arrays. Three of them are the state-of-the-art, the other three algorithms are our new proposals. We perform a set of experiments based on three different data sets (English texts, Amino-acid sequences and random texts) and give a precise hierarchy of these algorithms according to their working-space vs. construction-time tradeoff. Given the current trends in model design~\cite{Farach-et-al,Vitter} and disk technology~\cite{dahlin,Ruemmler-Wilkes}, we pay particular attention to differentiating between ``random'' and ``contiguous'' disk accesses, in order to reasonably explain some practical I/O-phenomena which are related to the experimental behavior of these algorithms and that would be otherwise meaningless in the light of other simpler external-memory models. To the best of our knowledge, this is the first study that provides a wide spectrum of possible approaches to the construction of suffix arrays in external memory, and thus it should be helpful to anyone who is interested in building full-text indexes on very large text collections. Finally, we conclude our paper by addressing two other issues. The former concerns the problem of building word-indexes; we show that our results can be successfully applied to this case too, without any loss in efficiency and without compromising the simplicity of programming, so as to achieve a uniform, simple and efficient approach to both indexing models. The latter issue is related to the intriguing and apparently counterintuitive ``contradiction'' between the effective practical performance of the well-known Baeza-Yates-Gonnet-Snider algorithm~\cite{book-info}, verified in our experiments, and its unappealing (i.e., cubic) worst-case behavior. We devise a new external-memory algorithm that follows the basic philosophy underlying that algorithm but in a significantly different manner, thus resulting in a novel approach which combines good worst-case bounds with efficient practical performance.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Crauser, Andreas
%A Ferragina, Paolo
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T A theoretical and experimental study on the construction of suffix arrays in external memory :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6F9B-2
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1999
%P 40 p.
%X The construction of full-text indexes on very large text collections
is nowadays a hot problem. The suffix array [Manber-Myers,~1993] is
one of the most attractive full-text indexing data structures due to
its simplicity, space efficiency and powerful/fast search operations
supported. In this paper we analyze, both theoretically and
experimentally, the I/O-complexity and the working space of six
algorithms for constructing large suffix arrays. Three of them are
the state-of-the-art, the other three algorithms are our new
proposals. We perform a set of experiments based on three different
data sets (English texts, Amino-acid sequences and random texts) and
give a precise hierarchy of these algorithms according to their
working-space vs. construction-time tradeoff. Given the current
trends in model design~\cite{Farach-et-al,Vitter} and disk
technology~\cite{dahlin,Ruemmler-Wilkes}, we pay particular
attention to differentiating between ``random'' and ``contiguous''
disk accesses, in order to reasonably explain some practical
I/O-phenomena which are related to the experimental behavior of
these algorithms and that would be otherwise meaningless in the
light of other simpler external-memory models.
To the best of our knowledge, this is the first study that provides
a wide spectrum of possible approaches to the construction of suffix
arrays in external memory, and thus it should be helpful to anyone
who is interested in building full-text indexes on very large text
collections.
Finally, we conclude our paper by addressing two other issues. The
former concerns the problem of building word-indexes; we show
that our results can be successfully applied to this case too,
without any loss in efficiency and without compromising the
simplicity of programming, so as to achieve a uniform, simple and
efficient approach to both indexing models. The latter issue
is related to the intriguing and apparently counterintuitive
``contradiction'' between the effective practical performance of the
well-known Baeza-Yates-Gonnet-Snider algorithm~\cite{book-info},
verified in our experiments, and its unappealing (i.e., cubic)
worst-case behavior. We devise a new external-memory algorithm that
follows the basic philosophy underlying that algorithm but in a
significantly different manner, thus resulting in a novel approach
which combines good worst-case bounds with efficient practical
performance.
%B Research Report / Max-Planck-Institut für Informatik
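The suffix array studied in the report above can be illustrated with a minimal in-memory sketch (function names are my own; the report's algorithms build the same structure with far fewer I/Os when the text does not fit in RAM):

```python
from bisect import bisect_left, bisect_right

def build_suffix_array(text):
    # Naive in-memory construction: sort all suffix start positions
    # lexicographically. External-memory construction algorithms, the
    # subject of the report, compute this same array for huge texts.
    return sorted(range(len(text)), key=lambda i: text[i:])

def count_occurrences(text, sa, pattern):
    # Every occurrence of `pattern` is a prefix of some suffix, and those
    # suffixes form one contiguous range of the suffix array, so two
    # binary searches suffice.
    prefixes = [text[i:i + len(pattern)] for i in sa]
    return bisect_right(prefixes, pattern) - bisect_left(prefixes, pattern)
```

For example, `build_suffix_array("banana")` yields `[5, 3, 1, 0, 4, 2]`, and `count_occurrences` then finds both occurrences of `"ana"`.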
A framework for evaluating the quality of lossy image compression
J. Haber and H.-P. Seidel
Technical Report, 1999
J. Haber and H.-P. Seidel
Technical Report, 1999
Abstract
In this research report we present a framework for evaluating and comparing the
quality of various lossy image compression techniques based on a
multiresolution decomposition of the image data. In contrast to many other
publications, much attention is paid to the interdependencies of the individual
steps of such compression techniques. In our result section we are able to
show that it is quite worthwhile to fine-tune the parameters of every step to
obtain an optimal interplay among them, which in turn leads to a higher
reconstruction quality.
Export
BibTeX
@techreport{HaberSeidel1999,
TITLE = {A framework for evaluating the quality of lossy image compression},
AUTHOR = {Haber, J{\"o}rg and Seidel, Hans-Peter},
LANGUAGE = {eng},
NUMBER = {MPI-I-1999-4-001},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1999},
DATE = {1999},
ABSTRACT = {In this research report we present a framework for evaluating and comparing the quality of various lossy image compression techniques based on a multiresolution decomposition of the image data. In contrast to many other publications, much attention is paid to the interdependencies of the individual steps of such compression techniques. In our result section we are able to show that it is quite worthwhile to fine-tune the parameters of every step to obtain an optimal interplay among them, which in turn leads to a higher reconstruction quality.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Haber, Jörg
%A Seidel, Hans-Peter
%+ Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T A framework for evaluating the quality of lossy image compression :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6F38-D
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1999
%P 20 p.
%X In this research report we present a framework for evaluating and comparing the
quality of various lossy image compression techniques based on a
multiresolution decomposition of the image data. In contrast to many other
publications, much attention is paid to the interdependencies of the individual
steps of such compression techniques. In our result section we are able to
show that it is quite worthwhile to fine-tune the parameters of every step to
obtain an optimal interplay among them, which in turn leads to a higher
reconstruction quality.
%B Research Report / Max-Planck-Institut für Informatik
Integration of graph iterators into LEDA
M. Nissen
Technical Report, 1999
M. Nissen
Technical Report, 1999
Abstract
This paper explains some implementation details of graph iterators and
data accessors in LEDA.
It shows how to create new iterators for new graph implementations such
that old algorithms can be re-used with new graph implementations as long
as they are based on graph iterators and data accessors.
Export
BibTeX
@techreport{MPI-I-1999-1-006,
TITLE = {Integration of graph iterators into {LEDA}},
AUTHOR = {Nissen, Marco},
LANGUAGE = {eng},
NUMBER = {MPI-I-1999-1-006},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1999},
DATE = {1999},
ABSTRACT = {This paper explains some implementation details of graph iterators and data accessors in LEDA. It shows how to create new iterators for new graph implementations such that old algorithms can be re-used with new graph implementations as long as they are based on graph iterators and data accessors.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Nissen, Marco
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Integration of graph iterators into LEDA :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6F85-1
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1999
%P 39 p.
%X This paper explains some implementation details of graph iterators and
data accessors in LEDA.
It shows how to create new iterators for new graph implementations such
that old algorithms can be re-used with new graph implementations as long
as they are based on graph iterators and data accessors.
%B Research Report / Max-Planck-Institut für Informatik
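The decoupling idea behind graph iterators, that an algorithm is written only against an iteration interface so any graph representation can be plugged in, can be sketched in a language-neutral way (LEDA itself is C++; class and method names below are invented for illustration):

```python
from collections import deque

def bfs_order(graph, start):
    # Written only against the iterator interface: any graph object
    # offering `adjacent(v)` can be used unchanged, which is exactly the
    # re-use property the report describes.
    seen, order, queue = {start}, [], deque([start])
    while queue:
        v = queue.popleft()
        order.append(v)
        for w in graph.adjacent(v):
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return order

class AdjacencyListGraph:
    # One possible graph implementation: adjacency lists.
    def __init__(self, adj):
        self._adj = adj
    def adjacent(self, v):
        return iter(self._adj[v])

class EdgeListGraph:
    # A second, structurally different implementation: a flat edge list.
    def __init__(self, edges):
        self._edges = list(edges)
    def adjacent(self, v):
        return (b for a, b in self._edges if a == v)
```

The same `bfs_order` runs unmodified on both representations, which is the point of separating algorithms from graph data structures via iterators.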
How generic language extensions enable "open-world" design in Java
M. Nissen and K. Weihe
Technical Report, 1999
M. Nissen and K. Weihe
Technical Report, 1999
Abstract
By \emph{open-world design} we mean that collaborating classes are so
loosely coupled that changes in one class
do not propagate to the other classes, and single classes can be isolated
and integrated in other contexts. Of course, this is what maintainability
and reusability are all about.
In the paper, we demonstrate that in Java even an open-world design of mere
attribute access can only be achieved if static
safety is sacrificed, and that this conflict is unresolvable \emph{even if the attribute
type is fixed}. With generic language extensions such as GJ, a generic
extension of Java, it is possible to combine static type safety and open-world design.
As a consequence, genericity should be viewed as a
first-class design feature, because generic language features are preferable
in many situations in which object-orientedness seems appropriate.
We chose Java as the base of the discussion because Java is commonly known and several
advanced features of Java aim at a loose coupling of classes.
In particular, the paper is intended to make a strong point
in favor of generic extensions of Java.
Export
BibTeX
@techreport{MehlhornSchirra,
TITLE = {How generic language extensions enable ``open-world'' design in Java},
AUTHOR = {Nissen, Marco and Weihe, Karsten},
LANGUAGE = {eng},
NUMBER = {MPI-I-1999-1-004},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1999},
DATE = {1999},
ABSTRACT = {By \emph{open-world design} we mean that collaborating classes are so loosely coupled that changes in one class do not propagate to the other classes, and single classes can be isolated and integrated in other contexts. Of course, this is what maintainability and reusability are all about. In the paper, we demonstrate that in Java even an open-world design of mere attribute access can only be achieved if static safety is sacrificed, and that this conflict is unresolvable \emph{even if the attribute type is fixed}. With generic language extensions such as GJ, a generic extension of Java, it is possible to combine static type safety and open-world design. As a consequence, genericity should be viewed as a first-class design feature, because generic language features are preferable in many situations in which object-orientedness seems appropriate. We chose Java as the base of the discussion because Java is commonly known and several advanced features of Java aim at a loose coupling of classes. In particular, the paper is intended to make a strong point in favor of generic extensions of Java.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Nissen, Marco
%A Weihe, Karsten
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
External Organizations
%T How generic language extensions enable "open-world" design in Java :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6F8F-D
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1999
%P 40 p.
%X By \emph{open-world design} we mean that collaborating classes are so
loosely coupled that changes in one class
do not propagate to the other classes, and single classes can be isolated
and integrated in other contexts. Of course, this is what maintainability
and reusability are all about.
In the paper, we demonstrate that in Java even an open-world design of mere
attribute access can only be achieved if static
safety is sacrificed, and that this conflict is unresolvable \emph{even if the attribute
type is fixed}. With generic language extensions such as GJ, a generic
extension of Java, it is possible to combine static type safety and open-world design.
As a consequence, genericity should be viewed as a
first-class design feature, because generic language features are preferable
in many situations in which object-orientedness seems appropriate.
We chose Java as the base of the discussion because Java is commonly known and several
advanced features of Java aim at a loose coupling of classes.
In particular, the paper is intended to make a strong point
in favor of generic extensions of Java.
%B Research Report / Max-Planck-Institut für Informatik
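The combination of static type safety and loose coupling that the paper attributes to generics can be mirrored in Python's `typing` machinery (a sketch only; the protocol and class names are my own, and GJ's Java semantics are only approximated here):

```python
from typing import Generic, Protocol, TypeVar

T = TypeVar("T")

class DataAccessor(Protocol[T]):
    # The open-world interface: an algorithm depends only on how to get
    # and set an attribute of type T, not on where it is stored.
    def get(self, node: int) -> T: ...
    def set(self, node: int, value: T) -> None: ...

class DictAccessor(Generic[T]):
    # One concrete, interchangeable implementation backed by a dict.
    def __init__(self) -> None:
        self._data = {}
    def get(self, node: int) -> T:
        return self._data[node]
    def set(self, node: int, value: T) -> None:
        self._data[node] = value

def copy_attribute(nodes, src: DataAccessor[T], dst: DataAccessor[T]) -> None:
    # Statically typed against the generic interface: a checker verifies
    # that src and dst carry the same attribute type T, yet the function
    # never names a concrete storage class.
    for v in nodes:
        dst.set(v, src.get(v))
```

A static checker such as mypy would reject mixing a `DataAccessor[int]` with a `DataAccessor[str]` here, which is the "static safety without tight coupling" trade-off the paper argues generics resolve.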
Fast concurrent access to parallel disks
P. Sanders, S. Egner and J. Korst
Technical Report, 1999
P. Sanders, S. Egner and J. Korst
Technical Report, 1999
Abstract
High performance applications involving large data sets require the
efficient and flexible use of multiple disks. In an external memory
machine with D parallel, independent disks, only one block can be
accessed on each disk in one I/O step. This restriction leads to a
load balancing problem that is perhaps the main inhibitor for the
efficient adaptation of single-disk external memory algorithms to
multiple disks. We show how this problem can be solved efficiently by
using randomization and redundancy. A buffer of O(D) blocks suffices
to support efficient writing of arbitrary blocks if blocks are
distributed uniformly at random to the disks (e.g., by hashing). If
two randomly allocated copies of each block exist, N arbitrary blocks
can be read within ceiling(N/D)+1 I/O steps with high probability.
The redundancy can be further reduced from 2 to 1+1/r for any integer
r. From the point of view of external memory models, these results
rehabilitate Aggarwal and Vitter's "single-disk multi-head" model that
allows access to D arbitrary blocks in each I/O step. This powerful
model can be emulated on the physically more realistic independent
disk model with small constant overhead factors. Parallel disk
external memory algorithms can therefore be developed in the
multi-head model first. The emulation result can then be applied
directly or further refinements can be added.
Export
BibTeX
@techreport{SandersEgnerKorst99,
TITLE = {Fast concurrent access to parallel disks},
AUTHOR = {Sanders, Peter and Egner, Sebastian and Korst, Jan},
LANGUAGE = {eng},
NUMBER = {MPI-I-1999-1-003},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1999},
DATE = {1999},
ABSTRACT = {High performance applications involving large data sets require the efficient and flexible use of multiple disks. In an external memory machine with D parallel, independent disks, only one block can be accessed on each disk in one I/O step. This restriction leads to a load balancing problem that is perhaps the main inhibitor for the efficient adaptation of single-disk external memory algorithms to multiple disks. We show how this problem can be solved efficiently by using randomization and redundancy. A buffer of O(D) blocks suffices to support efficient writing of arbitrary blocks if blocks are distributed uniformly at random to the disks (e.g., by hashing). If two randomly allocated copies of each block exist, N arbitrary blocks can be read within ceiling(N/D)+1 I/O steps with high probability. The redundancy can be further reduced from 2 to 1+1/r for any integer r. From the point of view of external memory models, these results rehabilitate Aggarwal and Vitter's "single-disk multi-head" model that allows access to D arbitrary blocks in each I/O step. This powerful model can be emulated on the physically more realistic independent disk model with small constant overhead factors. Parallel disk external memory algorithms can therefore be developed in the multi-head model first. The emulation result can then be applied directly or further refinements can be added.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Sanders, Peter
%A Egner, Sebastian
%A Korst, Jan
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
External Organizations
%T Fast concurrent access to parallel disks :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6F94-0
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1999
%P 30 p.
%X High performance applications involving large data sets require the
efficient and flexible use of multiple disks. In an external memory
machine with D parallel, independent disks, only one block can be
accessed on each disk in one I/O step. This restriction leads to a
load balancing problem that is perhaps the main inhibitor for the
efficient adaptation of single-disk external memory algorithms to
multiple disks. We show how this problem can be solved efficiently by
using randomization and redundancy. A buffer of O(D) blocks suffices
to support efficient writing of arbitrary blocks if blocks are
distributed uniformly at random to the disks (e.g., by hashing). If
two randomly allocated copies of each block exist, N arbitrary blocks
can be read within ceiling(N/D)+1 I/O steps with high probability.
The redundancy can be further reduced from 2 to 1+1/r for any integer
r. From the point of view of external memory models, these results
rehabilitate Aggarwal and Vitter's "single-disk multi-head" model that
allows access to D arbitrary blocks in each I/O step. This powerful
model can be emulated on the physically more realistic independent
disk model with small constant overhead factors. Parallel disk
external memory algorithms can therefore be developed in the
multi-head model first. The emulation result can then be applied
directly or further refinements can be added.
%B Research Report / Max-Planck-Institut für Informatik
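The randomized duplicate allocation described above can be illustrated with a small simulation (function names are my own; the report's analysis uses an optimal schedule over the copies, whereas this sketch uses a simpler greedy "less loaded disk" rule):

```python
import random

def read_steps(blocks, num_disks, seed=0):
    # Place two copies of every block on two distinct, randomly chosen
    # disks -- the randomized redundancy of the report.
    rng = random.Random(seed)
    copies = {}
    for b in blocks:
        d1 = rng.randrange(num_disks)
        d2 = (d1 + 1 + rng.randrange(num_disks - 1)) % num_disks
        copies[b] = (d1, d2)
    # Greedy read scheduling: serve each request from whichever of its
    # two disks currently has the shorter queue. Since each disk reads
    # one block per I/O step, the number of steps is the longest queue.
    load = [0] * num_disks
    for b in blocks:
        d1, d2 = copies[b]
        d = d1 if load[d1] <= load[d2] else d2
        load[d] += 1
    return max(load)
```

With N blocks and D disks the report shows ceiling(N/D)+1 steps suffice with high probability under optimal scheduling; the greedy rule above already stays close to that bound in simulation.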
Ultimate parallel list ranking?
J. Sibeyn
Technical Report, 1999
J. Sibeyn
Technical Report, 1999
Abstract
Two improved list-ranking algorithms are presented. The
``peeling-off'' algorithm leads to an optimal PRAM algorithm, but
was designed with application on a real parallel machine in mind.
It is simpler than earlier algorithms, and in a range of problem
sizes where previously several algorithms were required for the
best performance, this single algorithm now suffices. If the problem
size is much larger than the number of available processors, then the
``sparse-ruling-sets'' algorithm is even better. In previous
versions this algorithm had very restricted practical application
because of the large number of communication rounds it
performed. The main weakness of this algorithm is overcome by
adding two new ideas, each of which reduces the number of
communication rounds by a factor of two.
Export
BibTeX
@techreport{Sibeyn1999,
TITLE = {Ultimate parallel list ranking?},
AUTHOR = {Sibeyn, Jop},
LANGUAGE = {eng},
NUMBER = {MPI-I-1999-1-005},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1999},
DATE = {1999},
ABSTRACT = {Two improved list-ranking algorithms are presented. The ``peeling-off'' algorithm leads to an optimal PRAM algorithm, but was designed with application on a real parallel machine in mind. It is simpler than earlier algorithms, and in a range of problem sizes where previously several algorithms were required for the best performance, this single algorithm now suffices. If the problem size is much larger than the number of available processors, then the ``sparse-ruling-sets'' algorithm is even better. In previous versions this algorithm had very restricted practical application because of the large number of communication rounds it performed. The main weakness of this algorithm is overcome by adding two new ideas, each of which reduces the number of communication rounds by a factor of two.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Sibeyn, Jop
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Ultimate parallel list ranking? :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6F8A-8
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1999
%P 20 p.
%X Two improved list-ranking algorithms are presented. The
``peeling-off'' algorithm leads to an optimal PRAM algorithm, but
was designed with application on a real parallel machine in mind.
It is simpler than earlier algorithms, and in a range of problem
sizes where previously several algorithms were required for the
best performance, this single algorithm now suffices. If the problem
size is much larger than the number of available processors, then the
``sparse-ruling-sets'' algorithm is even better. In previous
versions this algorithm had very restricted practical application
because of the large number of communication rounds it
performed. The main weakness of this algorithm is overcome by
adding two new ideas, each of which reduces the number of
communication rounds by a factor of two.
%B Research Report / Max-Planck-Institut für Informatik
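The communication rounds that the report works to reduce are those of classic pointer jumping, the basic parallel list-ranking primitive. A sequential simulation of it (my own sketch; the report's peeling-off and sparse-ruling-sets algorithms are considerably more refined) looks like this:

```python
import math

def list_rank(succ):
    # succ[i] is the successor of node i; the final node points to itself.
    # Pointer jumping: in each round every node adds its successor's rank
    # to its own and jumps its pointer two hops ahead, so ceil(log2 n)
    # rounds suffice. Each round is one communication round on a real
    # parallel machine -- exactly the cost the report's algorithms cut.
    n = len(succ)
    rank = [0 if succ[i] == i else 1 for i in range(n)]
    nxt = list(succ)
    for _ in range(math.ceil(math.log2(n)) if n > 1 else 0):
        rank = [rank[i] + rank[nxt[i]] for i in range(n)]
        nxt = [nxt[nxt[i]] for i in range(n)]
    return rank  # rank[i] = distance from node i to the end of the list
```

For the list 0 -> 1 -> 2 -> 3, `list_rank([1, 2, 3, 3])` returns `[3, 2, 1, 0]` after two rounds.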
Cancellative superposition decides the theory of divisible torsion-free abelian groups
U. Waldmann
Technical Report, 1999
U. Waldmann
Technical Report, 1999
Abstract
In divisible torsion-free abelian groups, the efficiency of the
cancellative superposition calculus can be greatly increased by
combining it with a variable elimination algorithm that transforms
every clause into an equivalent clause without unshielded variables.
We show that the resulting calculus is a decision procedure for
the theory of divisible torsion-free abelian groups.
Export
BibTeX
@techreport{WaldmannMPI-I-1999-2-003,
TITLE = {Cancellative superposition decides the theory of divisible torsion-free abelian groups},
AUTHOR = {Waldmann, Uwe},
LANGUAGE = {eng},
NUMBER = {MPI-I-1999-2-003},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1999},
DATE = {1999},
ABSTRACT = {In divisible torsion-free abelian groups, the efficiency of the cancellative superposition calculus can be greatly increased by combining it with a variable elimination algorithm that transforms every clause into an equivalent clause without unshielded variables. We show that the resulting calculus is a decision procedure for the theory of divisible torsion-free abelian groups.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Waldmann, Uwe
%+ Automation of Logic, MPI for Informatics, Max Planck Society
%T Cancellative superposition decides the theory of divisible torsion-free abelian groups :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-6F75-5
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1999
%P 23 p.
%X In divisible torsion-free abelian groups, the efficiency of the
cancellative superposition calculus can be greatly increased by
combining it with a variable elimination algorithm that transforms
every clause into an equivalent clause without unshielded variables.
We show that the resulting calculus is a decision procedure for
the theory of divisible torsion-free abelian groups.
%B Research Report / Max-Planck-Institut für Informatik
1998
Scheduling with unexpected machine breakdowns
S. Albers and G. Schmidt
Technical Report, 1998
S. Albers and G. Schmidt
Technical Report, 1998
Abstract
We investigate an online version of the scheduling problem
$P, NC|pmtn|C_{\max}$, where a set of jobs has to be scheduled
on a number of identical machines so as to minimize the makespan.
The job processing times are known in advance and preemption of
jobs is allowed. Machines are {\it non-continuously\/} available,
i.e., they can break down and recover at arbitrary time instances {\it not
known in advance}. New machines may be added as well. Thus machine
availabilities change online.
We first show that no online algorithm can construct optimal schedules.
We also show that no online algorithm can achieve a constant competitive
ratio if there may be time intervals where no machine is available.
Then we present an online algorithm that constructs schedules with an
optimal makespan of $C_{\max}^{OPT}$ if a {\it lookahead\/} of one is
given, i.e., the algorithm always knows the next point in time when
the set of available machines changes. Finally we give an online algorithm
without lookahead that constructs schedules with a nearly optimal makespan
of $C_{\max}^{OPT} + \epsilon$, for any $\epsilon >0$, if at any
time at least one machine is available. Our results
demonstrate that not knowing machine availabilities in advance is of
little harm.
Export
BibTeX
@techreport{AlbersSchmidt98,
TITLE = {Scheduling with unexpected machine breakdowns},
AUTHOR = {Albers, Susanne and Schmidt, G{\"u}nter},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1998-1-021},
NUMBER = {MPI-I-1998-1-021},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1998},
DATE = {1998},
ABSTRACT = {We investigate an online version of the scheduling problem $P, NC|pmtn|C_{\max}$, where a set of jobs has to be scheduled on a number of identical machines so as to minimize the makespan. The job processing times are known in advance and preemption of jobs is allowed. Machines are {\it non-continuously\/} available, i.e., they can break down and recover at arbitrary time instances {\it not known in advance}. New machines may be added as well. Thus machine availabilities change online. We first show that no online algorithm can construct optimal schedules. We also show that no online algorithm can achieve a constant competitive ratio if there may be time intervals where no machine is available. Then we present an online algorithm that constructs schedules with an optimal makespan of $C_{\max}^{OPT}$ if a {\it lookahead\/} of one is given, i.e., the algorithm always knows the next point in time when the set of available machines changes. Finally we give an online algorithm without lookahead that constructs schedules with a nearly optimal makespan of $C_{\max}^{OPT} + \epsilon$, for any $\epsilon >0$, if at any time at least one machine is available. Our results demonstrate that not knowing machine availabilities in advance is of little harm.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Albers, Susanne
%A Schmidt, Günter
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Scheduling with unexpected machine breakdowns :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-7B78-2
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1998-1-021
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1998
%P 15 p.
%X We investigate an online version of the scheduling problem
$P, NC|pmtn|C_{\max}$, where a set of jobs has to be scheduled
on a number of identical machines so as to minimize the makespan.
The job processing times are known in advance and preemption of
jobs is allowed. Machines are {\it non-continuously\/} available,
i.e., they can break down and recover at arbitrary time instances {\it not
known in advance}. New machines may be added as well. Thus machine
availabilities change online.
We first show that no online algorithm can construct optimal schedules.
We also show that no online algorithm can achieve a constant competitive
ratio if there may be time intervals where no machine is available.
Then we present an online algorithm that constructs schedules with an
optimal makespan of $C_{\max}^{OPT}$ if a {\it lookahead\/} of one is
given, i.e., the algorithm always knows the next point in time when
the set of available machines changes. Finally we give an online algorithm
without lookahead that constructs schedules with a nearly optimal makespan
of $C_{\max}^{OPT} + \epsilon$, for any $\epsilon >0$, if at any
time at least one machine is available. Our results
demonstrate that not knowing machine availabilities in advance is of
little harm.
%B Research Report / Max-Planck-Institut für Informatik
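As a point of reference for the makespans discussed above: when all machines are continuously available, the optimal preemptive makespan has a closed form (McNaughton's wrap-around rule). The report asks how close an online algorithm can come to this kind of optimum when availability changes unpredictably; the function name below is my own.

```python
def preemptive_makespan(p, m):
    # McNaughton's wrap-around rule: with preemption on m identical,
    # continuously available machines, the optimal makespan is the larger
    # of the longest single job and the average load per machine.
    return max(max(p), sum(p) / m)
```

For example, three jobs of length 3 on two machines give max(3, 9/2) = 4.5.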
Comparator networks for binary heap construction
G. S. Brodal and M. C. Pinotti
Technical Report, 1998
G. S. Brodal and M. C. Pinotti
Technical Report, 1998
Abstract
Comparator networks for constructing binary heaps of size $n$ are
presented which have size $O(n\log\log n)$ and depth $O(\log n)$. A
lower bound of $n\log\log n-O(n)$ for the size of any heap
construction network is also proven, implying that the networks
presented are within a constant factor of optimal. We give a tight
relation between the leading constants in the size of selection
networks and in the size of heap construction networks.
Export
BibTeX
@techreport{BrodalPinotti98,
TITLE = {Comparator networks for binary heap construction},
AUTHOR = {Brodal, Gerth St{\o}lting and Pinotti, M. Cristina},
LANGUAGE = {eng},
NUMBER = {MPI-I-1998-1-002},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1998},
DATE = {1998},
ABSTRACT = {Comparator networks for constructing binary heaps of size $n$ are presented which have size $O(n\log\log n)$ and depth $O(\log n)$. A lower bound of $n\log\log n-O(n)$ for the size of any heap construction network is also proven, implying that the networks presented are within a constant factor of optimal. We give a tight relation between the leading constants in the size of selection networks and in the size of heap construction networks.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Brodal, Gerth Stølting
%A Pinotti, M. Cristina
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
External Organizations
%T Comparator networks for binary heap construction :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-9A0B-B
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1998
%P 11 p.
%X Comparator networks for constructing binary heaps of size $n$ are
presented which have size $O(n\log\log n)$ and depth $O(\log n)$. A
lower bound of $n\log\log n-O(n)$ for the size of any heap
construction network is also proven, implying that the networks
presented are within a constant factor of optimal. We give a tight
relation between the leading constants in the size of selection
networks and in the size of heap construction networks.
%B Research Report / Max-Planck-Institut für Informatik
Applications of the generic programming paradigm in the design of CGAL
H. Brönniman, L. Kettner, S. Schirra and R. Veltkamp
Technical Report, 1998
H. Brönniman, L. Kettner, S. Schirra and R. Veltkamp
Technical Report, 1998
Abstract
We report on the use of the generic programming paradigm in the computational
geometry algorithms library CGAL. The parameterization of
the geometric algorithms in CGAL enhances flexibility and adaptability and
opens an easy way for abolishing precision and robustness problems by exact but
nevertheless efficient computation. Furthermore we discuss circulators, which
are an extension of the iterator concept to circular structures. Such structures
arise frequently in geometric computing.
Export
BibTeX
@techreport{BronnimanKettnerSchirraVeltkamp98,
TITLE = {Applications of the generic programming paradigm in the design of {CGAL}},
AUTHOR = {Br{\"o}nniman, Herv{\`e} and Kettner, Lutz and Schirra, Stefan and Veltkamp, Remco},
LANGUAGE = {eng},
NUMBER = {MPI-I-1998-1-030},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1998},
DATE = {1998},
ABSTRACT = {We report on the use of the generic programming paradigm in the computational geometry algorithms library CGAL. The parameterization of the geometric algorithms in CGAL enhances flexibility and adaptability and opens an easy way for abolishing precision and robustness problems by exact but nevertheless efficient computation. Furthermore we discuss circulators, which are an extension of the iterator concept to circular structures. Such structures arise frequently in geometric computing.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Brönniman, Hervé
%A Kettner, Lutz
%A Schirra, Stefan
%A Veltkamp, Remco
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Applications of the generic programming paradigm in the design of CGAL :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-7B5D-F
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1998
%P 12 p.
%X We report on the use of the generic programming paradigm in the computational
geometry algorithms library CGAL. The parameterization of
the geometric algorithms in CGAL enhances flexibility and adaptability and
opens an easy way for abolishing precision and robustness problems by exact but
nevertheless efficient computation. Furthermore we discuss circulators, which
are an extension of the iterator concept to circular structures. Such structures
arise frequently in geometric computing.
%B Research Report / Max-Planck-Institut für Informatik
$q$-gram based database searching using a suffix array (QUASAR)
S. Burkhardt, A. Crauser, P. Ferragina, H.-P. Lenhof, E. Rivals and M. Vingron
Technical Report, 1998
S. Burkhardt, A. Crauser, P. Ferragina, H.-P. Lenhof, E. Rivals and M. Vingron
Technical Report, 1998
Abstract
With the increasing amount of DNA sequence information deposited in
our databases, searching for similarity to a query sequence
has become a basic operation in molecular biology.
But even today's fast algorithms reach their limits when
applied to all-versus-all comparisons of large databases.
Here we present a new database searching
algorithm dubbed QUASAR (Q-gram Alignment based on Suffix ARrays)
which was designed to quickly detect sequences with strong
similarity to the query in a context where many searches are
conducted on one database. Our algorithm applies a modification of
$q$-tuple filtering implemented on top of a suffix array. Two
versions were developed, one for a RAM-resident suffix array and one
for access to the suffix array on disk. We compared our implementation
with BLAST and found that our approach is an order of magnitude faster.
It is, however, restricted to the search for strongly similar DNA
sequences as is typically required, e.g., in the context of clustering
expressed sequence tags (ESTs).
Export
BibTeX
@techreport{BurkhardtCrauserFerraginaLenhofRivalsVingron98,
TITLE = {\$q\$-gram based database searching using a suffix array ({QUASAR})},
AUTHOR = {Burkhardt, Stefan and Crauser, Andreas and Ferragina, Paolo and Lenhof, Hans-Peter and Rivals, Eric and Vingron, Martin},
LANGUAGE = {eng},
NUMBER = {MPI-I-1998-1-024},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1998},
DATE = {1998},
ABSTRACT = {With the increasing amount of DNA sequence information deposited in our databases, searching for similarity to a query sequence has become a basic operation in molecular biology. But even today's fast algorithms reach their limits when applied to all-versus-all comparisons of large databases. Here we present a new database searching algorithm dubbed QUASAR (Q-gram Alignment based on Suffix ARrays) which was designed to quickly detect sequences with strong similarity to the query in a context where many searches are conducted on one database. Our algorithm applies a modification of $q$-tuple filtering implemented on top of a suffix array. Two versions were developed, one for a RAM-resident suffix array and one for access to the suffix array on disk. We compared our implementation with BLAST and found that our approach is an order of magnitude faster. It is, however, restricted to the search for strongly similar DNA sequences as is typically required, e.g., in the context of clustering expressed sequence tags (ESTs).},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Burkhardt, Stefan
%A Crauser, Andreas
%A Ferragina, Paolo
%A Lenhof, Hans-Peter
%A Rivals, Eric
%A Vingron, Martin
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Gene regulation (Martin Vingron), Dept. of Computational Molecular Biology (Head: Martin Vingron), Max Planck Institute for Molecular Genetics, Max Planck Society
%T $q$-gram based database searching using a suffix array (QUASAR) :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-7B6F-7
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1998
%P 11 p.
%X With the increasing amount of DNA sequence information deposited in
our databases, searching for similarity to a query sequence
has become a basic operation in molecular biology.
But even today's fast algorithms reach their limits when
applied to all-versus-all comparisons of large databases.
Here we present a new database searching
algorithm dubbed QUASAR (Q-gram Alignment based on Suffix ARrays)
which was designed to quickly detect sequences with strong
similarity to the query in a context where many searches are
conducted on one database. Our algorithm applies a modification of
$q$-tuple filtering implemented on top of a suffix array. Two
versions were developed, one for a RAM resident suffix array and one
for access to the suffix array on disk. We compared our implementation
with BLAST and found that our approach is an order of magnitude faster.
It is, however, restricted to the search for strongly similar DNA
sequences as is typically required, e.g., in the context of clustering
expressed sequence tags (ESTs).
%B Research Report / Max-Planck-Institut für Informatik
Rational points on circles
C. Burnikel
Technical Report, 1998a
C. Burnikel
Technical Report, 1998a
Abstract
We solve the following problem.
For a given rational circle $C$ passing through the rational points
$p$, $q$, $r$ and a given angle $\alpha$, compute a rational point
on $C$ whose angle at $C$ differs from $\alpha$ by a value of
at most $\epsilon$.
In addition, try to minimize the bit length of the computed point.
This document contains the C++ program |rational_points_on_circle.c|.
We use the literate programming tool |noweb| by Norman Ramsey.
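One standard way to attack this problem is the tangent half-angle parametrization, which produces exactly rational rotations of a rational point about a rational center; the sketch below is my illustration of that idea under assumed interfaces (`rotate_about`, its arguments, and the `limit_denominator` bound are all mine), not necessarily the method of the report's program.

```python
from fractions import Fraction
import math

def rational_rotation(t):
    """Rational point on the unit circle from rational parameter t
    (tangent half-angle): cos = (1-t^2)/(1+t^2), sin = 2t/(1+t^2)."""
    d = 1 + t * t
    return (1 - t * t) / d, 2 * t / d

def rotate_about(p, c, alpha):
    """Rotate rational point p about rational center c by an angle close
    to alpha (assumed in (-pi, pi)), landing exactly on the rational
    circle through p. limit_denominator keeps the bit length of the
    resulting coordinates small, echoing the report's goal."""
    t = Fraction(math.tan(alpha / 2)).limit_denominator(10**6)
    cos_a, sin_a = rational_rotation(t)
    dx, dy = p[0] - c[0], p[1] - c[1]
    return (c[0] + cos_a * dx - sin_a * dy,
            c[1] + sin_a * dx + cos_a * dy)
```

The result is exactly on the circle (the rotation matrix is exactly orthogonal over the rationals), while the angle error comes only from approximating $\tan(\alpha/2)$ by a small-denominator rational.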
Export
BibTeX
@techreport{Burnikel98-1-023,
TITLE = {Rational points on circles},
AUTHOR = {Burnikel, Christoph},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1998-1-023},
NUMBER = {MPI-I-1998-1-023},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1998},
DATE = {1998},
ABSTRACT = {We solve the following problem. For a given rational circle $C$ passing through the rational points $p$, $q$, $r$ and a given angle $\alpha$, compute a rational point on $C$ whose angle at $C$ differs from $\alpha$ by a value of at most $\epsilon$. In addition, try to minimize the bit length of the computed point. This document contains the C++ program |rational_points_on_circle.c|. We use the literate programming tool |noweb| by Norman Ramsey.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Burnikel, Christoph
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Rational points on circles :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-7B72-E
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1998-1-023
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1998
%P 14 p.
%X We solve the following problem.
For a given rational circle $C$ passing through the rational points
$p$, $q$, $r$ and a given angle $\alpha$, compute a rational point
on $C$ whose angle at $C$ differs from $\alpha$ by a value of
at most $\epsilon$.
In addition, try to minimize the bit length of the computed point.
This document contains the C++ program |rational_points_on_circle.c|.
We use the literate programming tool |noweb| by Norman Ramsey.
%B Research Report / Max-Planck-Institut für Informatik
Delaunay graphs by divide and conquer
C. Burnikel
Technical Report, 1998b
C. Burnikel
Technical Report, 1998b
Abstract
This document describes the LEDA program dc_delaunay.c
for computing Delaunay graphs by the divide-and-conquer method.
The program can be used either with exact primitives or with
non-exact primitives. It handles all cases of degeneracy
and is relatively robust against the use of imprecise arithmetic.
We use the literate programming tool noweb by Norman Ramsey.
Export
BibTeX
@techreport{Burnikel98-1-027,
TITLE = {Delaunay graphs by divide and conquer},
AUTHOR = {Burnikel, Christoph},
LANGUAGE = {eng},
NUMBER = {MPI-I-1998-1-027},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1998},
DATE = {1998},
ABSTRACT = {This document describes the LEDA program dc_delaunay.c for computing Delaunay graphs by the divide-and-conquer method. The program can be used either with exact primitives or with non-exact primitives. It handles all cases of degeneracy and is relatively robust against the use of imprecise arithmetic. We use the literate programming tool noweb by Norman Ramsey.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Burnikel, Christoph
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Delaunay graphs by divide and conquer :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-7B60-5
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1998
%P 24 p.
%X This document describes the LEDA program dc_delaunay.c
for computing Delaunay graphs by the divide-and-conquer method.
The program can be used either with exact primitives or with
non-exact primitives. It handles all cases of degeneracy
and is relatively robust against the use of imprecise arithmetic.
We use the literate programming tool noweb by Norman Ramsey.
%B Research Report / Max-Planck-Institut für Informatik
Fast recursive division
C. Burnikel and J. Ziegler
Technical Report, 1998
C. Burnikel and J. Ziegler
Technical Report, 1998
Abstract
We present a new recursive method for division with remainder of integers. Its
running time is $2K(n)+O(n \log n)$ for division of a $2n$-digit number by an
$n$-digit number, where $K(n)$ is the Karatsuba multiplication time. It pays off
in practice for numbers with 860 bits or more. We then show how to lower this
bound to $3/2 K(n)+O(n\log n)$ if we are not interested in the remainder.
As an application of division with remainder we show how to speed up modular
multiplication. We also give practical results of an implementation that allow
us to say that we have the fastest integer division on a SPARC architecture
compared to all other integer packages we know of.
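The recursive structure can be sketched as follows. This shows only the quotient-halving skeleton in its simplest form (a hypothetical helper of my own naming); Burnikel and Ziegler additionally route the work through a 3n-by-2n step built on Karatsuba multiplication to reach the $2K(n)+O(n \log n)$ bound.

```python
def divmod_rec(a, b, n):
    """Recursive division skeleton: computes divmod(a, b) assuming
    0 <= a < b << n, i.e. the quotient fits in n bits.  Each level
    computes the high half of the quotient first, then divides the
    remainder (shifted up, plus the low digits of a) for the low half."""
    if n <= 64:                      # small quotients: native division
        return divmod(a, b)
    k = n >> 1                       # split the quotient in half
    q1, r1 = divmod_rec(a >> k, b, n - k)        # high half of quotient
    low = a & ((1 << k) - 1)                     # low k bits of a
    q2, r2 = divmod_rec((r1 << k) | low, b, k)   # low half of quotient
    return (q1 << k) | q2, r2
```

Correctness follows from $a = (q_1 b + r_1)2^k + \text{low} = (q_1 2^k + q_2)\,b + r_2$ with $r_1 < b$ guaranteeing the second call's precondition.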
Export
BibTeX
@techreport{BurnikelZiegler98,
TITLE = {Fast recursive division},
AUTHOR = {Burnikel, Christoph and Ziegler, Joachim},
LANGUAGE = {eng},
NUMBER = {MPI-I-1998-1-022},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1998},
DATE = {1998},
ABSTRACT = {We present a new recursive method for division with remainder of integers. Its running time is $2K(n)+O(n \log n)$ for division of a $2n$-digit number by an $n$-digit number, where $K(n)$ is the Karatsuba multiplication time. It pays off in practice for numbers with 860 bits or more. We then show how to lower this bound to $3/2 K(n)+O(n\log n)$ if we are not interested in the remainder. As an application of division with remainder we show how to speed up modular multiplication. We also give practical results of an implementation that allow us to say that we have the fastest integer division on a SPARC architecture compared to all other integer packages we know of.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Burnikel, Christoph
%A Ziegler, Joachim
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Fast recursive division :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-7B75-8
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1998
%P 29 p.
%X We present a new recursive method for division with remainder of integers. Its
running time is $2K(n)+O(n \log n)$ for division of a $2n$-digit number by an
$n$-digit number, where $K(n)$ is the Karatsuba multiplication time. It pays off
in practice for numbers with 860 bits or more. We then show how to lower this
bound to $3/2 K(n)+O(n\log n)$ if we are not interested in the remainder.
As an application of division with remainder we show how to speed up modular
multiplication. We also give practical results of an implementation that allow
us to say that we have the fastest integer division on a SPARC architecture
compared to all other integer packages we know of.
%B Research Report / Max-Planck-Institut für Informatik
Randomized external-memory algorithms for some geometric problems
A. Crauser, P. Ferragina, K. Mehlhorn, U. Meyer and E. A. Ramos
Technical Report, 1998
A. Crauser, P. Ferragina, K. Mehlhorn, U. Meyer and E. A. Ramos
Technical Report, 1998
Abstract
We show that the well-known random incremental construction of
Clarkson and Shor can be adapted via {\it gradations}
to provide efficient external-memory algorithms for some geometric
problems. In particular, as the main result, we obtain an optimal
randomized algorithm for the problem of computing the trapezoidal
decomposition determined by a set of $N$ line segments in the plane
with $K$ pairwise intersections, that requires $\Theta(\frac{N}{B}
\log_{M/B} \frac{N}{B} +\frac{K}{B})$ expected disk accesses, where
$M$ is the size of the available internal memory and $B$ is the size
of the block transfer. The approach is sufficiently general to
obtain algorithms also for the problems of 3-d half-space
intersections, 2-d and 3-d convex hulls, 2-d abstract Voronoi
diagrams and batched planar point location, which require an optimal
expected number of disk accesses and are simpler than the ones
previously known. The results extend to an external-memory model
with multiple disks. Additionally, under reasonable conditions on
the parameters $N,M,B$, these results can be notably simplified,
yielding practical algorithms which still achieve optimal
expected bounds.
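In the standard external-memory notation of the Aggarwal–Vitter model (supplied here for context, not quoted from the abstract), the stated cost is exactly the sorting bound plus an output-reporting term:

```latex
\mathrm{sort}(N) \;=\; \Theta\!\Bigl(\tfrac{N}{B}\,\log_{M/B}\tfrac{N}{B}\Bigr),
\qquad
T_{\text{trap}}(N,K) \;=\; \Theta\!\Bigl(\mathrm{sort}(N) \;+\; \tfrac{K}{B}\Bigr).
```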
Export
BibTeX
@techreport{CrauserFerraginaMehlhornMeyerRamos98,
TITLE = {Randomized external-memory algorithms for some geometric problems},
AUTHOR = {Crauser, Andreas and Ferragina, Paolo and Mehlhorn, Kurt and Meyer, Ulrich and Ramos, Edgar A.},
LANGUAGE = {eng},
NUMBER = {MPI-I-1998-1-017},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1998},
DATE = {1998},
ABSTRACT = {We show that the well-known random incremental construction of Clarkson and Shor can be adapted via {\it gradations} to provide efficient external-memory algorithms for some geometric problems. In particular, as the main result, we obtain an optimal randomized algorithm for the problem of computing the trapezoidal decomposition determined by a set of $N$ line segments in the plane with $K$ pairwise intersections, that requires $\Theta(\frac{N}{B} \log_{M/B} \frac{N}{B} +\frac{K}{B})$ expected disk accesses, where $M$ is the size of the available internal memory and $B$ is the size of the block transfer. The approach is sufficiently general to obtain algorithms also for the problems of 3-d half-space intersections, 2-d and 3-d convex hulls, 2-d abstract Voronoi diagrams and batched planar point location, which require an optimal expected number of disk accesses and are simpler than the ones previously known. The results extend to an external-memory model with multiple disks. Additionally, under reasonable conditions on the parameters $N,M,B$, these results can be notably simplified originating practical algorithms which still achieve optimal expected bounds.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Crauser, Andreas
%A Ferragina, Paolo
%A Mehlhorn, Kurt
%A Meyer, Ulrich
%A Ramos, Edgar A.
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Randomized external-memory algorithms for some geometric problems :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-7BBB-C
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1998
%P 27 p.
%X We show that the well-known random incremental construction of
Clarkson and Shor can be adapted via {\it gradations}
to provide efficient external-memory algorithms for some geometric
problems. In particular, as the main result, we obtain an optimal
randomized algorithm for the problem of computing the trapezoidal
decomposition determined by a set of $N$ line segments in the plane
with $K$ pairwise intersections, that requires $\Theta(\frac{N}{B}
\log_{M/B} \frac{N}{B} +\frac{K}{B})$ expected disk accesses, where
$M$ is the size of the available internal memory and $B$ is the size
of the block transfer. The approach is sufficiently general to
obtain algorithms also for the problems of 3-d half-space
intersections, 2-d and 3-d convex hulls, 2-d abstract Voronoi
diagrams and batched planar point location, which require an optimal
expected number of disk accesses and are simpler than the ones
previously known. The results extend to an external-memory model
with multiple disks. Additionally, under reasonable conditions on
the parameters $N,M,B$, these results can be notably simplified,
yielding practical algorithms which still achieve optimal
expected bounds.
%B Research Report / Max-Planck-Institut für Informatik
On the performance of LEDA-SM
A. Crauser, K. Mehlhorn, E. Althaus, K. Brengel, T. Buchheit, J. Keller, H. Krone, O. Lambert, R. Schulte, S. Thiel, M. Westphal and R. Wirth
Technical Report, 1998
A. Crauser, K. Mehlhorn, E. Althaus, K. Brengel, T. Buchheit, J. Keller, H. Krone, O. Lambert, R. Schulte, S. Thiel, M. Westphal and R. Wirth
Technical Report, 1998
Abstract
We report on the performance of a library
prototype for external memory algorithms and data structures called
LEDA-SM, where SM is an acronym for secondary memory. Our library
is based on LEDA and intended to complement it for large data. We
present performance results of our external memory library prototype
and compare these results with corresponding results of LEDA's
in-core algorithms in virtual memory. The results show that even if
only a small main memory is used for the external memory algorithms,
they always outperform their in-core counterparts. Furthermore, we
compare different implementations of external memory data structures
and algorithms.
Export
BibTeX
@techreport{CrauserMehlhornAlthausetal98,
TITLE = {On the performance of {LEDA}-{SM}},
AUTHOR = {Crauser, Andreas and Mehlhorn, Kurt and Althaus, Ernst and Brengel, Klaus and Buchheit, Thomas and Keller, J{\"o}rg and Krone, Henning and Lambert, Oliver and Schulte, Ralph and Thiel, Sven and Westphal, Mark and Wirth, Robert},
LANGUAGE = {eng},
NUMBER = {MPI-I-1998-1-028},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1998},
DATE = {1998},
ABSTRACT = {We report on the performance of a library prototype for external memory algorithms and data structures called LEDA-SM, where SM is an acronym for secondary memory. Our library is based on LEDA and intended to complement it for large data. We present performance results of our external memory library prototype and compare these results with corresponding results of LEDA's in-core algorithms in virtual memory. The results show that even if only a small main memory is used for the external memory algorithms, they always outperform their in-core counterparts. Furthermore, we compare different implementations of external memory data structures and algorithms.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Crauser, Andreas
%A Mehlhorn, Kurt
%A Althaus, Ernst
%A Brengel, Klaus
%A Buchheit, Thomas
%A Keller, Jörg
%A Krone, Henning
%A Lambert, Oliver
%A Schulte, Ralph
%A Thiel, Sven
%A Westphal, Mark
%A Wirth, Robert
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T On the performance of LEDA-SM :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-7B63-0
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1998
%P 26 p.
%X We report on the performance of a library
prototype for external memory algorithms and data structures called
LEDA-SM, where SM is an acronym for secondary memory. Our library
is based on LEDA and intended to complement it for large data. We
present performance results of our external memory library prototype
and compare these results with corresponding results of LEDA's
in-core algorithms in virtual memory. The results show that even if
only a small main memory is used for the external memory algorithms,
they always outperform their in-core counterparts. Furthermore, we
compare different implementations of external memory data structures
and algorithms.
%B Research Report / Max-Planck-Institut für Informatik
On positive influence and negative dependence
D. Dubhashi and D. Ranjan
Technical Report, 1998
D. Dubhashi and D. Ranjan
Technical Report, 1998
Abstract
We study two notions of negative influence, namely negative regression and
negative association. We show that if symmetric binary random
variables are negatively regressed, then they are necessarily negatively
associated. The proof uses a lemma that is of independent interest and
shows that every binary symmetric distribution has a variable of
``positive influence''. We also show that in general the notion of negative
regression is different from that of negative association.
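For context, the two notions compared above are usually defined as follows (standard definitions, not quoted from the report): for all disjoint index sets $I, J$ and all non-decreasing functions $f, g$,

```latex
\text{(NA)}\quad
\mathbb{E}\bigl[f(X_i : i \in I)\, g(X_j : j \in J)\bigr]
\;\le\;
\mathbb{E}\bigl[f(X_i : i \in I)\bigr]\,
\mathbb{E}\bigl[g(X_j : j \in J)\bigr],
\\[4pt]
\text{(NR)}\quad
t \;\mapsto\; \mathbb{E}\bigl[f(X_i : i \in I) \,\big|\, X_j = t_j,\ j \in J\bigr]
\ \text{ is non-increasing in } t.
```

The report's main result is that (NR) implies (NA) for symmetric binary variables, while in general the two notions differ.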
Export
BibTeX
@techreport{DubhashiRanjan98,
TITLE = {On positive influence and negative dependence},
AUTHOR = {Dubhashi, Devdatt and Ranjan, Desh},
LANGUAGE = {eng},
NUMBER = {MPI-I-1998-1-018},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1998},
DATE = {1998},
ABSTRACT = {We study two notions of negative influence namely negative regression and negative association. We show that if a set of symmetric binary random variables are negatively regressed then they are necessarily negatively associated. The proof uses a lemma that is of independent interest and shows that every binary symmetric distribution has a variable of ``positive influence''. We also show that in general the notion of negative regression is different from that of negative association.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Dubhashi, Devdatt
%A Ranjan, Desh
%+ External Organizations
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T On positive influence and negative dependence :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-7BAC-E
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1998
%P 12 p.
%X We study two notions of negative influence, namely negative regression and
negative association. We show that if symmetric binary random
variables are negatively regressed, then they are necessarily negatively
associated. The proof uses a lemma that is of independent interest and
shows that every binary symmetric distribution has a variable of
``positive influence''. We also show that in general the notion of negative
regression is different from that of negative association.
%B Research Report / Max-Planck-Institut für Informatik
On the Design of CGAL, the Computational Geometry Algorithms Library
A. Fabri, G.-J. Giezeman, L. Kettner, S. Schirra and S. Schönherr
Technical Report, 1998
A. Fabri, G.-J. Giezeman, L. Kettner, S. Schirra and S. Schönherr
Technical Report, 1998
Abstract
CGAL is a Computational Geometry Algorithms Library written
in C++, which is developed in an ESPRIT LTR project. The goal
is to make the large body of geometric algorithms developed in the field of
computational geometry available for industrial application. In this chapter
we discuss the major design goals for CGAL, which are correctness,
flexibility, ease-of-use, efficiency, and robustness, and present our approach
to reach these goals. Templates and the relatively new generic programming
play a central role in the architecture of CGAL. We give a short
introduction to generic programming in C++, compare it to the
object-oriented programming paradigm, and present examples where
both paradigms are used effectively in CGAL.
Moreover, we give an overview of the current structure of the library
and consider software engineering aspects of the CGAL project.
Export
BibTeX
@techreport{FabriGiezemanKettnerSchirraSch'onherr,
TITLE = {On the Design of {CGAL}, the Computational Geometry Algorithms Library},
AUTHOR = {Fabri, Andreas and Giezeman, Geert-Jan and Kettner, Lutz and Schirra, Stefan and Sch{\"o}nherr, Sven},
LANGUAGE = {eng},
NUMBER = {MPI-I-1998-1-007},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1998},
DATE = {1998},
ABSTRACT = {CGAL is a Computational Geometry Algorithms Library written in C++, which is developed in an ESPRIT LTR project. The goal is to make the large body of geometric algorithms developed in the field of computational geometry available for industrial application. In this chapter we discuss the major design goals for CGAL, which are correctness, flexibility, ease-of-use, efficiency, and robustness, and present our approach to reach these goals. Templates and the relatively new generic programming play a central role in the architecture of CGAL. We give a short introduction to generic programming in C++, compare it to the object-oriented programming paradigm, and present examples where both paradigms are used effectively in CGAL. Moreover, we give an overview on the current structure of the library and consider software engineering aspects in the CGAL-project.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Fabri, Andreas
%A Giezeman, Geert-Jan
%A Kettner, Lutz
%A Schirra, Stefan
%A Schönherr, Sven
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
External Organizations
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
External Organizations
%T On the Design of CGAL, the Computational Geometry Algorithms Library :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-7BDF-D
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1998
%P 31 p.
%X CGAL is a Computational Geometry Algorithms Library written
in C++, which is developed in an ESPRIT LTR project. The goal
is to make the large body of geometric algorithms developed in the field of
computational geometry available for industrial application. In this chapter
we discuss the major design goals for CGAL, which are correctness,
flexibility, ease-of-use, efficiency, and robustness, and present our approach
to reach these goals. Templates and the relatively new generic programming
play a central role in the architecture of CGAL. We give a short
introduction to generic programming in C++, compare it to the
object-oriented programming paradigm, and present examples where
both paradigms are used effectively in CGAL.
Moreover, we give an overview of the current structure of the library
and consider software engineering aspects of the CGAL project.
%B Research Report / Max-Planck-Institut für Informatik
Robustness analysis in combinatorial optimization
G. N. Frederickson and R. Solis-Oba
Technical Report, 1998
G. N. Frederickson and R. Solis-Oba
Technical Report, 1998
Abstract
The robustness function of an optimization problem measures the maximum
change in the value of its optimal solution that can be produced by changes of
a given total magnitude on the values of the elements in its input. The
problem of computing the robustness function of matroid optimization problems is
studied under two cost models: the discrete model, which allows the
removal of elements from the input, and the continuous model, which
permits finite changes on the values of the elements in the input.
For the discrete model, an $O(\log k)$-approximation algorithm is presented
for computing the robustness function of minimum spanning trees, where $k$ is
the number of edges to be removed. The algorithm uses as a key subroutine a
2-approximation algorithm for the problem of dividing a graph into the maximum
number of components by removing $k$ edges from it.
For the continuous model, a number of results are presented. First, a general
algorithm is given for computing the robustness function of any matroid. The
algorithm runs in strongly polynomial time on matroids with a strongly
polynomial time independence test. Faster algorithms are also presented for
some particular classes of matroids: (1) an $O(n^3m^2 \log (n^2/m))$-time
algorithm for graphic matroids, where $m$ is the number of elements in the
matroid and $n$ is its rank, (2) an $O(mn(m+n^2)|E|\log(m^2/|E|+2))$-time
algorithm for transversal matroids, where $|E|$ is a parameter of the matroid,
(3) an $O(m^2n^2)$-time algorithm for scheduling matroids, and (4) an
$O(m \log m)$-time algorithm for partition matroids. For this last class of
matroids an optimal algorithm is also presented for evaluating the robustness
function at a single point.
Export
BibTeX
@techreport{FredericksonSolis-Oba98,
TITLE = {Robustness analysis in combinatorial optimization},
AUTHOR = {Frederickson, Greg N. and Solis-Oba, Roberto},
LANGUAGE = {eng},
NUMBER = {MPI-I-1998-1-011},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1998},
DATE = {1998},
ABSTRACT = {The robustness function of an optimization problem measures the maximum change in the value of its optimal solution that can be produced by changes of a given total magnitude on the values of the elements in its input. The problem of computing the robustness function of matroid optimization problems is studied under two cost models: the discrete model, which allows the removal of elements from the input, and the continuous model, which permits finite changes on the values of the elements in the input. For the discrete model, an $O(\log k)$-approximation algorithm is presented for computing the robustness function of minimum spanning trees, where $k$ is the number of edges to be removed. The algorithm uses as key subroutine a 2-approximation algorithm for the problem of dividing a graph into the maximum number of components by removing $k$ edges from it. For the continuous model, a number of results are presented. First, a general algorithm is given for computing the robustness function of any matroid. The algorithm runs in strongly polynomial time on matroids with a strongly polynomial time independence test. Faster algorithms are also presented for some particular classes of matroids: (1) an $O(n^3m^2 \log (n^2/m))$-time algorithm for graphic matroids, where $m$ is the number of elements in the matroid and $n$ is its rank, (2) an $O(mn(m+n^2)|E|\log(m^2/|E|+2))$-time algorithm for transversal matroids, where $|E|$ is a parameter of the matroid, (3) an $O(m^2n^2)$-time algorithm for scheduling matroids, and (4) an $O(m \log m)$-time algorithm for partition matroids. For this last class of matroids an optimal algorithm is also presented for evaluating the robustness function at a single point.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Frederickson, Greg N.
%A Solis-Oba, Roberto
%+ External Organizations
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Robustness analysis in combinatorial optimization :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-7BD3-5
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1998
%P 66 p.
%X The robustness function of an optimization problem measures the maximum
change in the value of its optimal solution that can be produced by changes of
a given total magnitude on the values of the elements in its input. The
problem of computing the robustness function of matroid optimization problems is
studied under two cost models: the discrete model, which allows the
removal of elements from the input, and the continuous model, which
permits finite changes on the values of the elements in the input.
For the discrete model, an $O(\log k)$-approximation algorithm is presented
for computing the robustness function of minimum spanning trees, where $k$ is
the number of edges to be removed. The algorithm uses as a key subroutine a
2-approximation algorithm for the problem of dividing a graph into the maximum
number of components by removing $k$ edges from it.
For the continuous model, a number of results are presented. First, a general
algorithm is given for computing the robustness function of any matroid. The
algorithm runs in strongly polynomial time on matroids with a strongly
polynomial time independence test. Faster algorithms are also presented for
some particular classes of matroids: (1) an $O(n^3m^2 \log (n^2/m))$-time
algorithm for graphic matroids, where $m$ is the number of elements in the
matroid and $n$ is its rank, (2) an $O(mn(m+n^2)|E|\log(m^2/|E|+2))$-time
algorithm for transversal matroids, where $|E|$ is a parameter of the matroid,
(3) an $O(m^2n^2)$-time algorithm for scheduling matroids, and (4) an
$O(m \log m)$-time algorithm for partition matroids. For this last class of
matroids an optimal algorithm is also presented for evaluating the robustness
function at a single point.
%B Research Report / Max-Planck-Institut für Informatik
Fully dynamic shortest paths and negative cycle detection on digraphs with Arbitrary Arc Weights
D. Frigioni, A. Marchetti-Spaccamela and U. Nanni
Technical Report, 1998
D. Frigioni, A. Marchetti-Spaccamela and U. Nanni
Technical Report, 1998
Abstract
We study the problem of maintaining the distances and the shortest
paths from a source node in a directed graph with arbitrary arc
weights, when weight updates of arcs are performed. We propose
algorithms that work for any digraph and have optimal space
requirements and query time. If a negative-length cycle is introduced
during weight decrease operations, it is detected by the algorithms. The
proposed algorithms explicitly deal with zero-length cycles. The cost
of update operations depends on the class of the considered digraph
and on the number of the output updates. We show that, if the digraph
has a $k$-bounded accounting function (as in the case of digraphs with
genus, arboricity, degree, treewidth or pagenumber bounded by $k$) the
update procedures require $O(k\cdot n\cdot \log n)$ worst case
time. In the case of digraphs with $n$ nodes and $m$ arcs
$k=O(\sqrt{m})$, and hence we obtain $O(\sqrt{m}\cdot n \cdot \log n)$
worst case time per operation, which is better by a factor of
$O(\sqrt{m} / \log n)$ than recomputing everything from scratch after
each input update.
If we also perform insertions and deletions of arcs, all the above
bounds become amortized.
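The from-scratch baseline that the report's dynamic algorithms improve on can be made concrete. The sketch below is an illustrative, hypothetical implementation of the static recomputation (Bellman-Ford with a final negative-cycle check), not the paper's dynamic algorithm; the function and variable names are invented for this example.

```python
# Illustrative baseline only: recomputing single-source distances from
# scratch with Bellman-Ford costs O(n * m) per run on a digraph with
# arbitrary arc weights, and detects a negative-length cycle if one exists.
def bellman_ford(n, arcs, source):
    """n nodes 0..n-1, arcs = [(u, v, weight), ...]; returns (dist, has_negative_cycle)."""
    INF = float("inf")
    dist = [INF] * n
    dist[source] = 0
    for _ in range(n - 1):                      # n-1 relaxation rounds suffice
        changed = False
        for u, v, w in arcs:
            if dist[u] != INF and dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                changed = True
        if not changed:
            break
    # One extra pass: any further improvement implies a negative-length cycle.
    has_neg = any(dist[u] != INF and dist[u] + w < dist[v] for u, v, w in arcs)
    return dist, has_neg
```

For example, `bellman_ford(3, [(0, 1, 2), (1, 2, -1), (0, 2, 5)], 0)` yields distances `[0, 2, 1]` and no negative cycle.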
Export
BibTeX
@techreport{FrigioniMarchetti-SpaccamelaNanni98,
TITLE = {Fully dynamic shortest paths and negative cycle detection on digraphs with arbitrary arc weights},
AUTHOR = {Frigioni, Daniele and Marchetti-Spaccamela, A. and Nanni, U.},
LANGUAGE = {eng},
NUMBER = {MPI-I-1998-1-009},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1998},
DATE = {1998},
ABSTRACT = {We study the problem of maintaining the distances and the shortest paths from a source node in a directed graph with arbitrary arc weights, when weight updates of arcs are performed. We propose algorithms that work for any digraph and have optimal space requirements and query time. If a negative--length cycle is introduced during weight decrease operations it is detected by the algorithms. The proposed algorithms explicitly deal with zero--length cycles. The cost of update operations depends on the class of the considered digraph and on the number of the output updates. We show that, if the digraph has a $k$-bounded accounting function (as in the case of digraphs with genus, arboricity, degree, treewidth or pagenumber bounded by $k$) the update procedures require $O(k\cdot n\cdot \log n)$ worst case time. In the case of digraphs with $n$ nodes and $m$ arcs $k=O(\sqrt{m})$, and hence we obtain $O(\sqrt{m}\cdot n \cdot \log n)$ worst case time per operation, which is better by a factor of $O(\sqrt{m} / \log n)$ than recomputing everything from scratch after each input update. If we also perform insertions and deletions of arcs, all the above bounds become amortized.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Frigioni, Daniele
%A Marchetti-Spaccamela, A.
%A Nanni, U.
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
External Organizations
External Organizations
%T Fully dynamic shortest paths and negative cycle detection on digraphs with arbitrary arc weights :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-7BD9-A
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1998
%P 18 p.
%X We study the problem of maintaining the distances and the shortest
paths from a source node in a directed graph with arbitrary arc
weights, when weight updates of arcs are performed. We propose
algorithms that work for any digraph and have optimal space
requirements and query time. If a negative--length cycle is introduced
during weight decrease operations it is detected by the algorithms. The
proposed algorithms explicitly deal with zero--length cycles. The cost
of update operations depends on the class of the considered digraph
and on the number of the output updates. We show that, if the digraph
has a $k$-bounded accounting function (as in the case of digraphs with
genus, arboricity, degree, treewidth or pagenumber bounded by $k$) the
update procedures require $O(k\cdot n\cdot \log n)$ worst case
time. In the case of digraphs with $n$ nodes and $m$ arcs
$k=O(\sqrt{m})$, and hence we obtain $O(\sqrt{m}\cdot n \cdot \log n)$
worst case time per operation, which is better by a factor of
$O(\sqrt{m} / \log n)$ than recomputing everything from scratch after
each input update.
If we also perform insertions and deletions of arcs, all the above
bounds become amortized.
%B Research Report / Max-Planck-Institut für Informatik
Simpler and faster static AC$^0$ dictionaries
T. Hagerup
Technical Report, 1998
T. Hagerup
Technical Report, 1998
Abstract
We consider the static dictionary problem of using
$O(n)$ $w$-bit words to store $n$ $w$-bit keys for
fast retrieval on a $w$-bit \ACz\ RAM, i.e., on a
RAM with a word length of $w$ bits whose
instruction set is arbitrary, except that each instruction
must be realizable through an unbounded-fanin circuit
of constant depth and $w^{O(1)}$ size, and that the
instruction set must be finite and independent of the
keys stored.
We improve the best known upper bounds
for moderate values of~$w$ relative to $n$.
If ${w/{\log n}}=(\log\log n)^{O(1)}$,
query time $(\log\log\log n)^{O(1)}$ is achieved, and if
additionally ${w/{\log n}}\ge(\log\log n)^{1+\epsilon}$
for some fixed $\epsilon>0$, the query time
is constant.
For both of these special cases, the best previous
upper bound was $O(\log\log n)$.
Export
BibTeX
@techreport{Torben98,
TITLE = {Simpler and faster static {AC}\${\textasciicircum}0\$ dictionaries},
AUTHOR = {Hagerup, Torben},
LANGUAGE = {eng},
NUMBER = {MPI-I-1998-1-001},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1998},
DATE = {1998},
ABSTRACT = {We consider the static dictionary problem of using $O(n)$ $w$-bit words to store $n$ $w$-bit keys for fast retrieval on a $w$-bit \ACz\ RAM, i.e., on a RAM with a word length of $w$ bits whose instruction set is arbitrary, except that each instruction must be realizable through an unbounded-fanin circuit of constant depth and $w^{O(1)}$ size, and that the instruction set must be finite and independent of the keys stored. We improve the best known upper bounds for moderate values of~$w$ relative to $n$. If ${w/{\log n}}=(\log\log n)^{O(1)}$, query time $(\log\log\log n)^{O(1)}$ is achieved, and if additionally ${w/{\log n}}\ge(\log\log n)^{1+\epsilon}$ for some fixed $\epsilon>0$, the query time is constant. For both of these special cases, the best previous upper bound was $O(\log\log n)$.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Hagerup, Torben
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Simpler and faster static AC$^0$ dictionaries :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-9A0E-5
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1998
%P 13 p.
%X We consider the static dictionary problem of using
$O(n)$ $w$-bit words to store $n$ $w$-bit keys for
fast retrieval on a $w$-bit \ACz\ RAM, i.e., on a
RAM with a word length of $w$ bits whose
instruction set is arbitrary, except that each instruction
must be realizable through an unbounded-fanin circuit
of constant depth and $w^{O(1)}$ size, and that the
instruction set must be finite and independent of the
keys stored.
We improve the best known upper bounds
for moderate values of~$w$ relative to $n$.
If ${w/{\log n}}=(\log\log n)^{O(1)}$,
query time $(\log\log\log n)^{O(1)}$ is achieved, and if
additionally ${w/{\log n}}\ge(\log\log n)^{1+\epsilon}$
for some fixed $\epsilon>0$, the query time
is constant.
For both of these special cases, the best previous
upper bound was $O(\log\log n)$.
%B Research Report / Max-Planck-Institut für Informatik
Scheduling multicasts on unit-capacity trees and meshes
M. R. Henzinger and S. Leonardi
Technical Report, 1998
M. R. Henzinger and S. Leonardi
Technical Report, 1998
Abstract
This paper studies the multicast routing and admission control problem
on unit-capacity tree and mesh topologies in the throughput-model.
The problem is a generalization of the edge-disjoint paths problem and
is NP-hard both on trees and meshes.
We study both the offline and the online version of the problem:
In the offline setting, we give the first constant-factor approximation
algorithm for trees, and an O((log log n)^2)-factor approximation algorithm
for meshes.
In the online setting, we give the first polylogarithmic competitive
online algorithm for tree and mesh topologies. No polylogarithmic-competitive
algorithm is possible on general network topologies [Bartal,Fiat,Leonardi, 96],
and there exists a polylogarithmic lower bound on the competitive ratio
of any online algorithm on tree topologies [Awerbuch,Azar,Fiat,Leighton, 96].
We prove the same lower bound for meshes.
Export
BibTeX
@techreport{HenzingerLeonardi98,
TITLE = {Scheduling multicasts on unit-capacity trees and meshes},
AUTHOR = {Henzinger, Monika R. and Leonardi, Stefano},
LANGUAGE = {eng},
NUMBER = {MPI-I-1998-1-015},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1998},
DATE = {1998},
ABSTRACT = {This paper studies the multicast routing and admission control problem on unit-capacity tree and mesh topologies in the throughput-model. The problem is a generalization of the edge-disjoint paths problem and is NP-hard both on trees and meshes. We study both the offline and the online version of the problem: In the offline setting, we give the first constant-factor approximation algorithm for trees, and an O((log log n)^2)-factor approximation algorithm for meshes. In the online setting, we give the first polylogarithmic competitive online algorithm for tree and mesh topologies. No polylogarithmic-competitive algorithm is possible on general network topologies [Bartal,Fiat,Leonardi, 96], and there exists a polylogarithmic lower bound on the competitive ratio of any online algorithm on tree topologies [Awerbuch,Azar,Fiat,Leighton, 96]. We prove the same lower bound for meshes.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Henzinger, Monika R.
%A Leonardi, Stefano
%+ External Organizations
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Scheduling multicasts on unit-capacity trees and meshes :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-7BC5-5
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1998
%P 38 p.
%X This paper studies the multicast routing and admission control problem
on unit-capacity tree and mesh topologies in the throughput-model.
The problem is a generalization of the edge-disjoint paths problem and
is NP-hard both on trees and meshes.
We study both the offline and the online version of the problem:
In the offline setting, we give the first constant-factor approximation
algorithm for trees, and an O((log log n)^2)-factor approximation algorithm
for meshes.
In the online setting, we give the first polylogarithmic competitive
online algorithm for tree and mesh topologies. No polylogarithmic-competitive
algorithm is possible on general network topologies [Bartal,Fiat,Leonardi, 96],
and there exists a polylogarithmic lower bound on the competitive ratio
of any online algorithm on tree topologies [Awerbuch,Azar,Fiat,Leighton, 96].
We prove the same lower bound for meshes.
%B Research Report / Max-Planck-Institut für Informatik
Improved approximation schemes for scheduling unrelated parallel machines
K. Jansen and L. Porkolab
Technical Report, 1998a
K. Jansen and L. Porkolab
Technical Report, 1998a
Abstract
We consider the problem of scheduling $n$ independent jobs on $m$ unrelated
parallel machines. Each job has to be processed by exactly one machine,
processing job $j$ on machine $i$ requires $p_{ij}$ time units, and the
objective is to minimize the makespan, i.e. the maximum job completion time.
We focus on the case when $m$ is fixed and develop a fully polynomial
approximation scheme whose running time depends only linearly on $n$. In the
second half of the paper we extend this result to a variant of the problem,
where processing job $j$ on machine $i$ also incurs a cost of $c_{ij}$, and
thus there are two optimization criteria: makespan and cost. We show that for
any fixed $m$, there is a fully polynomial approximation scheme that, given
values $T$ and $C$, computes for any fixed $\epsilon > 0$ a schedule in $O(n)$
time with makespan at most $(1+\epsilon)T$ and cost at most $(1 + \epsilon)C$,
if there exists a schedule of makespan $T$ and cost $C$.
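For intuition, the exact optimum that the approximation scheme targets can be computed by brute force on tiny instances. The sketch below is illustrative only (exponential in $n$, unlike the paper's linear-time scheme); the function name and interface are invented for this example.

```python
from itertools import product

# Brute force for scheduling on unrelated machines, tiny instances only:
# try all m^n assignments; p[i][j] = processing time of job j on machine i.
def optimal_makespan(p):
    m, n = len(p), len(p[0])
    best = float("inf")
    for assign in product(range(m), repeat=n):  # machine chosen for each job
        loads = [0] * m
        for job, machine in enumerate(assign):
            loads[machine] += p[machine][job]
        best = min(best, max(loads))
    return best
```

With `p = [[2, 3], [3, 1]]`, assigning job 0 to machine 0 and job 1 to machine 1 gives the optimal makespan 2.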
Export
BibTeX
@techreport{JansenPorkolab98-1-026,
TITLE = {Improved approximation schemes for scheduling unrelated parallel machines},
AUTHOR = {Jansen, Klaus and Porkolab, Lorant},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1998-1-026},
NUMBER = {MPI-I-1998-1-026},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1998},
DATE = {1998},
ABSTRACT = {We consider the problem of scheduling $n$ independent jobs on $m$ unrelated parallel machines. Each job has to be processed by exactly one machine, processing job $j$ on machine $i$ requires $p_{ij}$ time units, and the objective is to minimize the makespan, i.e. the maximum job completion time. We focus on the case when $m$ is fixed and develop a fully polynomial approximation scheme whose running time depends only linearly on $n$. In the second half of the paper we extend this result to a variant of the problem, where processing job $j$ on machine $i$ also incurs a cost of $c_{ij}$, and thus there are two optimization criteria: makespan and cost. We show that for any fixed $m$, there is a fully polynomial approximation scheme that, given values $T$ and $C$, computes for any fixed $\epsilon > 0$ a schedule in $O(n)$ time with makespan at most $(1+\epsilon)T$ and cost at most $(1 + \epsilon)C$, if there exists a schedule of makespan $T$ and cost $C$.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Jansen, Klaus
%A Porkolab, Lorant
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Improved approximation schemes for scheduling unrelated parallel machines :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-7B69-4
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1998-1-026
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1998
%P 14 p.
%X We consider the problem of scheduling $n$ independent jobs on $m$ unrelated
parallel machines. Each job has to be processed by exactly one machine,
processing job $j$ on machine $i$ requires $p_{ij}$ time units, and the
objective is to minimize the makespan, i.e. the maximum job completion time.
We focus on the case when $m$ is fixed and develop a fully polynomial
approximation scheme whose running time depends only linearly on $n$. In the
second half of the paper we extend this result to a variant of the problem,
where processing job $j$ on machine $i$ also incurs a cost of $c_{ij}$, and
thus there are two optimization criteria: makespan and cost. We show that for
any fixed $m$, there is a fully polynomial approximation scheme that, given
values $T$ and $C$, computes for any fixed $\epsilon > 0$ a schedule in $O(n)$
time with makespan at most $(1+\epsilon)T$ and cost at most $(1 + \epsilon)C$,
if there exists a schedule of makespan $T$ and cost $C$.
%B Research Report / Max-Planck-Institut für Informatik
A new characterization for parity graphs and a coloring problem with costs
K. Jansen
Technical Report, 1998a
K. Jansen
Technical Report, 1998a
Abstract
In this paper, we give a characterization of parity graphs.
A graph is a parity graph if and only if, for every pair of vertices,
all minimal chains joining them have the same parity. We prove
that $G$ is a parity graph if and only if the Cartesian product
$G \times K_2$ is a perfect graph.
Furthermore, as a consequence we get a result for the polyhedron
corresponding to an integer linear program formulation of a
coloring problem with costs. For the case that the costs $k_{v,3} = k_{v,c}$
for each color $c \ge 3$ and vertex $v \in V$, we show that the
polyhedron contains only
integral $0 / 1$ extrema if and only if the graph $G$ is a parity graph.
Export
BibTeX
@techreport{Jansen98-1-006,
TITLE = {A new characterization for parity graphs and a coloring problem with costs},
AUTHOR = {Jansen, Klaus},
LANGUAGE = {eng},
NUMBER = {MPI-I-1998-1-006},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1998},
DATE = {1998},
ABSTRACT = {In this paper, we give a characterization for parity graphs. A graph is a parity graph, if and only if for every pair of vertices all minimal chains joining them have the same parity. We prove that $G$ is a parity graph, if and only if the cartesian product $G \times K_2$ is a perfect graph. Furthermore, as a consequence we get a result for the polyhedron corresponding to an integer linear program formulation of a coloring problem with costs. For the case that the costs $k_{v,3} = k_{v,c}$ for each color $c \ge 3$ and vertex $v \in V$, we show that the polyhedron contains only integral $0 / 1$ extrema if and only if the graph $G$ is a parity graph.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Jansen, Klaus
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T A new characterization for parity graphs and a coloring problem with costs :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-7BE2-3
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1998
%P 16 p.
%X In this paper, we give a characterization of parity graphs.
A graph is a parity graph if and only if, for every pair of vertices,
all minimal chains joining them have the same parity. We prove
that $G$ is a parity graph if and only if the Cartesian product
$G \times K_2$ is a perfect graph.
Furthermore, as a consequence we get a result for the polyhedron
corresponding to an integer linear program formulation of a
coloring problem with costs. For the case that the costs $k_{v,3} = k_{v,c}$
for each color $c \ge 3$ and vertex $v \in V$, we show that the
polyhedron contains only
integral $0 / 1$ extrema if and only if the graph $G$ is a parity graph.
%B Research Report / Max-Planck-Institut für Informatik
Linear-time approximation schemes for scheduling malleable parallel tasks
K. Jansen and L. Porkolab
Technical Report, 1998b
K. Jansen and L. Porkolab
Technical Report, 1998b
Abstract
A malleable parallel task is one whose execution time is a function of the
number of (identical) processors allotted to it. We study the problem of
scheduling a set of $n$ independent malleable tasks on a fixed number of
parallel processors, and propose an approximation scheme that for any fixed
$\epsilon > 0$, computes in $O(n)$ time a non-preemptive schedule of length
at most $(1+\epsilon)$ times the optimum.
Export
BibTeX
@techreport{JansenPorkolab98-1-025,
TITLE = {Linear-time approximation schemes for scheduling malleable parallel tasks},
AUTHOR = {Jansen, Klaus and Porkolab, Lorant},
LANGUAGE = {eng},
NUMBER = {MPI-I-1998-1-025},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1998},
DATE = {1998},
ABSTRACT = {A malleable parallel task is one whose execution time is a function of the number of (identical) processors allotted to it. We study the problem of scheduling a set of $n$ independent malleable tasks on a fixed number of parallel processors, and propose an approximation scheme that for any fixed $\epsilon > 0$, computes in $O(n)$ time a non-preemptive schedule of length at most $(1+\epsilon)$ times the optimum.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Jansen, Klaus
%A Porkolab, Lorant
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Linear-time approximation schemes for scheduling malleable parallel tasks :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-7B6C-D
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1998
%P 15 p.
%X A malleable parallel task is one whose execution time is a function of the
number of (identical) processors allotted to it. We study the problem of
scheduling a set of $n$ independent malleable tasks on a fixed number of
parallel processors, and propose an approximation scheme that for any fixed
$\epsilon > 0$, computes in $O(n)$ time a non-preemptive schedule of length
at most $(1+\epsilon)$ times the optimum.
%B Research Report / Max-Planck-Institut für Informatik
The mutual exclusion scheduling problem for permutation and comparability graphs
K. Jansen
Technical Report, 1998b
K. Jansen
Technical Report, 1998b
Abstract
In this paper, we consider the mutual exclusion scheduling problem
for comparability graphs.
Given an undirected graph $G$ and a fixed constant $m$, the problem is to
find a minimum coloring of $G$ such that each color is used at most $m$
times. The complexity of this problem for comparability graphs was mentioned as an open problem
by M\"ohring (1985) and for permutation graphs (a
subclass of comparability graphs) as an open problem by Lonc (1991). We
prove that this problem is already NP-complete for permutation graphs and
for each fixed constant $m \ge 6$.
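The NP-completeness result concerns the general problem; on tiny instances the quantity in question can still be computed exhaustively. The sketch below is an illustrative brute force for the bounded coloring (names invented for this example), not an algorithm from the report.

```python
from itertools import product

# Mutual exclusion scheduling, brute force: smallest k such that G has a
# proper k-coloring in which every color class has size at most m.
# adj is a symmetric adjacency list over vertices 0..n-1.
def mes_colors(adj, m):
    n = len(adj)
    for k in range(1, n + 1):
        for col in product(range(k), repeat=n):
            if all(col.count(c) <= m for c in range(k)) and \
               all(col[u] != col[v] for u in range(n) for v in adj[u]):
                return k
    return n
```

On the path with three vertices, two colors suffice when $m = 2$, but $m = 1$ forces three colors (each vertex gets its own color).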
Export
BibTeX
@techreport{Jansen98-1-005,
TITLE = {The mutual exclusion scheduling problem for permutation and comparability graphs},
AUTHOR = {Jansen, Klaus},
LANGUAGE = {eng},
NUMBER = {MPI-I-1998-1-005},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1998},
DATE = {1998},
ABSTRACT = {In this paper, we consider the mutual exclusion scheduling problem for comparability graphs. Given an undirected graph $G$ and a fixed constant $m$, the problem is to find a minimum coloring of $G$ such that each color is used at most $m$ times. The complexity of this problem for comparability graphs was mentioned as an open problem by M\"ohring (1985) and for permutation graphs (a subclass of comparability graphs) as an open problem by Lonc (1991). We prove that this problem is already NP-complete for permutation graphs and for each fixed constant $m \ge 6$.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Jansen, Klaus
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T The mutual exclusion scheduling problem for permutation and comparability graphs :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-7BE5-E
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1998
%P 12 p.
%X In this paper, we consider the mutual exclusion scheduling problem
for comparability graphs.
Given an undirected graph $G$ and a fixed constant $m$, the problem is to
find a minimum coloring of $G$ such that each color is used at most $m$
times. The complexity of this problem for comparability graphs was mentioned as an open problem
by M\"ohring (1985) and for permutation graphs (a
subclass of comparability graphs) as an open problem by Lonc (1991). We
prove that this problem is already NP-complete for permutation graphs and
for each fixed constant $m \ge 6$.
%B Research Report / Max-Planck-Institut für Informatik
A note on computing a maximal planar subgraph using PQ-trees
M. Jünger, S. Leipert and P. Mutzel
Technical Report, 1998
M. Jünger, S. Leipert and P. Mutzel
Technical Report, 1998
Abstract
The problem of computing a maximal planar subgraph of a non-planar graph has been deeply
investigated over the last 20 years. Several attempts have been made to solve the problem
with the help of PQ-trees. The latest attempt has been reported by Jayakumar et al. [10].
In this paper we show that the algorithm presented by Jayakumar et al. is not correct. We
show that it does not necessarily compute a maximal planar subgraph, and we note that the same
holds for a modified version of the algorithm presented by Kant [12]. Our conclusions strongly
suggest that PQ-trees should not be used at all for this specific problem.
Export
BibTeX
@techreport{J'ungerLeipertMutzel98,
TITLE = {A note on computing a maximal planar subgraph using {PQ}-trees},
AUTHOR = {J{\"u}nger, Michael and Leipert, Sebastian and Mutzel, Petra},
LANGUAGE = {eng},
NUMBER = {MPI-I-1998-1-008},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1998},
DATE = {1998},
ABSTRACT = {The problem of computing a maximal planar subgraph of a non-planar graph has been deeply investigated over the last 20 years. Several attempts have been made to solve the problem with the help of PQ-trees. The latest attempt has been reported by Jayakumar et al. [10]. In this paper we show that the algorithm presented by Jayakumar et al. is not correct. We show that it does not necessarily compute a maximal planar subgraph, and we note that the same holds for a modified version of the algorithm presented by Kant [12]. Our conclusions strongly suggest that PQ-trees should not be used at all for this specific problem.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Jünger, Michael
%A Leipert, Sebastian
%A Mutzel, Petra
%+ External Organizations
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T A note on computing a maximal planar subgraph using PQ-trees :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-7BDC-4
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1998
%P 5 p.
%X The problem of computing a maximal planar subgraph of a non-planar graph has been deeply
investigated over the last 20 years. Several attempts have been made to solve the problem
with the help of PQ-trees. The latest attempt has been reported by Jayakumar et al. [10].
In this paper we show that the algorithm presented by Jayakumar et al. is not correct. We
show that it does not necessarily compute a maximal planar subgraph, and we note that the same
holds for a modified version of the algorithm presented by Kant [12]. Our conclusions strongly
suggest that PQ-trees should not be used at all for this specific problem.
%B Research Report / Max-Planck-Institut für Informatik
Optimal compaction of orthogonal grid drawings
G. W. Klau and P. Mutzel
Technical Report, 1998a
G. W. Klau and P. Mutzel
Technical Report, 1998a
Abstract
We consider the two--dimensional compaction problem for orthogonal grid drawings in which the task is to alter the coordinates of the vertices and edge segments while preserving the shape of the drawing so that the total edge length is minimized. The problem is closely related to two--dimensional compaction in {\sc VLSI}--design and is conjectured to be {\sl NP}--hard.
We characterize the set of feasible solutions for the two--dimensional compaction problem in terms of paths in the so--called constraint graphs in $x$-- and $y$--direction. Similar graphs (known as {\em layout graphs}) have already been used for one--dimensional compaction in {\sc VLSI}--design, but this is the first time that a direct connection between these graphs is established. Given the pair of constraint graphs, the two--dimensional compaction task can be viewed as extending these graphs by new arcs so that certain conditions are satisfied and the total edge length is minimized. We can recognize those instances having only one such extension; for these cases we can solve the compaction problem in polynomial time.
We have transformed the geometrical problem into a graph--theoretical one which can
be formulated as an integer linear program. Our computational experiments have shown
that the new approach works well in practice. It is the first time that the two--dimensional
compaction problem is formulated as an integer linear program.
Export
BibTeX
@techreport{KlauMutzel98-1-031,
TITLE = {Optimal compaction of orthogonal grid drawings},
AUTHOR = {Klau, Gunnar W. and Mutzel, Petra},
LANGUAGE = {eng},
NUMBER = {MPI-I-1998-1-031},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1998},
DATE = {1998},
ABSTRACT = {We consider the two--dimensional compaction problem for orthogonal grid drawings in which the task is to alter the coordinates of the vertices and edge segments while preserving the shape of the drawing so that the total edge length is minimized. The problem is closely related to two--dimensional compaction in {\sc VLSI}--design and is conjectured to be {\sl NP}--hard. We characterize the set of feasible solutions for the two--dimensional compaction problem in terms of paths in the so--called constraint graphs in $x$-- and $y$--direction. Similar graphs (known as {\em layout graphs}) have already been used for one--dimensional compaction in {\sc VLSI}--design, but this is the first time that a direct connection between these graphs is established. Given the pair of constraint graphs, the two--dimensional compaction task can be viewed as extending these graphs by new arcs so that certain conditions are satisfied and the total edge length is minimized. We can recognize those instances having only one such extension; for these cases we can solve the compaction problem in polynomial time. We have transformed the geometrical problem into a graph--theoretical one which can be formulated as an integer linear program. Our computational experiments have shown that the new approach works well in practice. It is the first time that the two--dimensional compaction problem is formulated as an integer linear program.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Klau, Gunnar W.
%A Mutzel, Petra
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Optimal compaction of orthogonal grid drawings :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-7B5A-6
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1998
%P 20 p.
%X We consider the two--dimensional compaction problem for orthogonal grid drawings in which the task is to alter the coordinates of the vertices and edge segments while preserving the shape of the drawing so that the total edge length is minimized. The problem is closely related to two--dimensional compaction in {\sc VLSI}--design and is conjectured to be {\sl NP}--hard.
We characterize the set of feasible solutions for the two--dimensional compaction problem in terms of paths in the so--called constraint graphs in $x$-- and $y$--direction. Similar graphs (known as {\em layout graphs}) have already been used for one--dimensional compaction in {\sc VLSI}--design, but this is the first time that a direct connection between these graphs is established. Given the pair of constraint graphs, the two--dimensional compaction task can be viewed as extending these graphs by new arcs so that certain conditions are satisfied and the total edge length is minimized. We can recognize those instances having only one such extension; for these cases we can solve the compaction problem in polynomial time.
We have transformed the geometrical problem into a graph--theoretical one which can
be formulated as an integer linear program. Our computational experiments have shown
that the new approach works well in practice. It is the first time that the two--dimensional
compaction problem is formulated as an integer linear program.
%B Research Report / Max-Planck-Institut für Informatik
Quasi-orthogonal drawing of planar graphs
G. W. Klau and P. Mutzel
Technical Report, 1998b
G. W. Klau and P. Mutzel
Technical Report, 1998b
Abstract
Orthogonal drawings of graphs are widely accepted in practice. For planar
graphs with vertex degree of at most four, Tamassia gives a polynomial-time
algorithm which computes a region-preserving orthogonal grid embedding with the
minimum number of bends. However, the graphs arising in practical applications
rarely have bounded vertex degree. In order to cope with general planar
graphs, we introduce the quasi--orthogonal drawing model. In this model,
vertices are drawn on grid points, and edges follow the grid paths except around
vertices of high degree. Furthermore we present an extension of Tamassia's
algorithm that constructs quasi--orthogonal drawings. We compare the drawings
to those obtained using related approaches.
Export
BibTeX
@techreport{KlauMutzel98-1-013,
TITLE = {Quasi-orthogonal drawing of planar graphs},
AUTHOR = {Klau, Gunnar W. and Mutzel, Petra},
LANGUAGE = {eng},
NUMBER = {MPI-I-1998-1-013},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1998},
DATE = {1998},
ABSTRACT = {Orthogonal drawings of graphs are highly accepted in practice. For planar graphs with vertex degree of at most four, Tamassia gives a polynomial time algorithm which computes a region preserving orthogonal grid embedding with the minimum number of bends. However, the graphs arising in practical applications rarely have bounded vertex degree. In order to cope with general planar graphs, we introduce the quasi--orthogonal drawing model. In this model, vertices are drawn on grid points, and edges follow the grid paths except around vertices of high degree. Furthermore we present an extension of Tamassia's algorithm that constructs quasi--orthogonal drawings. We compare the drawings to those obtained using related approaches.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Klau, Gunnar W.
%A Mutzel, Petra
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Quasi-orthogonal drawing of planar graphs :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-7BCC-8
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1998
%P 15 p.
%X Orthogonal drawings of graphs are highly accepted in practice. For planar
graphs with vertex degree of at most four, Tamassia gives a polynomial time
algorithm which computes a region preserving orthogonal grid embedding with the
minimum number of bends. However, the graphs arising in practical applications
rarely have bounded vertex degree. In order to cope with general planar
graphs, we introduce the quasi-orthogonal drawing model. In this model,
vertices are drawn on grid points, and edges follow the grid paths except around
vertices of high degree. Furthermore, we present an extension of Tamassia's
algorithm that constructs quasi-orthogonal drawings. We compare the drawings
to those obtained using related approaches.
%B Research Report / Max-Planck-Institut für Informatik
New approximation algorithms for the achromatic number
P. Krysta and K. Lorys
Technical Report, 1998
P. Krysta and K. Lorys
Technical Report, 1998
Abstract
The achromatic number of a graph is the greatest number of colors in a
coloring of the vertices of the graph such that adjacent vertices get
distinct colors and for every pair of colors some
vertex of the first color and some vertex of the second color are adjacent.
The problem of computing this number is NP-complete for general graphs
as proved by Yannakakis and Gavril 1980. The problem is also NP-complete
for trees, as proved by Cairnie and Edwards 1997.
Chaudhary and Vishwanathan 1997 recently gave a $7$-approximation
algorithm for this problem on trees, and an $O(\sqrt{n})$-approximation
algorithm for the problem on
graphs with girth (length of the shortest cycle) at least six.
We present the first $2$-approximation algorithm for the problem on trees.
This is a new algorithm based on ideas different from those of Chaudhary and
Vishwanathan 1997.
We then give a $1.15$-approximation algorithm for the problem on binary trees and a
$1.58$-approximation for the problem on trees of constant degree. We show that
the algorithms for constant degree trees can be implemented in linear time.
We also present the first $O(n^{3/8})$-approximation algorithm for the problem
on graphs with girth at least six.
Our algorithms are based on an interesting tree partitioning technique.
Moreover, we improve the lower bound of Farber {\em et al.} 1986
for the achromatic number of trees with degree bounded by three.
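The definition in this abstract (a proper coloring in which every pair of colors meets on at least one edge, i.e. a complete coloring) can be made concrete with a small brute-force checker. This is an illustrative sketch only, not the authors' algorithm; `achromatic_number` is a hypothetical helper and runs in exponential time.

```python
from itertools import product

def is_complete_coloring(edges, coloring, k):
    """Proper: adjacent vertices get distinct colors.
    Complete: every pair of the k colors meets on at least one edge."""
    if any(coloring[u] == coloring[v] for u, v in edges):
        return False
    met = {frozenset((coloring[u], coloring[v])) for u, v in edges}
    wanted = {frozenset((a, b)) for a in range(k) for b in range(a + 1, k)}
    return wanted <= met and set(coloring.values()) == set(range(k))

def achromatic_number(vertices, edges):
    """Greatest k admitting a complete proper k-coloring (brute force)."""
    for k in range(len(vertices), 0, -1):
        for colors in product(range(k), repeat=len(vertices)):
            if is_complete_coloring(edges, dict(zip(vertices, colors)), k):
                return k
    return 0
```

For the path on four vertices, for instance, the value is 3 (color the path 0, 1, 2, 0), while four colors would need six color pairs on only three edges.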
Export
BibTeX
@techreport{KrystaLorys98-1-016,
TITLE = {New approximation algorithms for the achromatic number},
AUTHOR = {Krysta, Piotr and Lorys, Krzysztof},
LANGUAGE = {eng},
NUMBER = {MPI-I-1998-1-016},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1998},
DATE = {1998},
ABSTRACT = {The achromatic number of a graph is the greatest number of colors in a coloring of the vertices of the graph such that adjacent vertices get distinct colors and for every pair of colors some vertex of the first color and some vertex of the second color are adjacent. The problem of computing this number is NP-complete for general graphs as proved by Yannakakis and Gavril 1980. The problem is also NP-complete for trees, as proved by Cairnie and Edwards 1997. Chaudhary and Vishwanathan 1997 recently gave a $7$-approximation algorithm for this problem on trees, and an $O(\sqrt{n})$-approximation algorithm for the problem on graphs with girth (length of the shortest cycle) at least six. We present the first $2$-approximation algorithm for the problem on trees. This is a new algorithm based on ideas different from those of Chaudhary and Vishwanathan 1997. We then give a $1.15$-approximation algorithm for the problem on binary trees and a $1.58$-approximation for the problem on trees of constant degree. We show that the algorithms for constant degree trees can be implemented in linear time. We also present the first $O(n^{3/8})$-approximation algorithm for the problem on graphs with girth at least six. Our algorithms are based on an interesting tree partitioning technique. Moreover, we improve the lower bound of Farber {\em et al.} 1986 for the achromatic number of trees with degree bounded by three.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Krysta, Piotr
%A Lorys, Krzysztof
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
External Organizations
%T New approximation algorithms for the achromatic number :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-7BC1-D
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1998
%P 26 p.
%X The achromatic number of a graph is the greatest number of colors in a
coloring of the vertices of the graph such that adjacent vertices get
distinct colors and for every pair of colors some
vertex of the first color and some vertex of the second color are adjacent.
The problem of computing this number is NP-complete for general graphs
as proved by Yannakakis and Gavril 1980. The problem is also NP-complete
for trees, as proved by Cairnie and Edwards 1997.
Chaudhary and Vishwanathan 1997 recently gave a $7$-approximation
algorithm for this problem on trees, and an $O(\sqrt{n})$-approximation
algorithm for the problem on
graphs with girth (length of the shortest cycle) at least six.
We present the first $2$-approximation algorithm for the problem on trees.
This is a new algorithm based on ideas different from those of Chaudhary and
Vishwanathan 1997.
We then give a $1.15$-approximation algorithm for the problem on binary trees and a
$1.58$-approximation for the problem on trees of constant degree. We show that
the algorithms for constant degree trees can be implemented in linear time.
We also present the first $O(n^{3/8})$-approximation algorithm for the problem
on graphs with girth at least six.
Our algorithms are based on an interesting tree partitioning technique.
Moreover, we improve the lower bound of Farber {\em et al.} 1986
for the achromatic number of trees with degree bounded by three.
%B Research Report / Max-Planck-Institut für Informatik
Solving some discrepancy problems in NC*
S. Mahajan, E. A. Ramos and K. V. Subrahmanyam
Technical Report, 1998
S. Mahajan, E. A. Ramos and K. V. Subrahmanyam
Technical Report, 1998
Export
BibTeX
@techreport{MahajanRamosSubrahmanyam98,
TITLE = {Solving some discrepancy problems in {NC}*},
AUTHOR = {Mahajan, Sanjeev and Ramos, Edgar A. and Subrahmanyam, K. V.},
LANGUAGE = {eng},
NUMBER = {MPI-I-1998-1-012},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1998},
DATE = {1998},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Mahajan, Sanjeev
%A Ramos, Edgar A.
%A Subrahmanyam, K. V.
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Solving some discrepancy problems in NC* :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-7BD0-B
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1998
%P 21 p.
%B Research Report / Max-Planck-Institut für Informatik
2nd Workshop on Algorithm Engineering WAE ’98 -- Proceedings
K. Mehlhorn (Ed.)
Technical Report, 1998
K. Mehlhorn (Ed.)
Technical Report, 1998
Export
BibTeX
@techreport{MehlhornWAE98,
TITLE = {2nd Workshop on Algorithm Engineering {WAE} '98 -- Proceedings},
EDITOR = {Mehlhorn, Kurt},
LANGUAGE = {eng},
NUMBER = {MPI-I-1998-1-019},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1998},
DATE = {1998},
TYPE = {Research Report},
}
Endnote
%0 Report
%E Mehlhorn, Kurt
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T 2nd Workshop on Algorithm Engineering WAE '98 -- Proceedings :
%O WAE 1998
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-A388-E
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1998
%P 213 p.
%B Research Report
Time-independent gossiping on full-port tori
U. Meyer and J. Sibeyn
Technical Report, 1998
U. Meyer and J. Sibeyn
Technical Report, 1998
Abstract
Near-optimal gossiping algorithms are given for two- and higher
dimensional tori. It is assumed that the amount of data each PU
contributes is so large that start-up time may be neglected.
For two-dimensional tori, a previous algorithm achieved optimality
in an intricate way, with a time-dependent routing pattern.
In all steps of our algorithms, the PUs forward the received
packets in the same way.
Export
BibTeX
@techreport{UlrichSibeyn98,
TITLE = {Time-independent gossiping on full-port tori},
AUTHOR = {Meyer, Ulrich and Sibeyn, Jop},
LANGUAGE = {eng},
NUMBER = {MPI-I-1998-1-014},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1998},
DATE = {1998},
ABSTRACT = {Near-optimal gossiping algorithms are given for two- and higher dimensional tori. It is assumed that the amount of data each PU contributes is so large that start-up time may be neglected. For two-dimensional tori, a previous algorithm achieved optimality in an intricate way, with a time-dependent routing pattern. In all steps of our algorithms, the PUs forward the received packets in the same way.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Meyer, Ulrich
%A Sibeyn, Jop
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Time-independent gossiping on full-port tori :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-7BC9-E
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1998
%P 20 p.
%X Near-optimal gossiping algorithms are given for two- and higher
dimensional tori. It is assumed that the amount of data each PU
contributes is so large that start-up time may be neglected.
For two-dimensional tori, a previous algorithm achieved optimality
in an intricate way, with a time-dependent routing pattern.
In all steps of our algorithms, the PUs forward the received
packets in the same way.
%B Research Report / Max-Planck-Institut für Informatik
Optimizing over all combinatorial embeddings of a planar graph
P. Mutzel and R. Weiskircher
Technical Report, 1998
P. Mutzel and R. Weiskircher
Technical Report, 1998
Abstract
We study the problem of optimizing over the set of all combinatorial
embeddings of a given planar graph. Our objective function prefers certain
cycles of $G$ as face cycles in the embedding. The motivation for studying
this problem arises in graph drawing, where the chosen embedding has an
important influence on the aesthetics of the drawing.
We characterize the set of all possible embeddings of a given biconnected
planar graph $G$ by means of a system of linear inequalities with
${0,1}$-variables corresponding to the set of those cycles in $G$ which can
appear in a combinatorial embedding. This system of linear inequalities can be
constructed recursively using the data structure of SPQR-trees and a new
splitting operation.
Our computational results on two benchmark sets of graphs are surprising: The
number of variables and constraints seems to grow only linearly with the size
of the graphs although the number of embeddings grows exponentially. For all
tested graphs (up to 500 vertices) and linear objective functions, the
resulting integer linear programs could be generated within 600 seconds and
solved within two seconds on a Sun Enterprise 10000 using CPLEX.
Export
BibTeX
@techreport{MutzelWeiskircher98,
TITLE = {Optimizing over all combinatorial embeddings of a planar graph},
AUTHOR = {Mutzel, Petra and Weiskircher, Ren{\'e}},
LANGUAGE = {eng},
NUMBER = {MPI-I-1998-1-029},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1998},
DATE = {1998},
ABSTRACT = {We study the problem of optimizing over the set of all combinatorial embeddings of a given planar graph. Our objective function prefers certain cycles of $G$ as face cycles in the embedding. The motivation for studying this problem arises in graph drawing, where the chosen embedding has an important influence on the aesthetics of the drawing. We characterize the set of all possible embeddings of a given biconnected planar graph $G$ by means of a system of linear inequalities with ${0,1}$-variables corresponding to the set of those cycles in $G$ which can appear in a combinatorial embedding. This system of linear inequalities can be constructed recursively using the data structure of SPQR-trees and a new splitting operation. Our computational results on two benchmark sets of graphs are surprising: The number of variables and constraints seems to grow only linearly with the size of the graphs although the number of embeddings grows exponentially. For all tested graphs (up to 500 vertices) and linear objective functions, the resulting integer linear programs could be generated within 600 seconds and solved within two seconds on a Sun Enterprise 10000 using CPLEX.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Mutzel, Petra
%A Weiskircher, René
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Optimizing over all combinatorial embeddings of a planar graph :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-7B66-A
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1998
%P 23 p.
%X We study the problem of optimizing over the set of all combinatorial
embeddings of a given planar graph. Our objective function prefers certain
cycles of $G$ as face cycles in the embedding. The motivation for studying
this problem arises in graph drawing, where the chosen embedding has an
important influence on the aesthetics of the drawing.
We characterize the set of all possible embeddings of a given biconnected
planar graph $G$ by means of a system of linear inequalities with
${0,1}$-variables corresponding to the set of those cycles in $G$ which can
appear in a combinatorial embedding. This system of linear inequalities can be
constructed recursively using the data structure of SPQR-trees and a new
splitting operation.
Our computational results on two benchmark sets of graphs are surprising: The
number of variables and constraints seems to grow only linearly with the size
of the graphs although the number of embeddings grows exponentially. For all
tested graphs (up to 500 vertices) and linear objective functions, the
resulting integer linear programs could be generated within 600 seconds and
solved within two seconds on a Sun Enterprise 10000 using CPLEX.
%B Research Report / Max-Planck-Institut für Informatik
On Wallace’s method for the generation of normal variates
C. Rüb
Technical Report, 1998
C. Rüb
Technical Report, 1998
Abstract
A method proposed by Wallace for the generation of normal random
variates is examined. His method works by transforming a pool
of numbers from the normal distribution into a new pool of numbers.
This is in contrast to almost all other known methods that transform one
or more variates from the uniform distribution into one or more
variates from the normal distribution. Unfortunately, a direct
implementation of Wallace's method has a serious flaw:
if consecutive numbers produced by this method are added, the
resulting variate, which should also be normally distributed,
will show a significant deviation from the expected behavior.
Wallace's method is analyzed with respect to this deficiency
and simple modifications are proposed that lead to variates of
better quality. It is argued that more randomness (that is,
more uniform random numbers) is needed in the transformation
process to improve the quality of the numbers generated.
However, an implementation of the modified method still has
small deviations from the expected behavior, and its running
time is much higher than that of the original.
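The pool-to-pool idea rests on a linear-algebra fact: applying an orthogonal transformation to a vector of i.i.d. standard normals yields again i.i.d. standard normals. The NumPy sketch below illustrates only this mathematical ingredient, not Wallace's actual generator (whose reuse of the pool without fresh uniform randomness causes the correlation flaw analyzed in the report).

```python
import numpy as np

rng = np.random.default_rng(0)

# A 2x2 rotation is orthogonal: if (x, y) are i.i.d. N(0,1),
# then Q @ (x, y) is again a pair of i.i.d. N(0,1) variates.
theta = 0.7
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

pool = rng.standard_normal((2, 100_000))  # current pool of normal variates
new_pool = Q @ pool                       # transformed pool, still normal

# Each row of the new pool keeps mean ~0 and variance ~1.
print(new_pool.mean(axis=1), new_pool.var(axis=1))
```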
Export
BibTeX
@techreport{Rub98,
TITLE = {On Wallace's method for the generation of normal variates},
AUTHOR = {R{\"u}b, Christine},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1998-1-020},
NUMBER = {MPI-I-1998-1-020},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1998},
DATE = {1998},
ABSTRACT = {A method proposed by Wallace for the generation of normal random variates is examined. His method works by transforming a pool of numbers from the normal distribution into a new pool of numbers. This is in contrast to almost all other known methods that transform one or more variates from the uniform distribution into one or more variates from the normal distribution. Unfortunately, a direct implementation of Wallace's method has a serious flaw: if consecutive numbers produced by this method are added, the resulting variate, which should also be normally distributed, will show a significant deviation from the expected behavior. Wallace's method is analyzed with respect to this deficiency and simple modifications are proposed that lead to variates of better quality. It is argued that more randomness (that is, more uniform random numbers) is needed in the transformation process to improve the quality of the numbers generated. However, an implementation of the modified method still has small deviations from the expected behavior and its running time is much higher than that of the original.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Rüb, Christine
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T On Wallace's method for the generation of normal variates :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-7B9B-3
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1998-1-020
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1998
%P 17 p.
%X A method proposed by Wallace for the generation of normal random
variates is examined. His method works by transforming a pool
of numbers from the normal distribution into a new pool of numbers.
This is in contrast to almost all other known methods that transform one
or more variates from the uniform distribution into one or more
variates from the normal distribution. Unfortunately, a direct
implementation of Wallace's method has a serious flaw:
if consecutive numbers produced by this method are added, the
resulting variate, which should also be normally distributed,
will show a significant deviation from the expected behavior.
Wallace's method is analyzed with respect to this deficiency
and simple modifications are proposed that lead to variates of
better quality. It is argued that more randomness (that is,
more uniform random numbers) is needed in the transformation
process to improve the quality of the numbers generated.
However, an implementation of the modified method still has
small deviations from the expected behavior, and its running
time is much higher than that of the original.
%B Research Report / Max-Planck-Institut für Informatik
Robustness and precision issues in geometric computation
S. Schirra
Technical Report, 1998a
S. Schirra
Technical Report, 1998a
Abstract
This is a preliminary version of a chapter that will appear in the
{\em Handbook on Computational Geometry}, edited by J.R.~Sack and
J.~Urrutia.
We give a survey on techniques that have been proposed and successfully used
to attack robustness and precision problems in the implementation of geometric
algorithms.
Export
BibTeX
@techreport{Schirra98-1-004,
TITLE = {Robustness and precision issues in geometric computation},
AUTHOR = {Schirra, Stefan},
LANGUAGE = {eng},
NUMBER = {MPI-I-1998-1-004},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1998},
DATE = {1998},
ABSTRACT = {This is a preliminary version of a chapter that will appear in the {\em Handbook on Computational Geometry}, edited by J.R.~Sack and J.~Urrutia. We give a survey on techniques that have been proposed and successfully used to attack robustness and precision problems in the implementation of geometric algorithms.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Schirra, Stefan
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Robustness and precision issues in geometric computation :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-7BE8-8
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1998
%P 34 p.
%X This is a preliminary version of a chapter that will appear in the
{\em Handbook on Computational Geometry}, edited by J.R.~Sack and
J.~Urrutia.
We give a survey on techniques that have been proposed and successfully used
to attack robustness and precision problems in the implementation of geometric
algorithms.
%B Research Report / Max-Planck-Institut für Informatik
Parameterized implementations of classical planar convex hull algorithms and extreme point computations
S. Schirra
Technical Report, 1998b
S. Schirra
Technical Report, 1998b
Abstract
We present C{\tt ++}-implementations of some classical algorithms for
computing
extreme points of a set of points in two-dimensional space.
The template feature of C{\tt ++} is used to provide generic code that
works with various point types and various implementations of the primitives
used in the extreme point computation. The parameterization makes the code
flexible and adaptable. The code can be used with primitives provided by the
CGAL-kernel,
primitives provided by LEDA, and others. The interfaces of the convex
hull functions are compliant with the Standard Template Library.
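The parameterization idea (generic hull code that takes its geometric primitive as a parameter) can be sketched outside C++ as well. The following is a hypothetical Python analogue using Andrew's monotone chain, not the report's C++ code; the orientation predicate is passed in, so an exact or filtered implementation could be substituted without touching the hull logic.

```python
def ccw(p, q, r):
    """Default orientation primitive: sign of the cross product (q-p) x (r-p)."""
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

def convex_hull(points, orient=ccw):
    """Andrew's monotone chain; the orientation predicate is a parameter,
    so exact or filtered implementations can be plugged in unchanged."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def chain(seq):
        out = []
        for p in seq:
            while len(out) >= 2 and orient(out[-2], out[-1], p) <= 0:
                out.pop()
            out.append(p)
        return out
    lower, upper = chain(pts), chain(reversed(pts))
    return lower[:-1] + upper[:-1]  # counterclockwise, no repeated endpoint

print(convex_hull([(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)]))
# → [(0, 0), (2, 0), (2, 2), (0, 2)]
```

This mirrors the design choice in the report: the hull algorithm is written once against an abstract primitive, and robustness is obtained by swapping the predicate, not by rewriting the algorithm.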
Export
BibTeX
@techreport{Schirra1998-1-003,
TITLE = {Parameterized implementations of classical planar convex hull algorithms and extreme point computations},
AUTHOR = {Schirra, Stefan},
LANGUAGE = {eng},
NUMBER = {MPI-I-1998-1-003},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1998},
DATE = {1998},
ABSTRACT = {We present C{\tt ++}-implementations of some classical algorithms for computing extreme points of a set of points in two-dimensional space. The template feature of C{\tt ++} is used to provide generic code that works with various point types and various implementations of the primitives used in the extreme point computation. The parameterization makes the code flexible and adaptable. The code can be used with primitives provided by the CGAL-kernel, primitives provided by LEDA, and others. The interfaces of the convex hull functions are compliant with the Standard Template Library.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Schirra, Stefan
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Parameterized implementations of classical planar convex hull algorithms and extreme point computations :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-7BEB-2
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1998
%P 93 p.
%X We present C{\tt ++}-implementations of some classical algorithms for
computing
extreme points of a set of points in two-dimensional space.
The template feature of C{\tt ++} is used to provide generic code that
works with various point types and various implementations of the primitives
used in the extreme point computation. The parameterization makes the code
flexible and adaptable. The code can be used with primitives provided by the
CGAL-kernel,
primitives provided by LEDA, and others. The interfaces of the convex
hull functions are compliant with the Standard Template Library.
%B Research Report / Max-Planck-Institut für Informatik
Resolution-based Theorem Proving for SHn-Logics
V. Sofronie-Stokkermans
Technical Report, 1998
V. Sofronie-Stokkermans
Technical Report, 1998
Abstract
In this paper we illustrate by means of an example, namely SHn-logics,
a method for translation to clause form and automated theorem proving for
first-order many-valued logics based on distributive lattices with operators.
Export
BibTeX
@techreport{Sofronie1998b,
TITLE = {Resolution-based Theorem Proving for {SH}n-Logics},
AUTHOR = {Sofronie-Stokkermans, Viorica},
LANGUAGE = {eng},
NUMBER = {E1852-GS-981},
INSTITUTION = {Technische Universit{\"a}t Wien},
ADDRESS = {Vienna, Austria},
YEAR = {1998},
DATE = {1998},
ABSTRACT = {In this paper we illustrate by means of an example, namely SHn-logics, a method for translation to clause form and automated theorem proving for first-order many-valued logics based on distributive lattices with operators.},
}
Endnote
%0 Report
%A Sofronie-Stokkermans, Viorica
%+ Automation of Logic, MPI for Informatics, Max Planck Society
%T Resolution-based Theorem Proving for SHn-Logics :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-001A-21C5-8
%Y Technische Universität Wien
%C Vienna, Austria
%D 1998
%X In this paper we illustrate by means of an example, namely SHn-logics,
a method for translation to clause form and automated theorem proving for
first-order many-valued logics based on distributive lattices with operators.
2-Approximation algorithm for finding a spanning tree with maximum number of leaves
R. Solis-Oba
Technical Report, 1998
R. Solis-Oba
Technical Report, 1998
Abstract
We study the problem of finding a spanning tree with maximum number of leaves.
We present a simple 2-approximation algorithm for the problem, improving on the
previous best performance ratio of 3 achieved by algorithms of Ravi and Lu. Our
algorithm can be implemented to run in linear time using simple data structures.
We also study the variant of the problem in which a given subset of vertices are
required to be leaves in the tree. We provide a 5/2-approximation algorithm for
this version of the problem.
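To make the objective concrete: different spanning trees of the same graph can have very different leaf counts, which is exactly what an approximation algorithm for this problem must control. The sketch below is illustrative only (not the paper's 2-approximation); it compares BFS and DFS spanning trees of the complete graph $K_5$.

```python
from collections import deque

def bfs_tree(adj, root=0):
    """Edges of a breadth-first spanning tree."""
    seen, edges, queue = {root}, [], deque([root])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                edges.append((u, v))
                queue.append(v)
    return edges

def dfs_tree(adj, root=0):
    """Edges of a depth-first spanning tree (iterative)."""
    seen, edges, stack = {root}, [], [root]
    while stack:
        u = stack[-1]
        nxt = next((v for v in adj[u] if v not in seen), None)
        if nxt is None:
            stack.pop()
        else:
            seen.add(nxt)
            edges.append((u, nxt))
            stack.append(nxt)
    return edges

def leaf_count(n, edges):
    """Number of degree-1 vertices of a tree on n vertices."""
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return sum(d == 1 for d in deg)

# On K5, BFS from vertex 0 yields a star (4 leaves),
# while DFS yields a Hamiltonian path (2 leaves).
adj = {u: [v for v in range(5) if v != u] for u in range(5)}
print(leaf_count(5, bfs_tree(adj)), leaf_count(5, dfs_tree(adj)))
```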
Export
BibTeX
@techreport{Solis-Oba98,
TITLE = {2-Approximation algorithm for finding a spanning tree with maximum number of leaves},
AUTHOR = {Solis-Oba, Roberto},
LANGUAGE = {eng},
NUMBER = {MPI-I-1998-1-010},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1998},
DATE = {1998},
ABSTRACT = {We study the problem of finding a spanning tree with maximum number of leaves. We present a simple 2-approximation algorithm for the problem, improving on the previous best performance ratio of 3 achieved by algorithms of Ravi and Lu. Our algorithm can be implemented to run in linear time using simple data structures. We also study the variant of the problem in which a given subset of vertices are required to be leaves in the tree. We provide a 5/2-approximation algorithm for this version of the problem.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Solis-Oba, Roberto
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T 2-Approximation algorithm for finding a spanning tree with maximum number of leaves :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-7BD6-0
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1998
%P 16 p.
%X We study the problem of finding a spanning tree with maximum number of leaves.
We present a simple 2-approximation algorithm for the problem, improving on the
previous best performance ratio of 3 achieved by algorithms of Ravi and Lu. Our
algorithm can be implemented to run in linear time using simple data structures.
We also study the variant of the problem in which a given subset of vertices are
required to be leaves in the tree. We provide a 5/2-approximation algorithm for
this version of the problem.
%B Research Report / Max-Planck-Institut für Informatik
1997
Better bounds for online scheduling
S. Albers
Technical Report, 1997
S. Albers
Technical Report, 1997
Abstract
We study a classical problem in online scheduling. A sequence of jobs must be
scheduled on $m$ identical parallel machines. As each job arrives, its
processing time is known. The goal is to
minimize the makespan. Bartal, Fiat, Karloff and Vohra gave a
deterministic online algorithm that is 1.986-competitive.
Karger, Phillips and Torng generalized the
algorithm and proved an upper bound of 1.945. The best lower bound currently
known on the competitive ratio that can be
achieved by deterministic online algorithms
is equal to 1.837. In this paper we present an improved deterministic online
scheduling algorithm that is 1.923-competitive, for all $m\geq 2$.
The algorithm is based on a new scheduling strategy, i.e., it is not
a generalization of the approach by Bartal {\it et al}. Also, the algorithm
has a simple structure. Furthermore,
we develop a better lower bound. We prove that,
for general $m$, no deterministic online scheduling algorithm can be
better than \mbox{1.852-competitive}.
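For context, the classic baseline for this problem is Graham's list scheduling, which assigns each arriving job to the currently least-loaded machine and is $(2-\frac{1}{m})$-competitive. The sketch below implements that baseline, not the 1.923-competitive algorithm of the report.

```python
import heapq

def list_schedule(jobs, m):
    """Graham's list scheduling: put each arriving job on the currently
    least-loaded of m machines. Classic (2 - 1/m)-competitive baseline."""
    loads = [0.0] * m
    heap = [(0.0, i) for i in range(m)]  # (load, machine)
    for p in jobs:
        load, i = heapq.heappop(heap)
        loads[i] = load + p
        heapq.heappush(heap, (loads[i], i))
    return max(loads)

# Two machines, jobs 1, 1, 2: greedy makespan is 3 while the optimum is 2,
# matching the worst-case ratio 2 - 1/2 = 1.5.
print(list_schedule([1, 1, 2], 2))
```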
Export
BibTeX
@techreport{Albers97,
TITLE = {Better bounds for online scheduling},
AUTHOR = {Albers, Susanne},
LANGUAGE = {eng},
NUMBER = {MPI-I-1997-1-009},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1997},
DATE = {1997},
ABSTRACT = {We study a classical problem in online scheduling. A sequence of jobs must be scheduled on $m$ identical parallel machines. As each job arrives, its processing time is known. The goal is to minimize the makespan. Bartal, Fiat, Karloff and Vohra gave a deterministic online algorithm that is 1.986-competitive. Karger, Phillips and Torng generalized the algorithm and proved an upper bound of 1.945. The best lower bound currently known on the competitive ratio that can be achieved by deterministic online algorithms is equal to 1.837. In this paper we present an improved deterministic online scheduling algorithm that is 1.923-competitive, for all $m\geq 2$. The algorithm is based on a new scheduling strategy, i.e., it is not a generalization of the approach by Bartal {\it et al}. Also, the algorithm has a simple structure. Furthermore, we develop a better lower bound. We prove that, for general $m$, no deterministic online scheduling algorithm can be better than \mbox{1.852-competitive}.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Albers, Susanne
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Better bounds for online scheduling :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-9E1F-1
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1997
%P 16 p.
%X We study a classical problem in online scheduling. A sequence of jobs must be
scheduled on $m$ identical parallel machines. As each job arrives, its
processing time is known. The goal is to
minimize the makespan. Bartal, Fiat, Karloff and Vohra gave a
deterministic online algorithm that is 1.986-competitive.
Karger, Phillips and Torng generalized the
algorithm and proved an upper bound of 1.945. The best lower bound currently
known on the competitive ratio that can be
achieved by deterministic online algorithms
is equal to 1.837. In this paper we present an improved deterministic online
scheduling algorithm that is 1.923-competitive, for all $m\geq 2$.
The algorithm is based on a new scheduling strategy, i.e., it is not
a generalization of the approach by Bartal {\it et al}. Also, the algorithm
has a simple structure. Furthermore,
we develop a better lower bound. We prove that,
for general $m$, no deterministic online scheduling algorithm can be
better than \mbox{1.852-competitive}.
%B Research Report / Max-Planck-Institut für Informatik
Exploring unknown environments
S. Albers and M. R. Henzinger
Technical Report, 1997
S. Albers and M. R. Henzinger
Technical Report, 1997
Abstract
We consider exploration problems where a robot has to construct a
complete map of an unknown environment. We assume that the environment
is modeled by a directed,
strongly connected graph. The robot's task is to visit all nodes and
edges of the graph using the minimum number $R$ of edge traversals.
Koutsoupias~\cite{K} gave a lower bound for $R$ of $\Omega(d^2 m)$,
and Deng and Papadimitriou~\cite{DP}
showed an upper bound of $d^{O(d)} m$, where $m$
is the number of edges in the graph and $d$ is the minimum number of
edges that have to be added to make the graph Eulerian.
We give the first sub-exponential algorithm for this exploration
problem, which achieves an upper bound of
$d^{O(\log d)} m$. We also show a matching lower bound of
$d^{\Omega(\log d)}m$ for our algorithm. Additionally, we give lower
bounds of $2^{\Omega(d)}m$, resp.\ $d^{\Omega(\log d)}m$
for various other natural exploration algorithms.
Export
BibTeX
@techreport{AlbersHenzinger97,
TITLE = {Exploring unknown environments},
AUTHOR = {Albers, Susanne and Henzinger, Monika R.},
LANGUAGE = {eng},
NUMBER = {MPI-I-1997-1-017},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1997},
DATE = {1997},
ABSTRACT = {We consider exploration problems where a robot has to construct a complete map of an unknown environment. We assume that the environment is modeled by a directed, strongly connected graph. The robot's task is to visit all nodes and edges of the graph using the minimum number $R$ of edge traversals. Koutsoupias~\cite{K} gave a lower bound for $R$ of $\Omega(d^2 m)$, and Deng and Papadimitriou~\cite{DP} showed an upper bound of $d^{O(d)} m$, where $m$ is the number of edges in the graph and $d$ is the minimum number of edges that have to be added to make the graph Eulerian. We give the first sub-exponential algorithm for this exploration problem, which achieves an upper bound of $d^{O(\log d)} m$. We also show a matching lower bound of $d^{\Omega(\log d)}m$ for our algorithm. Additionally, we give lower bounds of $2^{\Omega(d)}m$, resp.\ $d^{\Omega(\log d)}m$ for various other natural exploration algorithms.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Albers, Susanne
%A Henzinger, Monika R.
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
External Organizations
%T Exploring unknown environments :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-9D82-5
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1997
%P 23 p.
%X We consider exploration problems where a robot has to construct a
complete map of an unknown environment. We assume that the environment
is modeled by a directed,
strongly connected graph. The robot's task is to visit all nodes and
edges of the graph using the minimum number $R$ of edge traversals.
Koutsoupias~\cite{K} gave a lower bound for $R$ of $\Omega(d^2 m)$,
and Deng and Papadimitriou~\cite{DP}
showed an upper bound of $d^{O(d)} m$, where $m$
is the number of edges in the graph and $d$ is the minimum number of
edges that have to be added to make the graph Eulerian.
We give the first sub-exponential algorithm for this exploration
problem, which achieves an upper bound of
$d^{O(\log d)} m$. We also show a matching lower bound of
$d^{\Omega(\log d)}m$ for our algorithm. Additionally, we give lower
bounds of $2^{\Omega(d)}m$, resp.\ $d^{\Omega(\log d)}m$
for various other natural exploration algorithms.
%B Research Report / Max-Planck-Institut für Informatik
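As a point of reference for the bounds above: when the graph is already Eulerian (d = 0), all m edges can be traversed in a single closed walk. The sketch below computes such a walk with Hierholzer's algorithm; the report's d^{O(log d)} m strategy for general d is substantially more involved and is not shown.

```python
def euler_circuit(adj, start):
    """Hierholzer's algorithm: closed walk using every edge of an
    Eulerian digraph exactly once. `adj` maps node -> out-neighbours."""
    out = {u: list(vs) for u, vs in adj.items()}  # remaining out-edges
    stack, circuit = [start], []
    while stack:
        u = stack[-1]
        if out.get(u):
            stack.append(out[u].pop())   # follow an unused out-edge
        else:
            circuit.append(stack.pop())  # retreat: u has no unused edges
    return circuit[::-1]
```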
AGD-Library: A Library of Algorithms for Graph Drawing
D. Alberts, C. Gutwenger, P. Mutzel and S. Näher
Technical Report, 1997
D. Alberts, C. Gutwenger, P. Mutzel and S. Näher
Technical Report, 1997
Abstract
A graph drawing algorithm produces a layout of a graph in two- or three-dimensional space that should be readable and easy to understand.
Since the aesthetic criteria differ from one application area to another,
it is unlikely that a definition of the ``optimal drawing'' of a graph in
a strict mathematical sense exists. A large number of graph drawing algorithms
taking different aesthetic criteria into account have already been proposed.
In this paper we describe the design and implementation of the AGD--Library,
a library of {\bf A}lgorithms for {\bf G}raph {\bf D}rawing. The library
offers a broad range of existing algorithms for two-dimensional graph drawing
and tools for implementing new algorithms. The library is written in \CC using
the LEDA platform for combinatorial and geometric computing
(\cite{Mehlhorn-Naeher:CACM,LEDA-Manual}).
The algorithms are implemented independently of the underlying visualization
or graphics system by using a generic layout interface.
Most graph drawing algorithms place a set of restrictions on the
input graphs like planarity or biconnectivity. We provide a mechanism
for declaring this precondition for a particular algorithm and
checking it for potential input graphs. A drawing model can be
characterized by a set of properties of the drawing. We call these properties
the postcondition of the algorithm. There is support
for maintaining and retrieving the postcondition of an algorithm.
Export
BibTeX
@techreport{AlbertsGutwengerMutzelNaher,
TITLE = {{AGD}-Library: A Library of Algorithms for Graph Drawing},
AUTHOR = {Alberts, David and Gutwenger, Carsten and Mutzel, Petra and N{\"a}her, Stefan},
LANGUAGE = {eng},
NUMBER = {MPI-I-1997-1-019},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1997},
DATE = {1997},
ABSTRACT = {A graph drawing algorithm produces a layout of a graph in two- or three-dimensional space that should be readable and easy to understand. Since the aesthetic criteria differ from one application area to another, it is unlikely that a definition of the ``optimal drawing'' of a graph in a strict mathematical sense exists. A large number of graph drawing algorithms taking different aesthetic criteria into account have already been proposed. In this paper we describe the design and implementation of the AGD--Library, a library of {\bf A}lgorithms for {\bf G}raph {\bf D}rawing. The library offers a broad range of existing algorithms for two-dimensional graph drawing and tools for implementing new algorithms. The library is written in \CC using the LEDA platform for combinatorial and geometric computing (\cite{Mehlhorn-Naeher:CACM,LEDA-Manual}). The algorithms are implemented independently of the underlying visualization or graphics system by using a generic layout interface. Most graph drawing algorithms place a set of restrictions on the input graphs like planarity or biconnectivity. We provide a mechanism for declaring this precondition for a particular algorithm and checking it for potential input graphs. A drawing model can be characterized by a set of properties of the drawing. We call these properties the postcondition of the algorithm. There is support for maintaining and retrieving the postcondition of an algorithm.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Alberts, David
%A Gutwenger, Carsten
%A Mutzel, Petra
%A Näher, Stefan
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T AGD-Library: A Library of Algorithms for Graph Drawing :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-9D7C-6
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1997
%P 13 p.
%X A graph drawing algorithm produces a layout of a graph in two- or three-dimensional space that should be readable and easy to understand.
Since the aesthetic criteria differ from one application area to another,
it is unlikely that a definition of the ``optimal drawing'' of a graph in
a strict mathematical sense exists. A large number of graph drawing algorithms
taking different aesthetic criteria into account have already been proposed.
In this paper we describe the design and implementation of the AGD--Library,
a library of {\bf A}lgorithms for {\bf G}raph {\bf D}rawing. The library
offers a broad range of existing algorithms for two-dimensional graph drawing
and tools for implementing new algorithms. The library is written in \CC using
the LEDA platform for combinatorial and geometric computing
(\cite{Mehlhorn-Naeher:CACM,LEDA-Manual}).
The algorithms are implemented independently of the underlying visualization
or graphics system by using a generic layout interface.
Most graph drawing algorithms place a set of restrictions on the
input graphs like planarity or biconnectivity. We provide a mechanism
for declaring this precondition for a particular algorithm and
checking it for potential input graphs. A drawing model can be
characterized by a set of properties of the drawing. We call these properties
the postcondition of the algorithm. There is support
for maintaining and retrieving the postcondition of an algorithm.
%B Research Report / Max-Planck-Institut für Informatik
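The precondition/postcondition mechanism described above can be sketched as follows. This is a hypothetical Python rendering of the idea only: the actual AGD-Library is C++ on top of LEDA, and every class, function, and property name below is invented for illustration.

```python
# Hypothetical sketch of AGD's mechanism: each layout algorithm declares
# a checkable precondition on input graphs and a set of postcondition
# properties describing the drawings it produces.

class LayoutAlgorithm:
    precondition = staticmethod(lambda graph: True)  # e.g. planarity test
    postcondition = frozenset()                      # drawing properties

    def check(self, graph):
        """Verify the declared precondition before running the layout."""
        return self.precondition(graph)

def is_connected(graph):  # graph: dict node -> set of neighbours
    if not graph:
        return True
    seen, todo = set(), [next(iter(graph))]
    while todo:
        u = todo.pop()
        if u not in seen:
            seen.add(u)
            todo.extend(graph[u])
    return seen == set(graph)

class TreeLayout(LayoutAlgorithm):
    # precondition: connected with n-1 edges, i.e. a tree
    precondition = staticmethod(
        lambda g: is_connected(g) and sum(map(len, g.values())) == 2 * (len(g) - 1))
    postcondition = frozenset({"straight-line", "no-crossings"})
```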
Maximum network flow with floating point arithmetic
E. Althaus and K. Mehlhorn
Technical Report, 1997
E. Althaus and K. Mehlhorn
Technical Report, 1997
Abstract
We discuss the implementation of network flow algorithms in floating point
arithmetic. We give an example to illustrate the difficulties that may arise
when floating point arithmetic is used without care. We describe an iterative
improvement scheme that can be put around any network flow algorithm for
integer capacities. The scheme carefully scales the capacities such that all
integers arising can be handled exactly using floating point arithmetic.
For $m \le 10^9$ and double precision floating
point arithmetic the number of iterations is always bounded by three and the
relative error in the flow value is at most $2^{-19}$. For $m \le 10^6$ and
double precision arithmetic the relative error after the first iteration is
bounded by $10^{-3}$.
Export
BibTeX
@techreport{AlthausMehlhorn97,
TITLE = {Maximum network flow with floating point arithmetic},
AUTHOR = {Althaus, Ernst and Mehlhorn, Kurt},
LANGUAGE = {eng},
NUMBER = {MPI-I-1997-1-022},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1997},
DATE = {1997},
ABSTRACT = {We discuss the implementation of network flow algorithms in floating point arithmetic. We give an example to illustrate the difficulties that may arise when floating point arithmetic is used without care. We describe an iterative improvement scheme that can be put around any network flow algorithm for integer capacities. The scheme carefully scales the capacities such that all integers arising can be handled exactly using floating point arithmetic. For $m \le 10^9$ and double precision floating point arithmetic the number of iterations is always bounded by three and the relative error in the flow value is at most $2^{-19}$. For $m \le 10^6$ and double precision arithmetic the relative error after the first iteration is bounded by $10^{-3}$.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Althaus, Ernst
%A Mehlhorn, Kurt
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Maximum network flow with floating point arithmetic :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-9D72-9
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1997
%P 5 p.
%X We discuss the implementation of network flow algorithms in floating point
arithmetic. We give an example to illustrate the difficulties that may arise
when floating point arithmetic is used without care. We describe an iterative
improvement scheme that can be put around any network flow algorithm for
integer capacities. The scheme carefully scales the capacities such that all
integers arising can be handled exactly using floating point arithmetic.
For $m \le 10^9$ and double precision floating
point arithmetic the number of iterations is always bounded by three and the
relative error in the flow value is at most $2^{-19}$. For $m \le 10^6$ and
double precision arithmetic the relative error after the first iteration is
bounded by $10^{-3}$.
%B Research Report / Max-Planck-Institut für Informatik
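The scaling idea can be sketched as follows: round every capacity down to a multiple of a power of two chosen so that all flow values that can arise stay within the double-precision mantissa and are therefore computed exactly. This is an illustrative reconstruction of the general idea, not the report's exact scheme or its error bounds.

```python
import math

def scale_capacities(caps, m, mantissa_bits=53):
    """Round capacities down to multiples of 2**t, with t chosen so
    that any sum of at most m scaled capacities stays within
    2**mantissa_bits and is thus represented exactly as a double."""
    cmax = max(caps)
    t = max(0, math.ceil(math.log2(m * cmax)) - mantissa_bits)
    unit = 2.0 ** t
    scaled = [math.floor(c / unit) * unit for c in caps]
    return scaled, unit
```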
Algorithmen zum automatischen Zeichnen von Graphen
F. J. Brandenburg, M. Jünger and P. Mutzel
Technical Report, 1997
F. J. Brandenburg, M. Jünger and P. Mutzel
Technical Report, 1997
Abstract
Graph drawing is a young and flourishing field of computer science. It is concerned with the
design, analysis, implementation, and evaluation of new algorithms for aesthetically pleasing
drawings of graphs.
Using selected application examples, problem statements, and solution approaches, we introduce
this still relatively unknown field and, at the same time, give an overview of the activities
and goals of a working group funded by the DFG within the priority programme "Effiziente
Algorithmen für Diskrete Probleme und ihre Anwendungen", with members from the universities of
Halle, Köln, and Passau and the Max-Planck-Institut für Informatik in Saarbrücken.
Export
BibTeX
@techreport{BrandenburgJuengerMutzel97,
TITLE = {{Algorithmen zum automatischen Zeichnen von Graphen}},
AUTHOR = {Brandenburg, Franz J. and J{\"u}nger, Michael and Mutzel, Petra},
LANGUAGE = {deu},
NUMBER = {MPI-I-1997-1-007},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1997},
DATE = {1997},
ABSTRACT = {Das Zeichnen von Graphen ist ein junges aufbl{\"u}hendes Gebiet der Informatik. Es befasst sich mit Entwurf, Analyse, Implementierung und Evaluierung von neuen Algorithmen f{\"u}r {\"a}sthetisch sch{\"o}ne Zeichnungen von Graphen. Anhand von selektierten Anwendungsbeispielen, Problemstellungen und L{\"o}sungsans{\"a}tzen wollen wir in dieses noch relativ unbekannte Gebiet einf{\"u}hren und gleichzeitig einen {\"U}berblick {\"u}ber die Aktivit{\"a}ten und Ziele einer von der DFG im Rahmen des Schwerpunktprogramms "`Effiziente Algorithmen f{\"u}r Diskrete Probleme und ihre Anwendungen"' gef{\"o}rderten Arbeitsgruppe aus Mitgliedern der Universit{\"a}ten Halle, K{\"o}ln und Passau und des Max-Planck-Instituts f{\"u}r Informatik in Saarbr{\"u}cken geben.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Brandenburg, Franz J.
%A Jünger, Michael
%A Mutzel, Petra
%+ External Organizations
External Organizations
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Algorithmen zum automatischen Zeichnen von Graphen :
%G deu
%U http://hdl.handle.net/11858/00-001M-0000-0014-9F6D-7
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1997
%P 9 S.
%X Graph drawing is a young and flourishing field of computer science. It is concerned with the
design, analysis, implementation, and evaluation of new algorithms for aesthetically pleasing
drawings of graphs.
Using selected application examples, problem statements, and solution approaches, we introduce
this still relatively unknown field and, at the same time, give an overview of the activities
and goals of a working group funded by the DFG within the priority programme "Effiziente
Algorithmen für Diskrete Probleme und ihre Anwendungen", with members from the universities of
Halle, Köln, and Passau and the Max-Planck-Institut für Informatik in Saarbrücken.
%B Research Report / Max-Planck-Institut für Informatik
A parallel priority queue with constant time operations
G. S. Brodal, J. L. Träff and C. Zaroliagis
Technical Report, 1997
G. S. Brodal, J. L. Träff and C. Zaroliagis
Technical Report, 1997
Abstract
We present a parallel priority queue that supports the following
operations in constant time: {\em parallel insertion\/} of a sequence of
elements ordered according to key,
{\em parallel decrease key\/} for a sequence of elements ordered
according to key, {\em deletion of the minimum key element},
as well as {\em deletion of an arbitrary element}. Our data structure is
the first to
support multi insertion and multi decrease key in constant time. The
priority queue can be implemented on the EREW PRAM, and can perform any
sequence of $n$ operations in $O(n)$ time and $O(m\log n)$ work,
$m$ being the total number of keys inserted and/or updated. A main
application is a parallel implementation of Dijkstra's algorithm for the
single-source shortest path problem, which runs in $O(n)$ time and
$O(m\log n)$ work on a CREW PRAM on graphs with $n$ vertices and $m$
edges. This is a logarithmic factor improvement in the running time
compared with previous approaches.
Export
BibTeX
@techreport{BrodalTraffZaroliagis97,
TITLE = {A parallel priority queue with constant time operations},
AUTHOR = {Brodal, Gerth St{\o}lting and Tr{\"a}ff, Jesper Larsson and Zaroliagis, Christos},
LANGUAGE = {eng},
NUMBER = {MPI-I-1997-1-011},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1997},
DATE = {1997},
ABSTRACT = {We present a parallel priority queue that supports the following operations in constant time: {\em parallel insertion\/} of a sequence of elements ordered according to key, {\em parallel decrease key\/} for a sequence of elements ordered according to key, {\em deletion of the minimum key element}, as well as {\em deletion of an arbitrary element}. Our data structure is the first to support multi insertion and multi decrease key in constant time. The priority queue can be implemented on the EREW PRAM, and can perform any sequence of $n$ operations in $O(n)$ time and $O(m\log n)$ work, $m$ being the total number of keys inserted and/or updated. A main application is a parallel implementation of Dijkstra's algorithm for the single-source shortest path problem, which runs in $O(n)$ time and $O(m\log n)$ work on a CREW PRAM on graphs with $n$ vertices and $m$ edges. This is a logarithmic factor improvement in the running time compared with previous approaches.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Brodal, Gerth Stølting
%A Träff, Jesper Larsson
%A Zaroliagis, Christos
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T A parallel priority queue with constant time operations :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-9E19-D
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1997
%P 19 p.
%X We present a parallel priority queue that supports the following
operations in constant time: {\em parallel insertion\/} of a sequence of
elements ordered according to key,
{\em parallel decrease key\/} for a sequence of elements ordered
according to key, {\em deletion of the minimum key element},
as well as {\em deletion of an arbitrary element}. Our data structure is
the first to
support multi insertion and multi decrease key in constant time. The
priority queue can be implemented on the EREW PRAM, and can perform any
sequence of $n$ operations in $O(n)$ time and $O(m\log n)$ work,
$m$ being the total number of keys inserted and/or updated. A main
application is a parallel implementation of Dijkstra's algorithm for the
single-source shortest path problem, which runs in $O(n)$ time and
$O(m\log n)$ work on a CREW PRAM on graphs with $n$ vertices and $m$
edges. This is a logarithmic factor improvement in the running time
compared with previous approaches.
%B Research Report / Max-Planck-Institut für Informatik
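The single-source shortest path application can be illustrated with the familiar sequential form of Dijkstra's algorithm; in the report, the parallel priority queue takes the role played by the binary heap below (where lazy deletion of stale entries stands in for decrease-key).

```python
import heapq

def dijkstra(adj, s):
    """Sequential Dijkstra with a binary heap. `adj` maps a node to a
    list of (neighbour, weight) pairs; returns shortest distances."""
    dist = {s: 0}
    pq = [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale entry (lazy decrease-key)
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist
```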
Finger search trees with constant insertion time
G. S. Brodal
Technical Report, 1997
G. S. Brodal
Technical Report, 1997
Abstract
We consider the problem of implementing finger search trees on the
pointer machine, {\it i.e.}, how to maintain a sorted list such that
searching for an element $x$, starting the search at any arbitrary
element $f$ in the list, only requires logarithmic time in the
distance between $x$ and $f$ in the list.
We present the first pointer-based implementation of finger search
trees allowing new elements to be inserted at any arbitrary position
in the list in worst case constant time. Previously, the best known
insertion time on the pointer machine was $O(\log^* n)$, where $n$
is the total length of the list. On a unit-cost RAM, a constant
insertion time has been achieved by Dietz and Raman by using
standard techniques of packing small problem sizes into a constant
number of machine words.
Deletion of a list element is supported in $O(\log^* n)$ time, which
matches the previous best bounds. Our data structure requires linear
space.
Export
BibTeX
@techreport{Brodal97,
TITLE = {Finger search trees with constant insertion time},
AUTHOR = {Brodal, Gerth St{\o}lting},
LANGUAGE = {eng},
NUMBER = {MPI-I-1997-1-020},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1997},
DATE = {1997},
ABSTRACT = {We consider the problem of implementing finger search trees on the pointer machine, {\it i.e.}, how to maintain a sorted list such that searching for an element $x$, starting the search at any arbitrary element $f$ in the list, only requires logarithmic time in the distance between $x$ and $f$ in the list. We present the first pointer-based implementation of finger search trees allowing new elements to be inserted at any arbitrary position in the list in worst case constant time. Previously, the best known insertion time on the pointer machine was $O(\log^* n)$, where $n$ is the total length of the list. On a unit-cost RAM, a constant insertion time has been achieved by Dietz and Raman by using standard techniques of packing small problem sizes into a constant number of machine words. Deletion of a list element is supported in $O(\log^* n)$ time, which matches the previous best bounds. Our data structure requires linear space.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Brodal, Gerth Stølting
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Finger search trees with constant insertion time :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-9D79-C
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1997
%P 17 p.
%X We consider the problem of implementing finger search trees on the
pointer machine, {\it i.e.}, how to maintain a sorted list such that
searching for an element $x$, starting the search at any arbitrary
element $f$ in the list, only requires logarithmic time in the
distance between $x$ and $f$ in the list.
We present the first pointer-based implementation of finger search
trees allowing new elements to be inserted at any arbitrary position
in the list in worst case constant time. Previously, the best known
insertion time on the pointer machine was $O(\log^* n)$, where $n$
is the total length of the list. On a unit-cost RAM, a constant
insertion time has been achieved by Dietz and Raman by using
standard techniques of packing small problem sizes into a constant
number of machine words.
Deletion of a list element is supported in $O(\log^* n)$ time, which
matches the previous best bounds. Our data structure requires linear
space.
%B Research Report / Max-Planck-Institut für Informatik
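A flat-array analogue conveys the finger-search cost model: starting from a finger at index f, exponential ("galloping") stepping followed by binary search locates x in O(log d) comparisons, where d is the distance from the finger. The report's pointer-machine trees, and their worst-case constant insertion, are not reproduced here.

```python
from bisect import bisect_left

def finger_search(a, f, x):
    """Insertion index of x in sorted list `a`, starting from finger
    index f; cost is O(log d), where d = |result - f|."""
    n = len(a)
    if x >= a[f]:                      # gallop to the right
        step = 1
        while f + step < n and a[f + step] < x:
            step *= 2
        hi = min(n, f + step + 1)
        return f + bisect_left(a[f:hi], x)
    step = 1                           # gallop to the left
    while f - step >= 0 and a[f - step] >= x:
        step *= 2
    lo = max(0, f - step)
    return lo + bisect_left(a[lo:f], x)
```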
Restricted 2-factor polytopes
W. H. Cunningham and Y. Wang
Technical Report, 1997
W. H. Cunningham and Y. Wang
Technical Report, 1997
Abstract
The optimal $k$-restricted 2-factor problem consists of finding,
in a complete undirected graph $K_n$, a minimum cost 2-factor
(subgraph having degree 2 at every node) with all components having more
than $k$ nodes.
The problem is a relaxation of the well-known symmetric travelling
salesman problem, and is equivalent to it when $\frac{n}{2}\leq k\leq n-1$.
We study the $k$-restricted 2-factor polytope. We present a large
class of valid inequalities, called bipartition
inequalities, and describe some of their properties; some of
these results are new even for the travelling salesman polytope.
For the case $k=3$, the triangle-free 2-factor polytope,
we derive a necessary and sufficient condition for such inequalities
to be facet inducing.
Export
BibTeX
@techreport{CunninghamWang97,
TITLE = {Restricted 2-factor polytopes},
AUTHOR = {Cunningham, William H. and Wang, Yaoguang},
LANGUAGE = {eng},
NUMBER = {MPI-I-1997-1-006},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1997},
DATE = {1997},
ABSTRACT = {The optimal $k$-restricted 2-factor problem consists of finding, in a complete undirected graph $K_n$, a minimum cost 2-factor (subgraph having degree 2 at every node) with all components having more than $k$ nodes. The problem is a relaxation of the well-known symmetric travelling salesman problem, and is equivalent to it when $\frac{n}{2}\leq k\leq n-1$. We study the $k$-restricted 2-factor polytope. We present a large class of valid inequalities, called bipartition inequalities, and describe some of their properties; some of these results are new even for the travelling salesman polytope. For the case $k=3$, the triangle-free 2-factor polytope, we derive a necessary and sufficient condition for such inequalities to be facet inducing.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Cunningham, William H.
%A Wang, Yaoguang
%+ External Organizations
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Restricted 2-factor polytopes :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-9F73-8
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1997
%P 30 p.
%X The optimal $k$-restricted 2-factor problem consists of finding,
in a complete undirected graph $K_n$, a minimum cost 2-factor
(subgraph having degree 2 at every node) with all components having more
than $k$ nodes.
The problem is a relaxation of the well-known symmetric travelling
salesman problem, and is equivalent to it when $\frac{n}{2}\leq k\leq n-1$.
We study the $k$-restricted 2-factor polytope. We present a large
class of valid inequalities, called bipartition
inequalities, and describe some of their properties; some of
these results are new even for the travelling salesman polytope.
For the case $k=3$, the triangle-free 2-factor polytope,
we derive a necessary and sufficient condition for such inequalities
to be facet inducing.
%B Research Report / Max-Planck-Institut für Informatik
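For orientation, the underlying 2-factor polytope is the convex hull of edge incidence vectors satisfying the standard degree and bound constraints; the k-restricted version additionally forbids short cycles. A minimal sketch of this base system (the bipartition inequalities themselves are not restated here):

```latex
% 2-factor polytope of $K_n$: convex hull of $x \in \{0,1\}^{E}$ with
\begin{align*}
  x(\delta(v)) &= 2  && \text{for every node } v,\\
  0 \le x_e    &\le 1 && \text{for every edge } e.
\end{align*}
% $k$-restricted: additionally exclude 2-factors containing a cycle
% on at most $k$ nodes (e.g., $k=3$ forbids triangles).
```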
On-line network routing - a survey
A. Fiat and S. Leonardi
Technical Report, 1997
A. Fiat and S. Leonardi
Technical Report, 1997
Export
BibTeX
@techreport{fiatLeonardi97,
TITLE = {On-line network routing -- a survey},
AUTHOR = {Fiat, Amos and Leonardi, Stefano},
LANGUAGE = {eng},
NUMBER = {MPI-I-97-1-026},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1997},
DATE = {1997},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Fiat, Amos
%A Leonardi, Stefano
%+ External Organizations
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T On-line network routing - a survey :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-9CD2-A
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1997
%P 19 p.
%B Research Report / Max-Planck-Institut für Informatik
On the Bahncard problem
R. Fleischer
Technical Report, 1997
R. Fleischer
Technical Report, 1997
Abstract
In this paper, we generalize the {\em Ski-Rental Problem}
to the {\em Bahncardproblem} which is an online problem of
practical relevance for all travelers.
The Bahncard is a railway pass of the Deutsche Bundesbahn (the German
railway company) which entitles its holder to a 50\%\ price
reduction on nearly all train tickets.
It costs 240\thinspace DM, and it is valid for 12 months.
For the common traveler, the decision at which time to buy
a Bahncard is a typical online problem, because she usually does
not know when and where she will travel next.
We show that the greedy algorithm applied by most travelers
and clerks at ticket offices is not better in the worst case
than the trivial algorithm which never buys a Bahncard.
We present two optimal deterministic online algorithms,
an optimistic one and a pessimistic one.
We further give a lower bound for randomized online algorithms
and present an algorithm which we conjecture to be optimal;
a proof of the conjecture is given for a special case of the problem.
It turns out that the optimal competitive ratio only depends on
the price reduction factor (50\%\ for the German Bahncardproblem),
but does not depend on the price or validity period of a Bahncard.
Export
BibTeX
@techreport{Fleischer97,
TITLE = {On the Bahncard problem},
AUTHOR = {Fleischer, Rudolf},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1997-1-018},
NUMBER = {MPI-I-1997-1-018},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1997},
DATE = {1997},
ABSTRACT = {In this paper, we generalize the {\em Ski-Rental Problem} to the {\em Bahncardproblem} which is an online problem of practical relevance for all travelers. The Bahncard is a railway pass of the Deutsche Bundesbahn (the German railway company) which entitles its holder to a 50\%\ price reduction on nearly all train tickets. It costs 240\thinspace DM, and it is valid for 12 months. For the common traveler, the decision at which time to buy a Bahncard is a typical online problem, because she usually does not know when and where she will travel next. We show that the greedy algorithm applied by most travelers and clerks at ticket offices is not better in the worst case than the trivial algorithm which never buys a Bahncard. We present two optimal deterministic online algorithms, an optimistic one and a pessimistic one. We further give a lower bound for randomized online algorithms and present an algorithm which we conjecture to be optimal; a proof of the conjecture is given for a special case of the problem. It turns out that the optimal competitive ratio only depends on the price reduction factor (50\%\ for the German Bahncardproblem), but does not depend on the price or validity period of a Bahncard.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Fleischer, Rudolf
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T On the Bahncard problem :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-9D7F-F
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1997-1-018
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1997
%P 16 p.
%X In this paper, we generalize the {\em Ski-Rental Problem}
to the {\em Bahncardproblem} which is an online problem of
practical relevance for all travelers.
The Bahncard is a railway pass of the Deutsche Bundesbahn (the German
railway company) which entitles its holder to a 50\%\ price
reduction on nearly all train tickets.
It costs 240\thinspace DM, and it is valid for 12 months.
For the common traveler, the decision at which time to buy
a Bahncard is a typical online problem, because she usually does
not know when and where she will travel next.
We show that the greedy algorithm applied by most travelers
and clerks at ticket offices is not better in the worst case
than the trivial algorithm which never buys a Bahncard.
We present two optimal deterministic online algorithms,
an optimistic one and a pessimistic one.
We further give a lower bound for randomized online algorithms
and present an algorithm which we conjecture to be optimal;
a proof of the conjecture is given for a special case of the problem.
It turns out that the optimal competitive ratio only depends on
the price reduction factor (50\%\ for the German Bahncardproblem),
but does not depend on the price or validity period of a Bahncard.
%B Research Report / Max-Planck-Institut für Informatik
Faster and simpler algorithms for multicommodity flow and other fractional packing problems
N. Garg and J. Könemann
Technical Report, 1997
N. Garg and J. Könemann
Technical Report, 1997
Abstract
This paper considers the problem of designing fast, approximate,
combinatorial algorithms for multicommodity flows and other fractional
packing problems. We provide a different approach to these problems
which yields faster and much simpler algorithms. In particular we
provide the first polynomial-time, combinatorial approximation algorithm
for
the fractional packing problem; in fact the running time of our
algorithm is
strongly polynomial. Our approach also allows us to substitute
shortest path computations for min-cost flow computations in computing
maximum concurrent flow and min-cost multicommodity flow; this yields
much
faster algorithms when the number of commodities is large.
Export
BibTeX
@techreport{GargKoenemann97,
TITLE = {Faster and simpler algorithms for multicommodity flow and other fractional packing problems},
AUTHOR = {Garg, Naveen and K{\"o}nemann, Jochen},
LANGUAGE = {eng},
NUMBER = {MPI-I-97-1-025},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1997},
DATE = {1997},
ABSTRACT = {This paper considers the problem of designing fast, approximate, combinatorial algorithms for multicommodity flows and other fractional packing problems. We provide a different approach to these problems which yields faster and much simpler algorithms. In particular we provide the first polynomial-time, combinatorial approximation algorithm for the fractional packing problem; in fact the running time of our algorithm is strongly polynomial. Our approach also allows us to substitute shortest path computations for min-cost flow computations in computing maximum concurrent flow and min-cost multicommodity flow; this yields much faster algorithms when the number of commodities is large.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Garg, Naveen
%A Könemann, Jochen
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Faster and simpler algorithms for multicommodity flow and other fractional packing problems :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-9CD9-B
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1997
%P 13 p.
%X This paper considers the problem of designing fast, approximate,
combinatorial algorithms for multicommodity flows and other fractional
packing problems. We provide a different approach to these problems
which yields faster and much simpler algorithms. In particular we
provide the first polynomial-time, combinatorial approximation algorithm
for
the fractional packing problem; in fact the running time of our
algorithm is
strongly polynomial. Our approach also allows us to substitute
shortest path computations for min-cost flow computations in computing
maximum concurrent flow and min-cost multicommodity flow; this yields
much
faster algorithms when the number of commodities is large.
%B Research Report / Max-Planck-Institut für Informatik
A polylogarithmic approximation algorithm for group Steiner tree problem
N. Garg, G. Konjevod and R. Ravi
Technical Report, 1997
N. Garg, G. Konjevod and R. Ravi
Technical Report, 1997
Abstract
The group Steiner tree problem is a generalization of the
Steiner tree problem where we are given several subsets (groups) of
vertices in
a weighted graph,
and the goal is to find a minimum-weight connected subgraph containing
at least one vertex from each group. The problem was introduced by
Reich and Widmayer and finds applications in VLSI design.
The group Steiner tree problem generalizes the set covering
problem, and is therefore at least as hard.
We give a randomized $O(\log^3 n \log k)$-approximation
algorithm for the group Steiner tree problem on an $n$-node graph, where
$k$ is the number of groups. The best previous performance guarantee was
$(1+\frac{\ln k}{2})\sqrt{k}$ (Bateman, Helvig, Robins and Zelikovsky).
Noting that the group Steiner problem also models the
network design problems with location-theoretic constraints studied by
Marathe, Ravi and Sundaram, our results also improve their bicriteria
approximation results. Similarly, we improve previous results by
Slav{\'\i}k on a tour version, called the errand scheduling problem.
We use the result of Bartal on probabilistic approximation of finite
metric spaces by tree metrics
to reduce the problem to one in a tree metric. To find a solution on a
tree,
we use a generalization of randomized rounding. Our approximation
guarantees
improve to $O(\log^2 n \log k)$ in the case of graphs that exclude
small minors by using a better alternative to Bartal's result on
probabilistic approximations of metrics induced by such graphs
(Konjevod, Ravi and Salman) -- this improvement is valid for the group
Steiner problem on planar graphs as well as on a set of points in the
2D-Euclidean case.
Export
BibTeX
@techreport{GargKonjevodRavi97,
TITLE = {A polylogarithmic approximation algorithm for group Steiner tree problem},
AUTHOR = {Garg, Naveen and Konjevod, Goran and Ravi, R.},
LANGUAGE = {eng},
NUMBER = {MPI-I-97-1-027},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1997},
DATE = {1997},
ABSTRACT = {The group Steiner tree problem is a generalization of the Steiner tree problem where we are given several subsets (groups) of vertices in a weighted graph, and the goal is to find a minimum-weight connected subgraph containing at least one vertex from each group. The problem was introduced by Reich and Widmayer and finds applications in VLSI design. The group Steiner tree problem generalizes the set covering problem, and is therefore at least as hard. We give a randomized $O(\log^3 n \log k)$-approximation algorithm for the group Steiner tree problem on an $n$-node graph, where $k$ is the number of groups. The best previous performance guarantee was $(1+\frac{\ln k}{2})\sqrt{k}$ (Bateman, Helvig, Robins and Zelikovsky). Noting that the group Steiner problem also models the network design problems with location-theoretic constraints studied by Marathe, Ravi and Sundaram, our results also improve their bicriteria approximation results. Similarly, we improve previous results by Slav{\'\i}k on a tour version, called the errand scheduling problem. We use the result of Bartal on probabilistic approximation of finite metric spaces by tree metrics to reduce the problem to one in a tree metric. To find a solution on a tree, we use a generalization of randomized rounding. Our approximation guarantees improve to $O(\log^2 n \log k)$ in the case of graphs that exclude small minors by using a better alternative to Bartal's result on probabilistic approximations of metrics induced by such graphs (Konjevod, Ravi and Salman) -- this improvement is valid for the group Steiner problem on planar graphs as well as on a set of points in the 2D-Euclidean case.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Garg, Naveen
%A Konjevod, Goran
%A Ravi, R.
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
External Organizations
External Organizations
%T A polylogarithmic approximation algorithm for group Steiner tree problem :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-9CCF-3
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1997
%P 7 p.
%X The group Steiner tree problem is a generalization of the
Steiner tree problem where we are given several subsets (groups) of
vertices in
a weighted graph,
and the goal is to find a minimum-weight connected subgraph containing
at least one vertex from each group. The problem was introduced by
Reich and Widmayer and finds applications in VLSI design.
The group Steiner tree problem generalizes the set covering
problem, and is therefore at least as hard.
We give a randomized $O(\log^3 n \log k)$-approximation
algorithm for the group Steiner tree problem on an $n$-node graph, where
$k$ is the number of groups. The best previous performance guarantee was
$(1+\frac{\ln k}{2})\sqrt{k}$ (Bateman, Helvig, Robins and Zelikovsky).
Noting that the group Steiner problem also models the
network design problems with location-theoretic constraints studied by
Marathe, Ravi and Sundaram, our results also improve their bicriteria
approximation results. Similarly, we improve previous results by
Slav{\'\i}k on a tour version, called the errand scheduling problem.
We use the result of Bartal on probabilistic approximation of finite
metric spaces by tree metrics
to reduce the problem to one in a tree metric. To find a solution on a
tree,
we use a generalization of randomized rounding. Our approximation
guarantees
improve to $O(\log^2 n \log k)$ in the case of graphs that exclude
small minors by using a better alternative to Bartal's result on
probabilistic approximations of metrics induced by such graphs
(Konjevod, Ravi and Salman) -- this improvement is valid for the group
Steiner problem on planar graphs as well as on a set of points in the
2D-Euclidean case.
%B Research Report / Max-Planck-Institut für Informatik
Evaluating a 2-approximation algorithm for edge-separators in planar graphs
N. Garg and C. Manss
Technical Report, 1997
N. Garg and C. Manss
Technical Report, 1997
Abstract
In this paper we report on results obtained by an implementation of a
2-approximation algorithm for edge separators in planar
graphs. For 374 out of the 435 instances the algorithm returned the optimum
solution. For the remaining instances the solution returned was never more
than 10.6\% away from the lower bound on the optimum separator. We also
improve the worst-case running time of the algorithm from $O(n^6)$ to $O(n^5)$
and present techniques which improve the running time significantly in
practice.
Export
BibTeX
@techreport{GargManss97,
TITLE = {Evaluating a 2-approximation algorithm for edge-separators in planar graphs},
AUTHOR = {Garg, Naveen and Manss, Christian},
LANGUAGE = {eng},
NUMBER = {MPI-I-1997-1-010},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1997},
DATE = {1997},
ABSTRACT = {In this paper we report on results obtained by an implementation of a 2-approximation algorithm for edge separators in planar graphs. For 374 out of the 435 instances the algorithm returned the optimum solution. For the remaining instances the solution returned was never more than 10.6\% away from the lower bound on the optimum separator. We also improve the worst-case running time of the algorithm from $O(n^6)$ to $O(n^5)$ and present techniques which improve the running time significantly in practice.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Garg, Naveen
%A Manss, Christian
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Evaluating a 2-approximation algorithm for edge-separators in planar graphs :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-9E1C-7
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1997
%P 9 p.
%X In this paper we report on results obtained by an implementation of a
2-approximation algorithm for edge separators in planar
graphs. For 374 out of the 435 instances the algorithm returned the optimum
solution. For the remaining instances the solution returned was never more
than 10.6\% away from the lower bound on the optimum separator. We also
improve the worst-case running time of the algorithm from $O(n^6)$ to $O(n^5)$
and present techniques which improve the running time significantly in
practice.
%B Research Report / Max-Planck-Institut für Informatik
Approximating sparsest cuts
N. Garg
Technical Report, 1997
N. Garg
Technical Report, 1997
Export
BibTeX
@techreport{Garg97,
TITLE = {Approximating sparsest cuts},
AUTHOR = {Garg, Naveen},
LANGUAGE = {eng},
NUMBER = {MPI-I-1997-1-002},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1997},
DATE = {1997},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Garg, Naveen
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Approximating sparsest cuts :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-9FD3-1
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1997
%P 9 p.
%B Research Report / Max-Planck-Institut für Informatik
Minimizing stall time in single and parallel disk systems
N. Garg, S. Albers and S. Leonardi
Technical Report, 1997
N. Garg, S. Albers and S. Leonardi
Technical Report, 1997
Abstract
We study integrated prefetching and caching problems following
the work of Cao et al. and Kimbrel and Karlin.
Cao et al. and Kimbrel and Karlin gave approximation algorithms
for minimizing the total elapsed time in single and
parallel disk settings. The total elapsed time is the sum of the processor
stall times and the length of the request sequence to be served.
We show that an optimum prefetching/caching schedule for a
single disk problem can be computed in polynomial time,
thereby settling an open question by Kimbrel and Karlin.
For the parallel disk problem we give an approximation algorithm for
minimizing stall time. Stall time is a more realistic and harder to
approximate measure for this problem. All of our algorithms are based on
a new approach which involves formulating the prefetching/caching problems
as integer programs.
Export
BibTeX
@techreport{AlbersGargLeonardi97,
TITLE = {Minimizing stall time in single and parallel disk systems},
AUTHOR = {Garg, Naveen and Albers, Susanne and Leonardi, Stefano},
LANGUAGE = {eng},
NUMBER = {MPI-I-1997-1-024},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1997},
DATE = {1997},
ABSTRACT = {We study integrated prefetching and caching problems following the work of Cao et al. and Kimbrel and Karlin. Cao et al. and Kimbrel and Karlin gave approximation algorithms for minimizing the total elapsed time in single and parallel disk settings. The total elapsed time is the sum of the processor stall times and the length of the request sequence to be served. We show that an optimum prefetching/caching schedule for a single disk problem can be computed in polynomial time, thereby settling an open question by Kimbrel and Karlin. For the parallel disk problem we give an approximation algorithm for minimizing stall time. Stall time is a more realistic and harder to approximate measure for this problem. All of our algorithms are based on a new approach which involves formulating the prefetching/caching problems as integer programs.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Garg, Naveen
%A Albers, Susanne
%A Leonardi, Stefano
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Minimizing stall time in single and parallel disk systems :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-9D69-1
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1997
%P 16 p.
%X We study integrated prefetching and caching problems following
the work of Cao et al. and Kimbrel and Karlin.
Cao et al. and Kimbrel and Karlin gave approximation algorithms
for minimizing the total elapsed time in single and
parallel disk settings. The total elapsed time is the sum of the processor
stall times and the length of the request sequence to be served.
We show that an optimum prefetching/caching schedule for a
single disk problem can be computed in polynomial time,
thereby settling an open question by Kimbrel and Karlin.
For the parallel disk problem we give an approximation algorithm for
minimizing stall time. Stall time is a more realistic and harder to
approximate measure for this problem. All of our algorithms are based on
a new approach which involves formulating the prefetching/caching problems
as integer programs.
%B Research Report / Max-Planck-Institut für Informatik
Parallel algorithms for MD-simulations of synthetic polymers
B. Jung, H.-P. Lenhof, P. Müller and C. Rüb
Technical Report, 1997
B. Jung, H.-P. Lenhof, P. Müller and C. Rüb
Technical Report, 1997
Abstract
Molecular dynamics simulation has become an important tool for testing
and developing hypotheses about chemical and physical processes. Since
the required amount of computing power is tremendous there is a strong
interest in parallel algorithms. We deal with efficient algorithms on
MIMD computers for
a special class of macromolecules, namely synthetic polymers, which play
a very important role in industry. This makes it worthwhile to design
fast parallel algorithms specifically for them. Contrary to existing
parallel algorithms, our algorithms take the structure of synthetic
polymers into account which allows faster simulation of their dynamics.
Export
BibTeX
@techreport{JungLenhofMullerRub97,
TITLE = {Parallel algorithms for {MD}-simulations of synthetic polymers},
AUTHOR = {Jung, Bernd and Lenhof, Hans-Peter and M{\"u}ller, Peter and R{\"u}b, Christine},
LANGUAGE = {eng},
NUMBER = {MPI-I-1997-1-003},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1997},
DATE = {1997},
ABSTRACT = {Molecular dynamics simulation has become an important tool for testing and developing hypotheses about chemical and physical processes. Since the required amount of computing power is tremendous there is a strong interest in parallel algorithms. We deal with efficient algorithms on MIMD computers for a special class of macromolecules, namely synthetic polymers, which play a very important role in industry. This makes it worthwhile to design fast parallel algorithms specifically for them. Contrary to existing parallel algorithms, our algorithms take the structure of synthetic polymers into account which allows faster simulation of their dynamics.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Jung, Bernd
%A Lenhof, Hans-Peter
%A Müller, Peter
%A Rüb, Christine
%+ External Organizations
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Parallel algorithms for MD-simulations of synthetic polymers :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-9FD0-7
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1997
%P 32 p.
%X Molecular dynamics simulation has become an important tool for testing
and developing hypotheses about chemical and physical processes. Since
the required amount of computing power is tremendous there is a strong
interest in parallel algorithms. We deal with efficient algorithms on
MIMD computers for
a special class of macromolecules, namely synthetic polymers, which play
a very important role in industry. This makes it worthwhile to design
fast parallel algorithms specifically for them. Contrary to existing
parallel algorithms, our algorithms take the structure of synthetic
polymers into account which allows faster simulation of their dynamics.
%B Research Report / Max-Planck-Institut für Informatik
Pitfalls of using PQ-Trees in automatic graph drawing
M. Jünger, S. Leipert and P. Mutzel
Technical Report, 1997
M. Jünger, S. Leipert and P. Mutzel
Technical Report, 1997
Abstract
A number of erroneous attempts involving $PQ$-trees
in the context of automatic graph drawing algorithms have
been presented in the literature in recent years.
In order to prevent
future research from constructing algorithms with similar errors we
point out some of the major mistakes.
In particular, we examine erroneous usage of the $PQ$-tree data
structure in algorithms for computing maximal planar subgraphs and an
algorithm for testing leveled planarity of leveled directed acyclic
graphs with several sources and sinks.
Export
BibTeX
@techreport{JungerLeipertMutzel97,
TITLE = {Pitfalls of using {PQ}-Trees in automatic graph drawing},
AUTHOR = {J{\"u}nger, Michael and Leipert, Sebastian and Mutzel, Petra},
LANGUAGE = {eng},
NUMBER = {MPI-I-97-1-015},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1997},
DATE = {1997},
ABSTRACT = {A number of erroneous attempts involving $PQ$-trees in the context of automatic graph drawing algorithms have been presented in the literature in recent years. In order to prevent future research from constructing algorithms with similar errors we point out some of the major mistakes. In particular, we examine erroneous usage of the $PQ$-tree data structure in algorithms for computing maximal planar subgraphs and an algorithm for testing leveled planarity of leveled directed acyclic graphs with several sources and sinks.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Jünger, Michael
%A Leipert, Sebastian
%A Mutzel, Petra
%+ External Organizations
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Pitfalls of using PQ-Trees in automatic graph drawing :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-9E13-A
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1997
%P 12 p.
%X A number of erroneous attempts involving $PQ$-trees
in the context of automatic graph drawing algorithms have
been presented in the literature in recent years.
In order to prevent
future research from constructing algorithms with similar errors we
point out some of the major mistakes.
In particular, we examine erroneous usage of the $PQ$-tree data
structure in algorithms for computing maximal planar subgraphs and an
algorithm for testing leveled planarity of leveled directed acyclic
graphs with several sources and sinks.
%B Research Report / Max-Planck-Institut für Informatik
New contact measures for the protein docking problem
H.-P. Lenhof
Technical Report, 1997
H.-P. Lenhof
Technical Report, 1997
Abstract
We have developed and
implemented a parallel distributed algorithm for the
rigid-body protein docking problem. The algorithm is based on
a new fitness function for evaluating the surface matching
of a given conformation.
The fitness function is defined as the weighted sum
of two contact measures, the {\em geometric contact measure}
and the {\em chemical contact measure}.
The geometric contact measure measures the ``size'' of the
contact area of two molecules. It is a potential function
that counts the ``van der Waals contacts'' between the atoms of the
two molecules (the algorithm does not compute
the Lennard-Jones potential).
The chemical contact measure is also based
on the ``van der Waals contacts'' principle: We consider
all atom pairs that have a ``van der Waals'' contact,
but instead of adding a constant for each pair $(a,b)$ we add a
``chemical weight'' that depends on the atom pair $(a,b)$.
We tested our docking algorithm with a test set that contains
the test examples of Norel et al.~\cite{NLWN94} and
\protect{Fischer} et al.~\cite{FLWN95} and compared the results of our
docking algorithm with the results of Norel et al.~\cite{NLWN94,NLWN95},
with the results of Fischer et al.~\cite{FLWN95} and
with the results of Meyer et al.~\cite{MWS96}.
In 32 of 35 test examples the best conformation with respect
to the fitness function was an approximation of the real
conformation.
Export
BibTeX
@techreport{Lenhof97,
TITLE = {New contact measures for the protein docking problem},
AUTHOR = {Lenhof, Hans-Peter},
LANGUAGE = {eng},
NUMBER = {MPI-I-97-1-004},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1997},
DATE = {1997},
ABSTRACT = {We have developed and implemented a parallel distributed algorithm for the rigid-body protein docking problem. The algorithm is based on a new fitness function for evaluating the surface matching of a given conformation. The fitness function is defined as the weighted sum of two contact measures, the {\em geometric contact measure} and the {\em chemical contact measure}. The geometric contact measure measures the ``size'' of the contact area of two molecules. It is a potential function that counts the ``van der Waals contacts'' between the atoms of the two molecules (the algorithm does not compute the Lennard-Jones potential). The chemical contact measure is also based on the ``van der Waals contacts'' principle: We consider all atom pairs that have a ``van der Waals'' contact, but instead of adding a constant for each pair $(a,b)$ we add a ``chemical weight'' that depends on the atom pair $(a,b)$. We tested our docking algorithm with a test set that contains the test examples of Norel et al.~\cite{NLWN94} and \protect{Fischer} et al.~\cite{FLWN95} and compared the results of our docking algorithm with the results of Norel et al.~\cite{NLWN94,NLWN95}, with the results of Fischer et al.~\cite{FLWN95} and with the results of Meyer et al.~\cite{MWS96}. In 32 of 35 test examples the best conformation with respect to the fitness function was an approximation of the real conformation.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Lenhof, Hans-Peter
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T New contact measures for the protein docking problem :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-9F7D-3
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1997
%P 10 p.
%X We have developed and
implemented a parallel distributed algorithm for the
rigid-body protein docking problem. The algorithm is based on
a new fitness function for evaluating the surface matching
of a given conformation.
The fitness function is defined as the weighted sum
of two contact measures, the {\em geometric contact measure}
and the {\em chemical contact measure}.
The geometric contact measure measures the ``size'' of the
contact area of two molecules. It is a potential function
that counts the ``van der Waals contacts'' between the atoms of the
two molecules (the algorithm does not compute
the Lennard-Jones potential).
The chemical contact measure is also based
on the ``van der Waals contacts'' principle: We consider
all atom pairs that have a ``van der Waals'' contact,
but instead of adding a constant for each pair $(a,b)$ we add a
``chemical weight'' that depends on the atom pair $(a,b)$.
We tested our docking algorithm with a test set that contains
the test examples of Norel et al.~\cite{NLWN94} and
\protect{Fischer} et al.~\cite{FLWN95} and compared the results of our
docking algorithm with the results of Norel et al.~\cite{NLWN94,NLWN95},
with the results of Fischer et al.~\cite{FLWN95} and
with the results of Meyer et al.~\cite{MWS96}.
In 32 of 35 test examples the best conformation with respect
to the fitness function was an approximation of the real
conformation.
%B Research Report
Randomized on-line call control revisited
S. Leonardi and A. P. Marchetti-Spaccamela
Technical Report, 1997
S. Leonardi and A. P. Marchetti-Spaccamela
Technical Report, 1997
Abstract
We consider the on-line problem of call admission and routing on
trees and meshes. Previous work considered randomized algorithms
and analyzed the {\em competitive ratio} of the algorithms.
However, these previous algorithms could obtain very low profit with
high probability.
We investigate whether it is possible to devise on-line
competitive algorithms for these problems that guarantee a ``good''
solution with ``good'' probability. We give a new family of
randomized algorithms with provably optimal (up to constant factors)
competitive ratios and a provably good probability of obtaining a
profit close to the expectation. We also give lower bounds on how
high this probability can be for any such algorithm.
We see this work as a first step towards understanding
how well the profit of a competitively-optimal randomized on-line
algorithm can be concentrated around its expectation.
Export
BibTeX
@techreport{LeonardiMarchetti-SpaccamelaPresciuttiRosten,
TITLE = {Randomized on-line call control revisited},
AUTHOR = {Leonardi, Stefano and Marchetti-Spaccamela, Alessio Presciutti},
LANGUAGE = {eng},
NUMBER = {MPI-I-97-1-023},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1997},
DATE = {1997},
ABSTRACT = {We consider the on-line problem of call admission and routing on trees and meshes. Previous work considered randomized algorithms and analyzed the {\em competitive ratio} of the algorithms. However, these previous algorithms could obtain very low profit with high probability. We investigate whether it is possible to devise on-line competitive algorithms for these problems that guarantee a ``good'' solution with ``good'' probability. We give a new family of randomized algorithms with provably optimal (up to constant factors) competitive ratios and a provably good probability of obtaining a profit close to the expectation. We also give lower bounds on how high this probability can be for any such algorithm. We see this work as a first step towards understanding how well the profit of a competitively-optimal randomized on-line algorithm can be concentrated around its expectation.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Leonardi, Stefano
%A Marchetti-Spaccamela, Alessio Presciutti
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
External Organizations
%T Randomized on-line call control revisited :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-9D6E-8
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1997
%P 19 p.
%X We consider the on-line problem of call admission and routing on
trees and meshes. Previous work considered randomized algorithms
and analyzed the {\em competitive ratio} of the algorithms.
However, these previous algorithms could obtain very low profit with
high probability.
We investigate whether it is possible to devise on-line
competitive algorithms for these problems that guarantee a ``good''
solution with ``good'' probability. We give a new family of
randomized algorithms with provably optimal (up to constant factors)
competitive ratios and a provably good probability of obtaining a
profit close to the expectation. We also give lower bounds on how
high this probability can be for any such algorithm.
We see this work as a first step towards understanding
how well the profit of a competitively-optimal randomized on-line
algorithm can be concentrated around its expectation.
%B Research Report / Max-Planck-Institut für Informatik
The practical use of the A* algorithm for exact multiple sequence alignment
M. Lermen and K. Reinert
Technical Report, 1997
M. Lermen and K. Reinert
Technical Report, 1997
Abstract
Multiple alignment is an important problem in computational biology. It is well known that it can be solved exactly by a dynamic programming algorithm which in turn can be interpreted as a shortest path computation in a directed acyclic graph. The $\cal{A}^*$ algorithm (or goal directed unidirectional search) is a technique that speeds up the computation of a shortest path by transforming the edge lengths without losing the optimality of the shortest path. We implemented the $\cal{A}^*$ algorithm in a computer program similar to MSA~\cite{GupKecSch95} and FMA~\cite{ShiIma97}. We incorporated in this program new bounding strategies for both lower and upper bounds and show that the $\cal{A}^*$ algorithm, together with our improvements, can speed up computations considerably. Additionally we show that the $\cal{A}^*$ algorithm together with a standard bounding technique is superior to the well-known Carrillo-Lipman bounding since it excludes more nodes from consideration.
Export
BibTeX
@techreport{LermenReinert97,
TITLE = {The practical use of the A* algorithm for exact multiple sequence alignment},
AUTHOR = {Lermen, Martin and Reinert, Knut},
LANGUAGE = {eng},
NUMBER = {MPI-I-97-1-028},
YEAR = {1997},
DATE = {1997},
ABSTRACT = {Multiple alignment is an important problem in computational biology. It is well known that it can be solved exactly by a dynamic programming algorithm which in turn can be interpreted as a shortest path computation in a directed acyclic graph. The $\cal{A}^*$ algorithm (or goal directed unidirectional search) is a technique that speeds up the computation of a shortest path by transforming the edge lengths without losing the optimality of the shortest path. We implemented the $\cal{A}^*$ algorithm in a computer program similar to MSA~\cite{GupKecSch95} and FMA~\cite{ShiIma97}. We incorporated in this program new bounding strategies for both lower and upper bounds and show that the $\cal{A}^*$ algorithm, together with our improvements, can speed up computations considerably. Additionally we show that the $\cal{A}^*$ algorithm together with a standard bounding technique is superior to the well known Carrillo-Lipman bounding since it excludes more nodes from consideration.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Lermen, Martin
%A Reinert, Knut
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T The practical use of the A* algorithm for exact multiple sequence alignment :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-9CD5-4
%D 1997
%X Multiple alignment is an important problem in computational biology. It is well known that it can be solved exactly by a dynamic programming algorithm which in turn can be interpreted as a shortest path computation in a directed acyclic graph. The $\cal{A}^*$ algorithm (or goal directed unidirectional search) is a technique that speeds up the computation of a shortest path by transforming the edge lengths without losing the optimality of the shortest path. We implemented the $\cal{A}^*$ algorithm in a computer program similar to MSA~\cite{GupKecSch95} and FMA~\cite{ShiIma97}. We incorporated in this program new bounding strategies for both lower and upper bounds and show that the $\cal{A}^*$ algorithm, together with our improvements, can speed up computations considerably. Additionally we show that the $\cal{A}^*$ algorithm together with a standard bounding technique is superior to the well known Carrillo-Lipman bounding since it excludes more nodes from consideration.
%B Research Report / Max-Planck-Institut für Informatik
An alternative method to crossing minimization on hierarchical graphs
P. Mutzel
Technical Report, 1997
P. Mutzel
Technical Report, 1997
Abstract
A common method for drawing directed graphs is, as a first step, to partition the
vertices into a set of $k$ levels and then, as a second step, to permute the
vertices within the levels such that the number of crossings is minimized.
We suggest an alternative method for the second step, namely, removing the minimal
number of edges such that the resulting graph is $k$-level planar. For the final
diagram the removed edges are reinserted into a $k$-level planar drawing. Hence,
instead of considering the $k$-level crossing minimization problem, we suggest
solving the $k$-level planarization problem.
In this paper we address the case $k=2$. First, we give a motivation for our approach.
Then, we address the problem of extracting a 2-level planar subgraph of maximum
weight in a given 2-level graph. This problem is NP-hard. Based on a characterization
of 2-level planar graphs, we give an integer linear programming formulation for
the 2-level planarization problem. Moreover, we define and investigate the polytope
$\2LPS(G)$ associated with the set of all 2-level planar subgraphs of a given
2-level graph $G$. We will see that this polytope has full dimension and that the
inequalities occurring in the integer linear description are facet-defining for
$\2LPS(G)$.
The inequalities in the integer linear programming formulation can be separated in
polynomial time, hence they can be used efficiently in a branch-and-cut method
for solving practical instances of the 2-level planarization problem.
Furthermore, we derive new inequalities that substantially improve the quality of
the obtained solution. We report on extensive computational results.
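For intuition about the quantity that the crossing-minimization step (which the report argues against) tries to minimize, here is a minimal sketch, not taken from the report: identifying each vertex with its integer position on its level is an assumption, and two edges of a 2-level drawing cross exactly when their endpoints appear in opposite relative order on the two levels.

```python
from itertools import combinations

def two_level_crossings(edges):
    """Count edge crossings in a 2-level drawing. Each edge (u, v) joins
    position u on the upper level to position v on the lower level; a
    pair of edges crosses iff their endpoints are oppositely ordered."""
    return sum(1 for (a, b), (c, d) in combinations(edges, 2)
               if (a - c) * (b - d) < 0)

print(two_level_crossings([(0, 1), (1, 0), (2, 2)]))  # → 1
```

Minimizing this count over all permutations of each level is the classical (NP-hard) second step; the report instead deletes a minimum-weight edge set so that the remaining graph is 2-level planar, i.e. has zero crossings.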
Export
BibTeX
@techreport{Mutzel97,
TITLE = {An alternative method to crossing minimization on hierarchical graphs},
AUTHOR = {Mutzel, Petra},
LANGUAGE = {eng},
NUMBER = {MPI-I-1997-1-008},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1997},
DATE = {1997},
ABSTRACT = {A common method for drawing directed graphs is, as a first step, to partition the vertices into a set of $k$ levels and then, as a second step, to permute the vertices within the levels such that the number of crossings is minimized. We suggest an alternative method for the second step, namely, removing the minimal number of edges such that the resulting graph is $k$-level planar. For the final diagram the removed edges are reinserted into a $k$-level planar drawing. Hence, instead of considering the $k$-level crossing minimization problem, we suggest solving the $k$-level planarization problem. In this paper we address the case $k=2$. First, we give a motivation for our approach. Then, we address the problem of extracting a 2-level planar subgraph of maximum weight in a given 2-level graph. This problem is NP-hard. Based on a characterization of 2-level planar graphs, we give an integer linear programming formulation for the 2-level planarization problem. Moreover, we define and investigate the polytope $\2LPS(G)$ associated with the set of all 2-level planar subgraphs of a given 2-level graph $G$. We will see that this polytope has full dimension and that the inequalities occurring in the integer linear description are facet-defining for $\2LPS(G)$. The inequalities in the integer linear programming formulation can be separated in polynomial time, hence they can be used efficiently in a branch-and-cut method for solving practical instances of the 2-level planarization problem. Furthermore, we derive new inequalities that substantially improve the quality of the obtained solution. We report on extensive computational results.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Mutzel, Petra
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T An alternative method to crossing minimization on hierarchical graphs :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-9E22-8
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1997
%P 15 p.
%X A common method for drawing directed graphs is, as a first step, to partition the
vertices into a set of $k$ levels and then, as a second step, to permute the
vertices within the levels such that the number of crossings is minimized.
We suggest an alternative method for the second step, namely, removing the minimal
number of edges such that the resulting graph is $k$-level planar. For the final
diagram the removed edges are reinserted into a $k$-level planar drawing. Hence,
instead of considering the $k$-level crossing minimization problem, we suggest
solving the $k$-level planarization problem.
In this paper we address the case $k=2$. First, we give a motivation for our approach.
Then, we address the problem of extracting a 2-level planar subgraph of maximum
weight in a given 2-level graph. This problem is NP-hard. Based on a characterization
of 2-level planar graphs, we give an integer linear programming formulation for
the 2-level planarization problem. Moreover, we define and investigate the polytope
$\2LPS(G)$ associated with the set of all 2-level planar subgraphs of a given
2-level graph $G$. We will see that this polytope has full dimension and that the
inequalities occurring in the integer linear description are facet-defining for
$\2LPS(G)$.
The inequalities in the integer linear programming formulation can be separated in
polynomial time, hence they can be used efficiently in a branch-and-cut method
for solving practical instances of the 2-level planarization problem.
Furthermore, we derive new inequalities that substantially improve the quality of
the obtained solution. We report on extensive computational results.
%B Research Report / Max-Planck-Institut für Informatik
On Batcher’s Merge Sorts as Parallel Sorting Algorithms
C. Rüb
Technical Report, 1997
C. Rüb
Technical Report, 1997
Abstract
In this paper we examine the average running times of Batcher's bitonic
merge and Batcher's odd-even merge when they are used as parallel merging
algorithms. It has been shown previously that the running time of
odd-even merge can be upper bounded by a function of the maximal rank difference
for elements in the two input sequences. Here we give an almost matching lower bound
for odd-even merge as well as a similar upper bound for (a special version
of) bitonic merge.
From this it follows that the average running time of odd-even merge (bitonic
merge) is $\Theta((n/p)(1+\log(1+p^2/n)))$ ($O((n/p)(1+\log(1+p^2/n)))$, resp.)
where $n$ is the size of the input and $p$ is the number of processors used.
Using these results we then show that the average running times of
odd-even merge sort and bitonic merge sort are $O((n/p)(\log n + (\log(1+p^2/n))^2))$,
that is, the two algorithms are optimal on the average if
$n\geq p^2/2^{\sqrt{\log p}}$.
The derived bounds do not allow a direct comparison of the two sorting algorithms;
we therefore also compare them experimentally, using an implemented program, for
various sizes of input and numbers of processors.
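The merge under analysis is Batcher's classical network; a short sequential rendering (an illustrative textbook sketch, not the report's average-case machinery) shows its recursive structure:

```python
def odd_even_merge(a, b):
    """Batcher's odd-even merge: recursively merge the even-indexed and
    odd-indexed subsequences of the two sorted inputs, interleave the
    results, and repair with one round of compare-exchanges on
    neighbouring pairs."""
    if not a or not b:
        return a + b
    if len(a) == 1 and len(b) == 1:
        return [min(a[0], b[0]), max(a[0], b[0])]
    even = odd_even_merge(a[0::2], b[0::2])
    odd = odd_even_merge(a[1::2], b[1::2])
    out = []
    for x, y in zip(even, odd):          # interleave: e0 o0 e1 o1 ...
        out += [x, y]
    out += even[len(odd):] + odd[len(even):]
    for i in range(1, len(out) - 1, 2):  # compare-exchange (1,2), (3,4), ...
        if out[i] > out[i + 1]:
            out[i], out[i + 1] = out[i + 1], out[i]
    return out

print(odd_even_merge([1, 3, 5, 7], [2, 4, 6, 8]))  # → [1, 2, ..., 8]
```

Because all comparisons happen at fixed positions, the scheme is a comparison network: the compare-exchange rounds can run in parallel, which is the setting whose average running time the report analyses.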
Export
BibTeX
@techreport{Rub97,
TITLE = {On Batcher's Merge Sorts as Parallel Sorting Algorithms},
AUTHOR = {R{\"u}b, Christine},
LANGUAGE = {eng},
NUMBER = {MPI-I-1997-1-012},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1997},
DATE = {1997},
ABSTRACT = {In this paper we examine the average running times of Batcher's bitonic merge and Batcher's odd-even merge when they are used as parallel merging algorithms. It has been shown previously that the running time of odd-even merge can be upper bounded by a function of the maximal rank difference for elements in the two input sequences. Here we give an almost matching lower bound for odd-even merge as well as a similar upper bound for (a special version of) bitonic merge. From this it follows that the average running time of odd-even merge (bitonic merge) is $\Theta((n/p)(1+\log(1+p^2/n)))$ ($O((n/p)(1+\log(1+p^2/n)))$, resp.) where $n$ is the size of the input and $p$ is the number of processors used. Using these results we then show that the average running times of odd-even merge sort and bitonic merge sort are $O((n/p)(\log n + (\log(1+p^2/n))^2))$, that is, the two algorithms are optimal on the average if $n\geq p^2/2^{\sqrt{\log p}}$. The derived bounds do not allow a direct comparison of the two sorting algorithms; we therefore also compare them experimentally, using an implemented program, for various sizes of input and numbers of processors.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Rüb, Christine
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T On Batcher's Merge Sorts as Parallel Sorting Algorithms :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-9E16-4
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1997
%P 23 p.
%X In this paper we examine the average running times of Batcher's bitonic
merge and Batcher's odd-even merge when they are used as parallel merging
algorithms. It has been shown previously that the running time of
odd-even merge can be upper bounded by a function of the maximal rank difference
for elements in the two input sequences. Here we give an almost matching lower bound
for odd-even merge as well as a similar upper bound for (a special version
of) bitonic merge.
From this it follows that the average running time of odd-even merge (bitonic
merge) is $\Theta((n/p)(1+\log(1+p^2/n)))$ ($O((n/p)(1+\log(1+p^2/n)))$, resp.)
where $n$ is the size of the input and $p$ is the number of processors used.
Using these results we then show that the average running times of
odd-even merge sort and bitonic merge sort are $O((n/p)(\log n + (\log(1+p^2/n))^2))$,
that is, the two algorithms are optimal on the average if
$n\geq p^2/2^{\sqrt{\log p}}$.
The derived bounds do not allow a direct comparison of the two sorting algorithms;
we therefore also compare them experimentally, using an implemented program, for
various sizes of input and numbers of processors.
%B Research Report / Max-Planck-Institut für Informatik
Designing a Computational Geometry Algorithms Library
S. Schirra
Technical Report, 1997
S. Schirra
Technical Report, 1997
Abstract
In these notes, which were originally written as lecture
notes for Advanced School on Algorithmic Foundations of Geographic
Information Systems, CISM, held in Udine, Italy, in September, 1996,
we discuss issues related to the design of a computational
geometry algorithms library.
We discuss modularity and generality, efficiency and robustness, and
ease of use. We argue that exact geometric
computation is the most promising approach to ensure robustness
in a geometric algorithms library.
Many of the presented concepts have been developed
jointly in the kernel design group of CGAL and/or in the geometry group of
LEDA. However, the view held in these notes is a personal view, not
the official view of CGAL.
Export
BibTeX
@techreport{Schirra97,
TITLE = {Designing a Computational Geometry Algorithms Library},
AUTHOR = {Schirra, Stefan},
LANGUAGE = {eng},
NUMBER = {MPI-I-1997-1-014},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1997},
DATE = {1997},
ABSTRACT = {In these notes, which were originally written as lecture notes for Advanced School on Algorithmic Foundations of Geographic Information Systems, CISM, held in Udine, Italy, in September, 1996, we discuss issues related to the design of a computational geometry algorithms library. We discuss modularity and generality, efficiency and robustness, and ease of use. We argue that exact geometric computation is the most promising approach to ensure robustness in a geometric algorithms library. Many of the presented concepts have been developed jointly in the kernel design group of CGAL and/or in the geometry group of LEDA. However, the view held in these notes is a personal view, not the official view of CGAL.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Schirra, Stefan
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Designing a Computational Geometry Algorithms Library :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-9D89-8
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1997
%P 8 p.
%X In these notes, which were originally written as lecture
notes for Advanced School on Algorithmic Foundations of Geographic
Information Systems, CISM, held in Udine, Italy, in September, 1996,
we discuss issues related to the design of a computational
geometry algorithms library.
We discuss modularity and generality, efficiency and robustness, and
ease of use. We argue that exact geometric
computation is the most promising approach to ensure robustness
in a geometric algorithms library.
Many of the presented concepts have been developed
jointly in the kernel design group of CGAL and/or in the geometry group of
LEDA. However, the view held in these notes is a personal view, not
the official view of CGAL.
%B Research Report / Max-Planck-Institut für Informatik
From parallel to external list ranking
J. Sibeyn
Technical Report, 1997
J. Sibeyn
Technical Report, 1997
Abstract
Novel algorithms are presented for parallel and external memory
list-ranking. The same algorithms can be used for computing basic
tree functions, such as the depth of a node.
The parallel algorithm stands out through its low memory use, its
simplicity and its performance. For a large range of problem sizes,
it is almost as fast as the fastest previous algorithms. On a
Paragon with 100 PUs, each holding 10^6 nodes, we obtain speed-up 25.
For external-memory list-ranking, the best algorithm so far is
an optimized version of independent-set-removal. Actually,
this algorithm is not good at all: for a list of length N, the
paging volume is about 72 N. Our new algorithm reduces
this to 18 N. The algorithm has been implemented,
and the theoretical results are confirmed.
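The report's algorithms improve on the textbook parallel baseline, pointer jumping, which a short sketch makes concrete (illustrative only: the function name and the synchronous-copy simulation are assumptions, and this is not the low-memory algorithm of the report):

```python
def list_rank(succ):
    """Pointer jumping (Wyllie's classic scheme): every node repeatedly
    adds its successor's rank to its own and doubles its pointer.
    succ[i] is the successor of node i; the tail points to itself.
    Returns rank[i] = number of links from node i to the tail."""
    n = len(succ)
    succ = list(succ)
    rank = [0 if succ[i] == i else 1 for i in range(n)]
    for _ in range(max(1, (n - 1).bit_length())):  # ceil(log2 n) rounds
        r, s = rank[:], succ[:]   # copying simulates a synchronous PRAM step
        for i in range(n):        # on a PRAM this loop runs in parallel
            rank[i] = r[i] + r[s[i]]
            succ[i] = s[s[i]]
    return rank

print(list_rank([1, 2, 3, 3]))  # list 0 -> 1 -> 2 -> 3; → [3, 2, 1, 0]
```

Pointer jumping does O(n log n) total work over O(log n) rounds; work-optimal schemes (and the independent-set-removal approach mentioned above for external memory) reduce this, at the price of more complicated bookkeeping.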
Export
BibTeX
@techreport{Sibeyn97,
TITLE = {From parallel to external list ranking},
AUTHOR = {Sibeyn, Jop},
LANGUAGE = {eng},
NUMBER = {MPI-I-1997-1-021},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1997},
DATE = {1997},
ABSTRACT = {Novel algorithms are presented for parallel and external memory list-ranking. The same algorithms can be used for computing basic tree functions, such as the depth of a node. The parallel algorithm stands out through its low memory use, its simplicity and its performance. For a large range of problem sizes, it is almost as fast as the fastest previous algorithms. On a Paragon with 100 PUs, each holding 10^6 nodes, we obtain speed-up 25. For external-memory list-ranking, the best algorithm so far is an optimized version of independent-set-removal. Actually, this algorithm is not good at all: for a list of length N, the paging volume is about 72 N. Our new algorithm reduces this to 18 N. The algorithm has been implemented, and the theoretical results are confirmed.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Sibeyn, Jop
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T From parallel to external list ranking :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-9D76-1
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1997
%P 15 p.
%X Novel algorithms are presented for parallel and external memory
list-ranking. The same algorithms can be used for computing basic
tree functions, such as the depth of a node.
The parallel algorithm stands out through its low memory use, its
simplicity and its performance. For a large range of problem sizes,
it is almost as fast as the fastest previous algorithms. On a
Paragon with 100 PUs, each holding 10^6 nodes, we obtain speed-up 25.
For external-memory list-ranking, the best algorithm so far is
an optimized version of independent-set-removal. Actually,
this algorithm is not good at all: for a list of length N, the
paging volume is about 72 N. Our new algorithm reduces
this to 18 N. The algorithm has been implemented,
and the theoretical results are confirmed.
%B Research Report / Max-Planck-Institut für Informatik
BSP-like external-memory computation
J. Sibeyn and M. Kaufmann
Technical Report, 1997
J. Sibeyn and M. Kaufmann
Technical Report, 1997
Abstract
In this paper we present a paradigm for solving external-memory
problems, and illustrate it by algorithms for matrix multiplication,
sorting, list ranking, transitive closure and FFT. Our paradigm is
based on the use of BSP algorithms. The correspondence is almost
perfect, and especially the notion of x-optimality carries over
to algorithms designed according to our paradigm.
The advantages of the approach are similar to the advantages of
BSP algorithms for parallel computing: scalability, portability,
predictability. The performance measure here is the total work, not
only the number of I/O operations as in previous approaches. The
predicted performances are therefore more useful for practical
applications.
Export
BibTeX
@techreport{SibeynKaufmann97,
TITLE = {{BSP}-like external-memory computation},
AUTHOR = {Sibeyn, Jop and Kaufmann, Michael},
LANGUAGE = {eng},
NUMBER = {MPI-I-1997-1-001},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1997},
DATE = {1997},
ABSTRACT = {In this paper we present a paradigm for solving external-memory problems, and illustrate it by algorithms for matrix multiplication, sorting, list ranking, transitive closure and FFT. Our paradigm is based on the use of BSP algorithms. The correspondence is almost perfect, and especially the notion of x-optimality carries over to algorithms designed according to our paradigm. The advantages of the approach are similar to the advantages of BSP algorithms for parallel computing: scalability, portability, predictability. The performance measure here is the total work, not only the number of I/O operations as in previous approaches. The predicted performances are therefore more useful for practical applications.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Sibeyn, Jop
%A Kaufmann, Michael
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T BSP-like external-memory computation :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-9FD6-C
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1997
%P 14 p.
%X In this paper we present a paradigm for solving external-memory
problems, and illustrate it by algorithms for matrix multiplication,
sorting, list ranking, transitive closure and FFT. Our paradigm is
based on the use of BSP algorithms. The correspondence is almost
perfect, and especially the notion of x-optimality carries over
to algorithms designed according to our paradigm.
The advantages of the approach are similar to the advantages of
BSP algorithms for parallel computing: scalability, portability,
predictability. The performance measure here is the total work, not
only the number of I/O operations as in previous approaches. The
predicted performances are therefore more useful for practical
applications.
%B Research Report / Max-Planck-Institut für Informatik
Faster deterministic sorting and priority queues in linear space
M. Thorup
Technical Report, 1997
M. Thorup
Technical Report, 1997
Abstract
The RAM complexity of deterministic linear space sorting of
integers in words is improved from $O(n\sqrt{\log n})$ to
$O(n(\log\log n)^2)$. No better
bounds are known for polynomial space. In fact, the techniques give a
deterministic linear space priority queue supporting insert and delete in
$O((\log\log n)^2)$ amortized time and find-min in constant time. The priority
queue can be implemented using addition, shift, and
bit-wise boolean operations.
Export
BibTeX
@techreport{Mikkel97,
TITLE = {Faster deterministic sorting and priority queues in linear space},
AUTHOR = {Thorup, Mikkel},
LANGUAGE = {eng},
NUMBER = {MPI-I-1997-1-016},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1997},
DATE = {1997},
ABSTRACT = {The RAM complexity of deterministic linear space sorting of integers in words is improved from $O(n\sqrt{\log n})$ to $O(n(\log\log n)^2)$. No better bounds are known for polynomial space. In fact, the techniques give a deterministic linear space priority queue supporting insert and delete in $O((\log\log n)^2)$ amortized time and find-min in constant time. The priority queue can be implemented using addition, shift, and bit-wise boolean operations.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Thorup, Mikkel
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Faster deterministic sorting and priority queues in linear space :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-9D86-E
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1997
%P 9 p.
%X The RAM complexity of deterministic linear space sorting of
integers in words is improved from $O(n\sqrt{\log n})$ to
$O(n(\log\log n)^2)$. No better
bounds are known for polynomial space. In fact, the techniques give a
deterministic linear space priority queue supporting insert and delete in
$O((\log\log n)^2)$ amortized time and find-min in constant time. The priority
queue can be implemented using addition, shift, and
bit-wise boolean operations.
%B Research Report / Max-Planck-Institut für Informatik
Bicriteria job sequencing with release dates
Y. Wang
Technical Report, 1997
Y. Wang
Technical Report, 1997
Abstract
We consider the single machine job sequencing problem
with release dates. The main purpose of this paper
is to investigate efficient and effective
approximation algorithms with a bicriteria performance
guarantee. That is, for some $(\rho_1, \rho_2)$, they
find schedules simultaneously within a factor of $\rho_1$ of
the minimum total weighted completion times and
within a factor of $\rho_2$ of the minimum makespan.
The main results of the paper are summarized as follows.
First, we present a new $O(n\log n)$ algorithm with the performance
guarantee $\left(1+\frac{1}{\beta}, 1+\beta\right)$ for any
$\beta \in [0,1]$. For the problem with integer processing times
and release dates, the algorithm has the bicriteria performance guarantee
$\left(2-\frac{1}{p_{max}}, 2-\frac{1}{p_{max}}\right)$,
where $p_{max}$ is the maximum processing time.
Next, we study an elegant approximation algorithm
introduced recently by Goemans. We show that
its randomized version has expected bicriteria performance
guarantee $(1.7735, 1.51)$ and the derandomized
version has the guarantee $(1.7735, 2-\frac{1}{p_{max}})$.
To establish the performance guarantee, we also use two
LP relaxations and some randomization techniques
as Goemans does, but take a different approach
in the analysis, based on a decomposition theorem. Finally, we
present a family of bad instances showing that
it is impossible to achieve $\rho_1\leq 1.5$ with this LP lower
bound.
Export
BibTeX
@techreport{Wang1997,
TITLE = {Bicriteria job sequencing with release dates},
AUTHOR = {Wang, Yaoguang},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1997-1-005},
NUMBER = {MPI-I-1997-1-005},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1997},
DATE = {1997},
ABSTRACT = {We consider the single machine job sequencing problem with release dates. The main purpose of this paper is to investigate efficient and effective approximation algorithms with a bicriteria performance guarantee. That is, for some $(\rho_1, \rho_2)$, they find schedules simultaneously within a factor of $\rho_1$ of the minimum total weighted completion times and within a factor of $\rho_2$ of the minimum makespan. The main results of the paper are summarized as follows. First, we present a new $O(n\log n)$ algorithm with the performance guarantee $\left(1+\frac{1}{\beta}, 1+\beta\right)$ for any $\beta \in [0,1]$. For the problem with integer processing times and release dates, the algorithm has the bicriteria performance guarantee $\left(2-\frac{1}{p_{max}}, 2-\frac{1}{p_{max}}\right)$, where $p_{max}$ is the maximum processing time. Next, we study an elegant approximation algorithm introduced recently by Goemans. We show that its randomized version has expected bicriteria performance guarantee $(1.7735, 1.51)$ and the derandomized version has the guarantee $(1.7735, 2-\frac{1}{p_{max}})$. To establish the performance guarantee, we also use two LP relaxations and some randomization techniques as Goemans does, but take a different approach in the analysis, based on a decomposition theorem. Finally, we present a family of bad instances showing that it is impossible to achieve $\rho_1\leq 1.5$ with this LP lower bound.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Wang, Yaoguang
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Bicriteria job sequencing with release dates :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-9F79-B
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1997-1-005
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1997
%P 18 p.
%X We consider the single machine job sequencing problem
with release dates. The main purpose of this paper
is to investigate efficient and effective
approximation algorithms with a bicriteria performance
guarantee. That is, for some $(\rho_1, \rho_2)$, they
find schedules simultaneously within a factor of $\rho_1$ of
the minimum total weighted completion times and
within a factor of $\rho_2$ of the minimum makespan.
The main results of the paper are summarized as follows.
First, we present a new $O(n\log n)$ algorithm with the performance
guarantee $\left(1+\frac{1}{\beta}, 1+\beta\right)$ for any
$\beta \in [0,1]$. For the problem with integer processing times
and release dates, the algorithm has the bicriteria performance guarantee
$\left(2-\frac{1}{p_{max}}, 2-\frac{1}{p_{max}}\right)$,
where $p_{max}$ is the maximum processing time.
Next, we study an elegant approximation algorithm
introduced recently by Goemans. We show that
its randomized version has expected bicriteria performance
guarantee $(1.7735, 1.51)$ and the derandomized
version has the guarantee $(1.7735, 2-\frac{1}{p_{max}})$.
To establish the performance guarantee, we also use two
LP relaxations and some randomization techniques
as Goemans does, but take a different approach
in the analysis, based on a decomposition theorem. Finally, we
present a family of bad instances showing that
it is impossible to achieve $\rho_1\leq 1.5$ with this LP lower
bound.
%B Research Report / Max-Planck-Institut für Informatik
1996
A survey of self-organizing data structures
S. Albers and J. Westbrook
Technical Report, 1996
S. Albers and J. Westbrook
Technical Report, 1996
Abstract
This paper surveys results in the design and analysis of self-organizing
data structures for the search problem. We concentrate on two simple but
very popular data structures: the unsorted linear list and the binary search
tree. A self-organizing data structure has a rule or algorithm for
changing pointers or state data. The self-organizing rule is designed to
get the structure into a good state so that future operations can be
processed efficiently. Self-organizing data structures differ from constraint
structures in that no structural invariant, such as a balance constraint in
a binary search tree, has to be satisfied.
In the area of self-organizing linear lists we present a series of
deterministic and randomized on-line algorithms. We concentrate on
competitive algorithms, i.e., algorithms that have a guaranteed performance
with respect to an optimal offline algorithm.
In the area of binary search trees we present both on-line and off-line
algorithms. We also discuss a famous self-organizing
on-line rule called splaying and present important theorems and
open conjectures on splay trees. In the third part of the paper we show
that algorithms for self-organizing lists and trees can be used to build
very effective data compression schemes. We report on theoretical
and experimental results.
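The best-known self-organizing rule for linear lists, move-to-front, can be sketched directly from the description above (the cost model, where accessing position $i$ costs $i$, follows the standard list-update literature; the function name is made up):

```python
def mtf_access_cost(requests, items):
    """Serve a request sequence with the move-to-front rule: accessing
    the item at (1-based) position i costs i, and the accessed item
    then moves to the head of the list."""
    lst = list(items)
    total = 0
    for x in requests:
        i = lst.index(x)            # 0-based position of the request
        total += i + 1              # access cost in the standard model
        lst.insert(0, lst.pop(i))   # the self-organizing rule
    return total

print(mtf_access_cost("abcab", "abc"))  # → 12
```

Sleator and Tarjan's result that move-to-front is 2-competitive against an optimal offline algorithm is one of the central theorems such a survey covers.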
Export
BibTeX
@techreport{AlbersWestbrook96,
TITLE = {A survey of self-organizing data structures},
AUTHOR = {Albers, Susanne and Westbrook, Jeffery},
LANGUAGE = {eng},
NUMBER = {MPI-I-1996-1-026},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1996},
DATE = {1996},
ABSTRACT = {This paper surveys results in the design and analysis of self-organizing data structures for the search problem. We concentrate on two simple but very popular data structures: the unsorted linear list and the binary search tree. A self-organizing data structure has a rule or algorithm for changing pointers or state data. The self-organizing rule is designed to get the structure into a good state so that future operations can be processed efficiently. Self-organizing data structures differ from constraint structures in that no structural invariant, such as a balance constraint in a binary search tree, has to be satisfied. In the area of self-organizing linear lists we present a series of deterministic and randomized on-line algorithms. We concentrate on competitive algorithms, i.e., algorithms that have a guaranteed performance with respect to an optimal offline algorithm. In the area of binary search trees we present both on-line and off-line algorithms. We also discuss a famous self-organizing on-line rule called splaying and present important theorems and open conjectures on splay trees. In the third part of the paper we show that algorithms for self-organizing lists and trees can be used to build very effective data compression schemes. We report on theoretical and experimental results.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Albers, Susanne
%A Westbrook, Jeffery
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
External Organizations
%T A survey of self-organizing data structures :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-A03D-0
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1996
%P 39 p.
%X This paper surveys results in the design and analysis of self-organizing
data structures for the search problem. We concentrate on two simple but
very popular data structures: the unsorted linear list and the binary search
tree. A self-organizing data structure has a rule or algorithm for
changing pointers or state data. The self-organizing rule is designed to
get the structure into a good state so that future operations can be
processed efficiently. Self-organizing data structures differ from constraint
structures in that no structural invariant, such as a balance constraint in
a binary search tree, has to be satisfied.
In the area of self-organizing linear lists we present a series of
deterministic and randomized on-line algorithms. We concentrate on
competitive algorithms, i.e., algorithms that have a guaranteed performance
with respect to an optimal offline algorithm.
In the area of binary search trees we present both on-line and off-line
algorithms. We also discuss a famous self-organizing
on-line rule called splaying and present important theorems and
open conjectures on splay trees. In the third part of the paper we show
that algorithms for self-organizing lists and trees can be used to build
very effective data compression schemes. We report on theoretical
and experimental results.
%B Research Report / Max-Planck-Institut für Informatik
All-pairs min-cut in sparse networks
S. Arikati, S. Chaudhuri and C. Zaroliagis
Technical Report, 1996
S. Arikati, S. Chaudhuri and C. Zaroliagis
Technical Report, 1996
Abstract
Algorithms are presented for the all-pairs min-cut problem in bounded tree-width, planar and sparse networks. The approach used is to preprocess the input $n$-vertex network so that, afterwards, the value of a min-cut between any two vertices can be efficiently computed. A tradeoff is shown between the preprocessing time and the time taken to compute min-cuts subsequently. In particular, after an $O(n\log n)$ preprocessing of a bounded tree-width network, it is possible to find the value of a min-cut between any two vertices in constant time. This implies that for such networks the all-pairs min-cut problem can be solved in time $O(n^2)$.
This algorithm is used in conjunction with a graph decomposition technique of Frederickson to obtain algorithms for sparse and planar networks. The running times depend upon a topological property, $\gamma$, of the input network.
The parameter $\gamma$ varies between 1 and $\Theta(n)$; the algorithms perform well when $\gamma = o(n)$.
The value of a min-cut can be found in time $O(n + \gamma^2 \log \gamma)$ and all-pairs min-cut can be solved in time $O(n^2 + \gamma^4 \log \gamma)$ for sparse networks. The corresponding running times for planar networks are $O(n+\gamma \log \gamma)$ and $O(n^2 + \gamma^3 \log \gamma)$, respectively. The latter bounds depend on a result of independent interest: outerplanar networks have small ``mimicking'' networks which are also outerplanar.
Export
BibTeX
@techreport{ArikatiChaudhuriZaroliagis96,
TITLE = {All-pairs min-cut in sparse networks},
AUTHOR = {Arikati, Srinivasa and Chaudhuri, Shiva and Zaroliagis, Christos},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1996-1-007},
NUMBER = {MPI-I-1996-1-007},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1996},
DATE = {1996},
ABSTRACT = {Algorithms are presented for the all-pairs min-cut problem in bounded tree-width, planar and sparse networks. The approach used is to preprocess the input $n$-vertex network so that, afterwards, the value of a min-cut between any two vertices can be efficiently computed. A tradeoff is shown between the preprocessing time and the time taken to compute min-cuts subsequently. In particular, after an $O(n\log n)$ preprocessing of a bounded tree-width network, it is possible to find the value of a min-cut between any two vertices in constant time. This implies that for such networks the all-pairs min-cut problem can be solved in time $O(n^2)$. This algorithm is used in conjunction with a graph decomposition technique of Frederickson to obtain algorithms for sparse and planar networks. The running times depend upon a topological property, $\gamma$, of the input network. The parameter $\gamma$ varies between 1 and $\Theta(n)$; the algorithms perform well when $\gamma = o(n)$. The value of a min-cut can be found in time $O(n + \gamma^2 \log \gamma)$ and all-pairs min-cut can be solved in time $O(n^2 + \gamma^4 \log \gamma)$ for sparse networks. The corresponding running times for planar networks are $O(n+\gamma \log \gamma)$ and $O(n^2 + \gamma^3 \log \gamma)$, respectively. The latter bounds depend on a result of independent interest: outerplanar networks have small ``mimicking'' networks which are also outerplanar.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Arikati, Srinivasa
%A Chaudhuri, Shiva
%A Zaroliagis, Christos
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T All-pairs min-cut in sparse networks :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-A418-4
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1996-1-007
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1996
%P 27 p.
%X Algorithms are presented for the all-pairs min-cut problem in bounded tree-width, planar and sparse networks. The approach used is to preprocess the input $n$-vertex network so that, afterwards, the value of a min-cut between any two vertices can be efficiently computed. A tradeoff is shown between the preprocessing time and the time taken to compute min-cuts subsequently. In particular, after an $O(n\log n)$ preprocessing of a bounded tree-width network, it is possible to find the value of a min-cut between any two vertices in constant time. This implies that for such networks the all-pairs min-cut problem can be solved in time $O(n^2)$.
This algorithm is used in conjunction with a graph decomposition technique of Frederickson to obtain algorithms for sparse and planar networks. The running times depend upon a topological property, $\gamma$, of the input network.
The parameter $\gamma$ varies between 1 and $\Theta(n)$; the algorithms perform well when $\gamma = o(n)$.
The value of a min-cut can be found in time $O(n + \gamma^2 \log \gamma)$ and all-pairs min-cut can be solved in time $O(n^2 + \gamma^4 \log \gamma)$ for sparse networks. The corresponding running times for planar networks are $O(n+\gamma \log \gamma)$ and $O(n^2 + \gamma^3 \log \gamma)$, respectively. The latter bounds depend on a result of independent interest: outerplanar networks have small ``mimicking'' networks which are also outerplanar.
%B Research Report / Max-Planck-Institut für Informatik
Lower bounds for row minima searching
P. G. Bradford and K. Reinert
Technical Report, 1996
P. G. Bradford and K. Reinert
Technical Report, 1996
Abstract
This paper shows that finding the row minima (maxima) in an
$n \times n$ totally monotone matrix in the worst case requires
any algorithm to make $3n-5$ comparisons or $4n -5$ matrix accesses.
In contrast, the so-called SMAWK algorithm of Aggarwal {\em et al.\/}
finds the row minima in no more than $5n - 2 \lg n - 6$ comparisons.
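To make the problem these bounds refer to concrete, the naive $O(n^2)$ scan below computes row minima in Python (an illustrative sketch; the SMAWK algorithm itself, which needs only $O(n)$ comparisons on totally monotone matrices, is not implemented here).

```python
def row_minima(M):
    """Index of the leftmost minimum in each row of matrix M.
    Naive O(n^2) scan over all entries; illustrative only."""
    return [min(range(len(row)), key=lambda j: row[j]) for row in M]

# In a totally monotone matrix the row-minima column indices
# are nondecreasing from top to bottom, as here:
M = [[1, 2, 3],
     [4, 2, 3],
     [5, 4, 3]]
print(row_minima(M))  # [0, 1, 2]
```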
Export
BibTeX
@techreport{BradfordReinert96,
TITLE = {Lower bounds for row minima searching},
AUTHOR = {Bradford, Phillip Gnassi and Reinert, Knut},
LANGUAGE = {eng},
NUMBER = {MPI-I-1996-1-029},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1996},
DATE = {1996},
ABSTRACT = {This paper shows that finding the row minima (maxima) in an $n \times n$ totally monotone matrix in the worst case requires any algorithm to make $3n-5$ comparisons or $4n -5$ matrix accesses. In contrast, the so-called SMAWK algorithm of Aggarwal {\em et al.\/} finds the row minima in no more than $5n - 2 \lg n - 6$ comparisons.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Bradford, Phillip Gnassi
%A Reinert, Knut
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Lower bounds for row minima searching :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-A021-C
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1996
%P 12 p.
%X This paper shows that finding the row minima (maxima) in an
$n \times n$ totally monotone matrix in the worst case requires
any algorithm to make $3n-5$ comparisons or $4n -5$ matrix accesses.
In contrast, the so-called SMAWK algorithm of Aggarwal {\em et al.\/}
finds the row minima in no more than $5n - 2 \lg n - 6$ comparisons.
%B Research Report / Max-Planck-Institut für Informatik
Rotations of periodic strings and short superstrings
D. Breslauer, T. Jiang and Z. Jiang
Technical Report, 1996
D. Breslauer, T. Jiang and Z. Jiang
Technical Report, 1996
Abstract
This paper presents two simple approximation algorithms for the shortest superstring problem, with approximation ratios $2 {2\over 3}$ ($\approx 2.67$) and $2 {25\over 42}$ ($\approx 2.596$), improving the best previously published $2 {3\over 4}$ approximation.
The framework of our improved algorithms is similar to that of previous algorithms in the sense that they construct a superstring by computing some optimal cycle covers on the distance graph of the given strings, and then break and merge the cycles to finally obtain
a Hamiltonian path, but we make use of new bounds on the overlap between two strings.
We prove that for each periodic semi-infinite string $\alpha = a_1 a_2 \cdots$ of period $q$, there exists an integer $k$, such that for {\em any} (finite) string $s$ of period $p$ which is {\em inequivalent} to $\alpha$, the overlap between $s$ and the {\em rotation}
$\alpha[k] = a_k a_{k+1} \cdots$ is at most $p+{1\over 2}q$.
Moreover, if $p \leq q$, then the overlap between $s$ and $\alpha[k]$ is not larger than ${2\over 3}(p+q)$. In the previous shortest superstring algorithms $p+q$ was used as the standard bound on overlap between two strings with periods $p$ and $q$.
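The overlap quantity these bounds concern is the length of the longest suffix of one string that is also a prefix of another. A naive Python sketch (the function name is ours, not from the paper):

```python
def overlap(s, t):
    """Length of the longest suffix of s that is also a prefix of t.
    Naive O(|s| * |t|) check, trying lengths from longest to shortest;
    illustrative only."""
    for k in range(min(len(s), len(t)), 0, -1):
        if s.endswith(t[:k]):
            return k
    return 0

print(overlap("abcab", "cabde"))  # 3: suffix "cab" matches prefix "cab"
```

Superstring algorithms merge strings by their maximum overlaps; the improved bounds above sharpen how large this overlap can be for strings of periods $p$ and $q$.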
Export
BibTeX
@techreport{BreslauerJiangZhigen97,
TITLE = {Rotations of periodic strings and short superstrings},
AUTHOR = {Breslauer, Dany and Jiang, Tao and Jiang, Zhigen},
LANGUAGE = {eng},
NUMBER = {MPI-I-1996-1-019},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1996},
DATE = {1996},
ABSTRACT = {This paper presents two simple approximation algorithms for the shortest superstring problem, with approximation ratios $2 {2\over 3}$ ($\approx 2.67$) and $2 {25\over 42}$ ($\approx 2.596$), improving the best previously published $2 {3\over 4}$ approximation. The framework of our improved algorithms is similar to that of previous algorithms in the sense that they construct a superstring by computing some optimal cycle covers on the distance graph of the given strings, and then break and merge the cycles to finally obtain a Hamiltonian path, but we make use of new bounds on the overlap between two strings. We prove that for each periodic semi-infinite string $\alpha = a_1 a_2 \cdots$ of period $q$, there exists an integer $k$, such that for {\em any} (finite) string $s$ of period $p$ which is {\em inequivalent} to $\alpha$, the overlap between $s$ and the {\em rotation} $\alpha[k] = a_k a_{k+1} \cdots$ is at most $p+{1\over 2}q$. Moreover, if $p \leq q$, then the overlap between $s$ and $\alpha[k]$ is not larger than ${2\over 3}(p+q)$. In the previous shortest superstring algorithms $p+q$ was used as the standard bound on overlap between two strings with periods $p$ and $q$.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Breslauer, Dany
%A Jiang, Tao
%A Jiang, Zhigen
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
External Organizations
External Organizations
%T Rotations of periodic strings and short superstrings :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-A17F-5
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1996
%P 13 p.
%X This paper presents two simple approximation algorithms for the shortest superstring problem, with approximation ratios $2 {2\over 3}$ ($\approx 2.67$) and $2 {25\over 42}$ ($\approx 2.596$), improving the best previously published $2 {3\over 4}$ approximation.
The framework of our improved algorithms is similar to that of previous algorithms in the sense that they construct a superstring by computing some optimal cycle covers on the distance graph of the given strings, and then break and merge the cycles to finally obtain
a Hamiltonian path, but we make use of new bounds on the overlap between two strings.
We prove that for each periodic semi-infinite string $\alpha = a_1 a_2 \cdots$ of period $q$, there exists an integer $k$, such that for {\em any} (finite) string $s$ of period $p$ which is {\em inequivalent} to $\alpha$, the overlap between $s$ and the {\em rotation}
$\alpha[k] = a_k a_{k+1} \cdots$ is at most $p+{1\over 2}q$.
Moreover, if $p \leq q$, then the overlap between $s$ and $\alpha[k]$ is not larger than ${2\over 3}(p+q)$. In the previous shortest superstring algorithms $p+q$ was used as the standard bound on overlap between two strings with periods $p$ and $q$.
%B Research Report / Max-Planck-Institut für Informatik
The randomized complexity of maintaining the minimum
G. S. Brodal, S. Chaudhuri and J. Radhakrishnan
Technical Report, 1996
G. S. Brodal, S. Chaudhuri and J. Radhakrishnan
Technical Report, 1996
Abstract
The complexity of maintaining a set under the operations {\sf Insert}, {\sf Delete} and {\sf FindMin} is considered. In the comparison model it is shown that any randomized algorithm with expected amortized cost $t$ comparisons per {\sf Insert} and {\sf Delete} has expected cost at least $n/(e2^{2t})-1$ comparisons for {\sf FindMin}. If {\sf FindMin} is replaced by a weaker operation, {\sf FindAny}, then it is shown that a randomized algorithm with constant expected cost per operation exists, but no deterministic algorithm does. Finally, a deterministic algorithm with constant amortized cost per operation for an offline version of the problem is given.
Export
BibTeX
@techreport{BrodalChaudhuriRadhakrishnan96,
TITLE = {The randomized complexity of maintaining the minimum},
AUTHOR = {Brodal, Gerth St{\o}lting and Chaudhuri, Shiva and Radhakrishnan, Jaikumar},
LANGUAGE = {eng},
NUMBER = {MPI-I-1996-1-014},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1996},
DATE = {1996},
ABSTRACT = {The complexity of maintaining a set under the operations {\sf Insert}, {\sf Delete} and {\sf FindMin} is considered. In the comparison model it is shown that any randomized algorithm with expected amortized cost $t$ comparisons per {\sf Insert} and {\sf Delete} has expected cost at least $n/(e2^{2t})-1$ comparisons for {\sf FindMin}. If {\sf FindMin} is replaced by a weaker operation, {\sf FindAny}, then it is shown that a randomized algorithm with constant expected cost per operation exists, but no deterministic algorithm does. Finally, a deterministic algorithm with constant amortized cost per operation for an offline version of the problem is given.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Brodal, Gerth Stølting
%A Chaudhuri, Shiva
%A Radhakrishnan, Jaikumar
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
External Organizations
%T The randomized complexity of maintaining the minimum :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-A18C-7
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1996
%P 12 p.
%X The complexity of maintaining a set under the operations {\sf Insert}, {\sf Delete} and {\sf FindMin} is considered. In the comparison model it is shown that any randomized algorithm with expected amortized cost $t$ comparisons per {\sf Insert} and {\sf Delete} has expected cost at least $n/(e2^{2t})-1$ comparisons for {\sf FindMin}. If {\sf FindMin} is replaced by a weaker operation, {\sf FindAny}, then it is shown that a randomized algorithm with constant expected cost per operation exists, but no deterministic algorithm does. Finally, a deterministic algorithm with constant amortized cost per operation for an offline version of the problem is given.
%B Research Report / Max-Planck-Institut für Informatik
The LEDA class real number
C. Burnikel, K. Mehlhorn and S. Schirra
Technical Report, 1996
C. Burnikel, K. Mehlhorn and S. Schirra
Technical Report, 1996
Abstract
We describe the implementation of the LEDA data type {\bf real}. Every integer is a real and reals are closed under the operations addition, subtraction, multiplication, division and square root.
The main features of the data type real are
\begin{itemize}
\item The user--interface is similar to that of the built--in data type double.
\item All comparison operators $\{>, \geq, <, \leq, =\}$ are {\em exact}.
In order to determine the sign of a real number $x$ the data type first computes a rational number $q$ such that $|x| \leq q$ implies $x = 0$ and then computes an approximation of $x$ of sufficient precision to decide the sign of $x$.
The user may assist the data type by providing a separation bound $q$.
\item The data type also allows one to evaluate real expressions with arbitrary precision. One may either set the mantissa length of the underlying floating point system and then evaluate the expression with that mantissa length or one may specify an error bound $q$. The data type then computes an approximation with absolute error at most $q$.
\end{itemize}
The implementation of the data type real is based on the LEDA data types {\bf integer} and {\bf bigfloat} which are the types of arbitrary precision integers and floating point numbers, respectively. The implementation takes various shortcuts for increased efficiency, e.g., a {\bf double} approximation of any real number together with an error bound is maintained and tests are first performed on these approximations.
A high precision computation is only started when the test on the {\bf double} approximation is inconclusive.
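The filtering strategy sketched in the last paragraph (test the cheap double approximation against its error bound, and fall back to exact arithmetic only when the test is inconclusive) can be illustrated in Python with `fractions.Fraction` standing in for LEDA's integer/bigfloat machinery. The function and its API are illustrative, not LEDA's actual interface.

```python
from fractions import Fraction

def sign_with_filter(approx, err, exact_expr):
    """Decide the sign of a real quantity: first test a cheap double
    approximation `approx` with absolute error bound `err`; only if the
    test is inconclusive (|approx| <= err) evaluate `exact_expr`, a
    zero-argument callable returning the exact value as a Fraction."""
    if approx > err:
        return 1
    if approx < -err:
        return -1
    x = exact_expr()            # expensive exact fallback
    return (x > 0) - (x < 0)

# 0.1 + 0.2 - 0.3 is exactly 0 as a real number, but the double
# approximation is a tiny nonzero value, so the cheap test is
# inconclusive and the exact evaluation decides the sign.
approx = 0.1 + 0.2 - 0.3
print(sign_with_filter(approx, 1e-15,
      lambda: Fraction(1, 10) + Fraction(2, 10) - Fraction(3, 10)))  # 0
```

The design mirrors the description above: the double test resolves most comparisons in constant time, and exact computation is reserved for the rare inconclusive cases.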
Export
BibTeX
@techreport{BurnikelMehlhornSchirra96,
TITLE = {The {LEDA} class real number},
AUTHOR = {Burnikel, Christoph and Mehlhorn, Kurt and Schirra, Stefan},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1996-1-001},
NUMBER = {MPI-I-1996-1-001},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1996},
DATE = {1996},
ABSTRACT = {We describe the implementation of the LEDA data type {\bf real}. Every integer is a real and reals are closed under the operations addition, subtraction, multiplication, division and square root. The main features of the data type real are \begin{itemize} \item The user--interface is similar to that of the built--in data type double. \item All comparison operators $\{>, \geq, <, \leq, =\}$ are {\em exact}. In order to determine the sign of a real number $x$ the data type first computes a rational number $q$ such that $|x| \leq q$ implies $x = 0$ and then computes an approximation of $x$ of sufficient precision to decide the sign of $x$. The user may assist the data type by providing a separation bound $q$. \item The data type also allows one to evaluate real expressions with arbitrary precision. One may either set the mantissa length of the underlying floating point system and then evaluate the expression with that mantissa length or one may specify an error bound $q$. The data type then computes an approximation with absolute error at most $q$. \end{itemize} The implementation of the data type real is based on the LEDA data types {\bf integer} and {\bf bigfloat} which are the types of arbitrary precision integers and floating point numbers, respectively. The implementation takes various shortcuts for increased efficiency, e.g., a {\bf double} approximation of any real number together with an error bound is maintained and tests are first performed on these approximations. A high precision computation is only started when the test on the {\bf double} approximation is inconclusive.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Burnikel, Christoph
%A Mehlhorn, Kurt
%A Schirra, Stefan
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T The LEDA class real number :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-A1AD-E
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1996-1-001
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1996
%P 52 p.
%X We describe the implementation of the LEDA data type {\bf real}. Every integer is a real and reals are closed under the operations addition, subtraction, multiplication, division and square root.
The main features of the data type real are
\begin{itemize}
\item The user--interface is similar to that of the built--in data type double.
\item All comparison operators $\{>, \geq, <, \leq, =\}$ are {\em exact}.
In order to determine the sign of a real number $x$ the data type first computes a rational number $q$ such that $|x| \leq q$ implies $x = 0$ and then computes an approximation of $x$ of sufficient precision to decide the sign of $x$.
The user may assist the data type by providing a separation bound $q$.
\item The data type also allows one to evaluate real expressions with arbitrary precision. One may either set the mantissa length of the underlying floating point system and then evaluate the expression with that mantissa length or one may specify an error bound $q$. The data type then computes an approximation with absolute error at most $q$.
\end{itemize}
The implementation of the data type real is based on the LEDA data types {\bf integer} and {\bf bigfloat} which are the types of arbitrary precision integers and floating point numbers, respectively. The implementation takes various shortcuts for increased efficiency, e.g., a {\bf double} approximation of any real number together with an error bound is maintained and tests are first performed on these approximations.
A high precision computation is only started when the test on the {\bf double} approximation is inconclusive.
%B Research Report
High-precision floating point numbers in LEDA
C. Burnikel and J. Könemann
Technical Report, 1996
C. Burnikel and J. Könemann
Technical Report, 1996
Export
BibTeX
@techreport{BurnikelKoenemann96,
TITLE = {High-precision floating point numbers in {LEDA}},
AUTHOR = {Burnikel, Christoph and K{\"o}nemann, Jochen},
LANGUAGE = {eng},
NUMBER = {MPI-I-1996-1-002},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1996},
DATE = {1996},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Burnikel, Christoph
%A Könemann, Jochen
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T High-precision floating point numbers in LEDA :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-A1AA-3
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1996
%P 47 p.
%B Research Report / Max-Planck-Institut für Informatik
A branch-and-cut approach to physical mapping with end-probes
T. Christof, M. Jünger, J. Kececioglou, P. Mutzel and G. Reinelt
Technical Report, 1996
T. Christof, M. Jünger, J. Kececioglou, P. Mutzel and G. Reinelt
Technical Report, 1996
Abstract
A fundamental problem in computational biology is the
construction of physical maps of chromosomes from hybridization
experiments between unique probes and clones of chromosome fragments
in the presence of error.
Alizadeh, Karp, Weisser and Zweig~\cite{AKWZ94} first considered a
maximum-likelihood model of the problem that is equivalent to finding
an ordering of the probes that minimizes a weighted sum of errors,
and developed several effective heuristics. We show that by exploiting
information about the end-probes of clones, this model can be formulated
as a weighted Betweenness Problem.
This affords the significant advantage of allowing the well-developed tools
of integer linear-programming and branch-and-cut algorithms to be brought
to bear on physical mapping, enabling us for the first time to solve
small mapping instances to optimality even in the presence of high error.
We also show that by combining the optimal solution of many small
overlapping Betweenness Problems, one can effectively screen errors
from larger instances, and solve the edited instance to optimality
as a Hamming-Distance Traveling Salesman Problem.
This suggests a new combined approach to physical map construction.
Export
BibTeX
@techreport{ChristofJungerKececioglouMutzelReinelt96,
TITLE = {A branch-and-cut approach to physical mapping with end-probes},
AUTHOR = {Christof, Thomas and J{\"u}nger, Michael and Kececioglou, John and Mutzel, Petra and Reinelt, Gerhard},
LANGUAGE = {eng},
NUMBER = {MPI-I-1996-1-027},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1996},
DATE = {1996},
ABSTRACT = {A fundamental problem in computational biology is the construction of physical maps of chromosomes from hybridization experiments between unique probes and clones of chromosome fragments in the presence of error. Alizadeh, Karp, Weisser and Zweig~\cite{AKWZ94} first considered a maximum-likelihood model of the problem that is equivalent to finding an ordering of the probes that minimizes a weighted sum of errors, and developed several effective heuristics. We show that by exploiting information about the end-probes of clones, this model can be formulated as a weighted Betweenness Problem. This affords the significant advantage of allowing the well-developed tools of integer linear-programming and branch-and-cut algorithms to be brought to bear on physical mapping, enabling us for the first time to solve small mapping instances to optimality even in the presence of high error. We also show that by combining the optimal solution of many small overlapping Betweenness Problems, one can effectively screen errors from larger instances, and solve the edited instance to optimality as a Hamming-Distance Traveling Salesman Problem. This suggests a new combined approach to physical map construction.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Christof, Thomas
%A Jünger, Michael
%A Kececioglou, John
%A Mutzel, Petra
%A Reinelt, Gerhard
%+ External Organizations
External Organizations
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
External Organizations
%T A branch-and-cut approach to physical mapping with end-probes :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-A03A-5
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1996
%P 10 p.
%X A fundamental problem in computational biology is the
construction of physical maps of chromosomes from hybridization
experiments between unique probes and clones of chromosome fragments
in the presence of error.
Alizadeh, Karp, Weisser and Zweig~\cite{AKWZ94} first considered a
maximum-likelihood model of the problem that is equivalent to finding
an ordering of the probes that minimizes a weighted sum of errors,
and developed several effective heuristics. We show that by exploiting
information about the end-probes of clones, this model can be formulated
as a weighted Betweenness Problem.
This affords the significant advantage of allowing the well-developed tools
of integer linear-programming and branch-and-cut algorithms to be brought
to bear on physical mapping, enabling us for the first time to solve
small mapping instances to optimality even in the presence of high error.
We also show that by combining the optimal solution of many small
overlapping Betweenness Problems, one can effectively screen errors
from larger instances, and solve the edited instance to optimality
as a Hamming-Distance Traveling Salesman Problem.
This suggests a new combined approach to physical map construction.
%B Research Report / Max-Planck-Institut für Informatik
On the complexity of approximating Euclidean traveling salesman tours and minimum spanning trees
G. Das, S. Kapoor and M. Smid
Technical Report, 1996
G. Das, S. Kapoor and M. Smid
Technical Report, 1996
Abstract
We consider the problems of computing $r$-approximate traveling salesman tours and $r$-approximate minimum spanning trees for a set of $n$ points in $\IR^d$, where $d \geq 1$ is a constant.
In the algebraic computation tree model, the complexities of both these problems are shown to be $\Theta(n \log n/r)$, for all $n$ and $r$ such that $r<n$ and $r$ is larger than some constant. In the more powerful model of computation that additionally uses the floor function and random access, both problems can be solved in $O(n)$ time if $r = \Theta( n^{1-1/d} )$.
Export
BibTeX
@techreport{DasKapoorSmid96,
TITLE = {On the complexity of approximating Euclidean traveling salesman tours and minimum spanning trees},
AUTHOR = {Das, Gautam and Kapoor, Sanjiv and Smid, Michiel},
LANGUAGE = {eng},
NUMBER = {MPI-I-1996-1-006},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1996},
DATE = {1996},
ABSTRACT = {We consider the problems of computing $r$-approximate traveling salesman tours and $r$-approximate minimum spanning trees for a set of $n$ points in $\IR^d$, where $d \geq 1$ is a constant. In the algebraic computation tree model, the complexities of both these problems are shown to be $\Theta(n \log n/r)$, for all $n$ and $r$ such that $r<n$ and $r$ is larger than some constant. In the more powerful model of computation that additionally uses the floor function and random access, both problems can be solved in $O(n)$ time if $r = \Theta( n^{1-1/d} )$.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Das, Gautam
%A Kapoor, Sanjiv
%A Smid, Michiel
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T On the complexity of approximating Euclidean traveling salesman tours and minimum spanning trees :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-A1A1-6
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1996
%P 14 p.
%X We consider the problems of computing $r$-approximate traveling salesman tours and $r$-approximate minimum spanning trees for a set of $n$ points in $\IR^d$, where $d \geq 1$ is a constant.
In the algebraic computation tree model, the complexities of both these problems are shown to be $\Theta(n \log n/r)$, for all $n$ and $r$ such that $r<n$ and $r$ is larger than some constant. In the more powerful model of computation that additionally uses the floor function and random access, both problems can be solved in $O(n)$ time if $r = \Theta( n^{1-1/d} )$.
%B Research Report / Max-Planck-Institut für Informatik
Exact ground states of two-dimensional $\pm J$ Ising Spin Glasses
C. De Simone, M. Diehl, M. Jünger, P. Mutzel, G. Reinelt and G. Rinaldi
Technical Report, 1996
Abstract
In this paper we study the problem of finding an exact ground state of a two-dimensional $\pm J$ Ising spin glass on a square lattice with nearest neighbor interactions and periodic boundary conditions when there is
a concentration $p$ of negative bonds, with $p$ ranging between $0.1$ and $0.9$. With our exact algorithm we can determine ground states of grids of sizes up to $50\times 50$ in a moderate amount of computation time (up to one hour each) for several values of $p$. For the ground state energy of an infinite spin glass system with $p=0.5$ we estimate $E_{0.5}^\infty = -1.4015 \pm0.0008$.
We report on extensive computational tests based on more than $22\,000$ experiments.
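The Hamiltonian being minimized is simple to state directly; a minimal sketch of the energy evaluation on a periodic square lattice (the toy instance and variable names are illustrative, not from the report, and finding the exact ground state requires the report's optimization algorithm, not this evaluation):

```python
import random

def energy(spins, Jh, Jv):
    """Energy H = -sum_{<ij>} J_ij s_i s_j of an L x L spin grid with
    periodic boundary conditions. Jh[i][j] couples (i,j)-(i,j+1) and
    Jv[i][j] couples (i,j)-(i+1,j); all indices wrap around."""
    L = len(spins)
    H = 0
    for i in range(L):
        for j in range(L):
            H -= Jh[i][j] * spins[i][j] * spins[i][(j + 1) % L]
            H -= Jv[i][j] * spins[i][j] * spins[(i + 1) % L][j]
    return H

# A small +-J instance with a concentration p = 0.5 of negative bonds.
random.seed(0)
L, p = 4, 0.5
Jh = [[1 if random.random() > p else -1 for _ in range(L)] for _ in range(L)]
Jv = [[1 if random.random() > p else -1 for _ in range(L)] for _ in range(L)]
spins = [[1] * L for _ in range(L)]      # the all-up configuration
print(energy(spins, Jh, Jv) / L**2)      # energy per spin of this configuration
```

With all bonds positive the all-up configuration is a ground state; with mixed signs no configuration can satisfy every bond, which is what makes the ground-state search hard.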
Export
BibTeX
@techreport{DeSimoneDiehlJuengerMutzelReineltRinaldi96a,
TITLE = {Exact ground states of two-dimensional $\pm J$ Ising Spin Glasses},
AUTHOR = {De Simone, C. and Diehl, M. and J{\"u}nger, Michael and Mutzel, Petra and Reinelt, Gerhard and Rinaldi, G.},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1996-1-004},
NUMBER = {MPI-I-1996-1-004},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1996},
DATE = {1996},
ABSTRACT = {In this paper we study the problem of finding an exact ground state of a two-dimensional $\pm J$ Ising spin glass on a square lattice with nearest neighbor interactions and periodic boundary conditions when there is a concentration $p$ of negative bonds, with $p$ ranging between $0.1$ and $0.9$. With our exact algorithm we can determine ground states of grids of sizes up to $50\times 50$ in a moderate amount of computation time (up to one hour each) for several values of $p$. For the ground state energy of an infinite spin glass system with $p=0.5$ we estimate $E_{0.5}^\infty = -1.4015 \pm0.0008$. We report on extensive computational tests based on more than $22\,000$ experiments.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A De Simone, C.
%A Diehl, M.
%A Jünger, Michael
%A Mutzel, Petra
%A Reinelt, Gerhard
%A Rinaldi, G.
%+ External Organizations
External Organizations
External Organizations
Algorithms and Complexity, MPI for Informatics, Max Planck Society
External Organizations
External Organizations
%T Exact ground states of two-dimensional $\pm J$ Ising Spin Glasses :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-A1A4-F
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1996-1-004
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1996
%P 10 p.
%X In this paper we study the problem of finding an exact ground state of a two-dimensional $\pm J$ Ising spin glass on a square lattice with nearest neighbor interactions and periodic boundary conditions when there is
a concentration $p$ of negative bonds, with $p$ ranging between $0.1$ and $0.9$. With our exact algorithm we can determine ground states of grids of sizes up to $50\times 50$ in a moderate amount of computation time (up to one hour each) for several values of $p$. For the ground state energy of an infinite spin glass system with $p=0.5$ we estimate $E_{0.5}^\infty = -1.4015 \pm0.0008$.
We report on extensive computational tests based on more than $22\,000$ experiments.
%B Research Report / Max-Planck-Institut für Informatik
More general parallel tree contraction: Register allocation and broadcasting in a tree
K. Diks and T. Hagerup
Technical Report, 1996
Abstract
We consider arithmetic expressions over operators
$+$, $-$, $*$, $/$, and $\sqrt{\ }$,
with integer operands. For an expression $E$, a separation bound
$sep(E)$ is a positive real number with the property that $E\neq 0$ implies
$|E| \geq sep(E)$. We propose a new separation bound that is easy to compute
and stronger than previous bounds.
Export
BibTeX
@techreport{DiksHagerup96,
TITLE = {More general parallel tree contraction: Register allocation and broadcasting in a tree},
AUTHOR = {Diks, Krzysztof and Hagerup, Torben},
LANGUAGE = {eng},
NUMBER = {MPI-I-1996-1-024},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1996},
DATE = {1996},
ABSTRACT = {We consider arithmetic expressions over operators $+$, $-$, $*$, $/$, and $\sqrt{\ }$, with integer operands. For an expression $E$, a separation bound $sep(E)$ is a positive real number with the property that $E\neq 0$ implies $|E| \geq sep(E)$. We propose a new separation bound that is easy to compute and stronger than previous bounds.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Diks, Krzysztof
%A Hagerup, Torben
%+ External Organizations
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T More general parallel tree contraction: Register allocation and broadcasting in a tree :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-A055-7
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1996
%P 24 p.
%X We consider arithmetic expressions over operators
$+$, $-$, $*$, $/$, and $\sqrt{\ }$,
with integer operands. For an expression $E$, a separation bound
$sep(E)$ is a positive real number with the property that $E\neq 0$ implies
$|E| \geq sep(E)$. We propose a new separation bound that is easy to compute
and stronger than previous bounds.
%B Research Report / Max-Planck-Institut für Informatik
Negative dependence through the FKG Inequality
D. P. Dubhashi, V. Priebe and D. Ranjan
Technical Report, 1996
Abstract
We investigate random variables arising in occupancy problems, and show the variables to be negatively associated, that is, negatively
dependent in a strong sense. Our proofs are based on the FKG correlation inequality, and they suggest a useful, general technique
for proving negative dependence among random variables. We also show that in the special case of two binary random variables, the notions of negative correlation and negative association coincide.
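Negative dependence shows up already in the simplest occupancy setting, balls thrown into bins: the counts of two fixed bins are negatively correlated because they compete for the same balls. A small simulation makes this concrete (illustrative only; the report proves the much stronger property of negative association):

```python
import random

def occupancy_counts(balls, bins, trials, seed=0):
    """Throw `balls` balls into `bins` bins uniformly at random,
    `trials` times; return the per-trial counts of bins 0 and 1."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(trials):
        counts = [0] * bins
        for _ in range(balls):
            counts[rng.randrange(bins)] += 1
        pairs.append((counts[0], counts[1]))
    return pairs

def covariance(pairs):
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    return sum((x - mx) * (y - my) for x, y in pairs) / n

pairs = occupancy_counts(balls=10, bins=3, trials=20000)
print(covariance(pairs))  # negative: one bin's gain is another's loss
```

For multinomial counts the exact covariance is $-m p_i p_j$, here $-10/9$; the empirical value fluctuates around that.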
Export
BibTeX
@techreport{DubhashiPriebeRanjan96,
TITLE = {Negative dependence through the {FKG} Inequality},
AUTHOR = {Dubhashi, Devdatt P. and Priebe, Volker and Ranjan, Desh},
LANGUAGE = {eng},
NUMBER = {MPI-I-1996-1-020},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1996},
DATE = {1996},
ABSTRACT = {We investigate random variables arising in occupancy problems, and show the variables to be negatively associated, that is, negatively dependent in a strong sense. Our proofs are based on the FKG correlation inequality, and they suggest a useful, general technique for proving negative dependence among random variables. We also show that in the special case of two binary random variables, the notions of negative correlation and negative association coincide.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Dubhashi, Devdatt P.
%A Priebe, Volker
%A Ranjan, Desh
%+ External Organizations
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Negative dependence through the FKG Inequality :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-A157-E
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1996
%P 10 p.
%X We investigate random variables arising in occupancy problems, and show the variables to be negatively associated, that is, negatively
dependent in a strong sense. Our proofs are based on the FKG correlation inequality, and they suggest a useful, general technique
for proving negative dependence among random variables. We also show that in the special case of two binary random variables, the notions of negative correlation and negative association coincide.
%B Research Report / Max-Planck-Institut für Informatik
Runtime prediction of real programs on real machines
U. Finkler and K. Mehlhorn
Technical Report, 1996
Abstract
Algorithms are more and more made available as part of libraries or tool
kits. For a user of such a library statements of asymptotic
running times are almost meaningless, as there is no way to estimate the
constants involved. To choose the right algorithm for the targeted problem
size and the available hardware, knowledge about these constants is
important.
Methods to determine the constants based on regression analysis or operation
counting are not practicable in the general case due to inaccuracy and costs
respectively.
We present a new general method to determine the implementation and hardware
specific running time constants for combinatorial
algorithms. This method requires no changes of the implementation
of the investigated algorithm and is
applicable to a wide range of programming languages. Only some additional
code is necessary.
The determined constants
are correct within a constant factor which depends only on the
hardware platform. As an example the constants of an implementation
of a hierarchy of algorithms and data structures are determined.
The hierarchy consists of an algorithm for the
maximum weighted bipartite matching problem (MWBM), Dijkstra's algorithm,
a Fibonacci heap and a graph representation based on adjacency lists.
The deviations in the predicted operation
frequencies are at most 50 \% on the tested hardware platforms.
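The core idea, estimating a machine- and implementation-specific constant by timing runs and dividing by the asymptotic term, can be sketched in a few lines (the choice of sorting as the measured algorithm is purely illustrative and not from the report):

```python
import time
from math import log2

def estimate_constant(f, complexity, sizes):
    """Estimate the constant c in T(n) ~ c * complexity(n) by timing f
    on several input sizes and averaging measured_time / complexity(n)."""
    ratios = []
    for n in sizes:
        data = list(range(n, 0, -1))          # a reverse-sorted input
        t0 = time.perf_counter()
        f(data)
        ratios.append((time.perf_counter() - t0) / complexity(n))
    return sum(ratios) / len(ratios)

# Example: sorting is Theta(n log n); c converts "n log n" into seconds.
c = estimate_constant(sorted, lambda n: n * log2(n), [20000, 40000, 80000])
print(f"T(n) ~ {c:.2e} * n log n seconds")

# The constant then predicts running times for larger inputs:
n = 1_000_000
print(f"predicted T({n}) ~ {c * n * log2(n):.3f} s")
```

The report's method is more refined (it needs no changes to the measured implementation and bounds the error by a hardware-dependent factor), but the prediction step has this shape.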
Export
BibTeX
@techreport{FinklerMehlhorn96,
TITLE = {Runtime prediction of real programs on real machines},
AUTHOR = {Finkler, Ulrich and Mehlhorn, Kurt},
LANGUAGE = {eng},
NUMBER = {MPI-I-1996-1-032},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1996},
DATE = {1996},
ABSTRACT = {Algorithms are more and more made available as part of libraries or tool kits. For a user of such a library, statements of asymptotic running times are almost meaningless, as there is no way to estimate the constants involved. To choose the right algorithm for the targeted problem size and the available hardware, knowledge about these constants is important. Methods to determine the constants based on regression analysis or operation counting are not practicable in the general case due to inaccuracy and costs respectively. We present a new general method to determine the implementation and hardware specific running time constants for combinatorial algorithms. This method requires no changes of the implementation of the investigated algorithm and is applicable to a wide range of programming languages. Only some additional code is necessary. The determined constants are correct within a constant factor which depends only on the hardware platform. As an example the constants of an implementation of a hierarchy of algorithms and data structures are determined. The hierarchy consists of an algorithm for the maximum weighted bipartite matching problem (MWBM), Dijkstra's algorithm, a Fibonacci heap and a graph representation based on adjacency lists. The deviations in the predicted operation frequencies are at most 50 \% on the tested hardware platforms.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Finkler, Ulrich
%A Mehlhorn, Kurt
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Runtime prediction of real programs on real machines :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-A40D-D
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1996
%P 10 p.
%X Algorithms are more and more made available as part of libraries or tool
kits. For a user of such a library statements of asymptotic
running times are almost meaningless, as there is no way to estimate the
constants involved. To choose the right algorithm for the targeted problem
size and the available hardware, knowledge about these constants is
important.
Methods to determine the constants based on regression analysis or operation
counting are not practicable in the general case due to inaccuracy and costs
respectively.
We present a new general method to determine the implementation and hardware
specific running time constants for combinatorial
algorithms. This method requires no changes of the implementation
of the investigated algorithm and is
applicable to a wide range of programming languages. Only some additional
code is necessary.
The determined constants
are correct within a constant factor which depends only on the
hardware platform. As an example the constants of an implementation
of a hierarchy of algorithms and data structures are determined.
The hierarchy consists of an algorithm for the
maximum weighted bipartite matching problem (MWBM), Dijkstra's algorithm,
a Fibonacci heap and a graph representation based on adjacency lists.
The deviations in the predicted operation
frequencies are at most 50 \% on the tested hardware platforms.
%B Research Report / Max-Planck-Institut für Informatik
Generalized $k$-Center Problems
N. Garg, S. Chaudhuri and R. Ravi
Technical Report, 1996
Abstract
The $k$-center problem with triangle inequality is that of placing $k$ center nodes in a weighted undirected graph in which the edge weights obey the triangle inequality, so that the maximum distance of any node to its nearest center is minimized. In this paper, we consider a generalization of this problem where, given a number $p$, we wish to place $k$ centers so as to minimize the maximum distance of any node to its $p$-th closest center. We consider three different versions of this reliable $k$-center problem depending on which of the nodes can serve as centers and non-centers and derive best possible approximation algorithms for all three versions.
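For the basic case $p = 1$, the classical farthest-point heuristic of Gonzalez already gives a factor-2 approximation under the triangle inequality; it is a useful baseline for the generalized versions studied here (this sketch is a standard textbook algorithm, not one of the report's):

```python
def greedy_k_center(dist, k):
    """Gonzalez's farthest-point heuristic: a 2-approximation for the
    basic k-center problem (p = 1) under the triangle inequality.
    `dist` is a symmetric n x n distance matrix."""
    n = len(dist)
    centers = [0]                    # arbitrary first center
    d = dist[0][:]                   # d[v] = distance of v to nearest center
    for _ in range(k - 1):
        v = max(range(n), key=lambda i: d[i])       # farthest node
        centers.append(v)
        d = [min(d[i], dist[v][i]) for i in range(n)]
    return centers, max(d)           # chosen centers and covering radius

# Four points on a line at 0, 1, 10, 11; two centers cover with radius 1.
pts = [0, 1, 10, 11]
dist = [[abs(a - b) for b in pts] for a in pts]
print(greedy_k_center(dist, 2))  # ([0, 3], 1)
```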
Export
BibTeX
@techreport{GargChaudhuriRavi96,
TITLE = {Generalized $k$-Center Problems},
AUTHOR = {Garg, Naveen and Chaudhuri, Shiva and Ravi, R.},
LANGUAGE = {eng},
NUMBER = {MPI-I-1996-1-021},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1996},
DATE = {1996},
ABSTRACT = {The $k$-center problem with triangle inequality is that of placing $k$ center nodes in a weighted undirected graph in which the edge weights obey the triangle inequality, so that the maximum distance of any node to its nearest center is minimized. In this paper, we consider a generalization of this problem where, given a number $p$, we wish to place $k$ centers so as to minimize the maximum distance of any node to its $p$-th closest center. We consider three different versions of this reliable $k$-center problem depending on which of the nodes can serve as centers and non-centers and derive best possible approximation algorithms for all three versions.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Garg, Naveen
%A Chaudhuri, Shiva
%A Ravi, R.
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
External Organizations
%T Generalized $k$-Center Problems :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-A121-4
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1996
%P 9 p.
%X The $k$-center problem with triangle inequality is that of placing $k$ center nodes in a weighted undirected graph in which the edge weights obey the triangle inequality, so that the maximum distance of any node to its nearest center is minimized. In this paper, we consider a generalization of this problem where, given a number $p$, we wish to place $k$ centers so as to minimize the maximum distance of any node to its $p$-th closest center. We consider three different versions of this reliable $k$-center problem depending on which of the nodes can serve as centers and non-centers and derive best possible approximation algorithms for all three versions.
%B Research Report / Max-Planck-Institut für Informatik
Distributed list coloring: how to dynamically allocate frequencies to mobile base stations
N. Garg, M. Papatriantafilou and P. Tsigas
Technical Report, 1996
Abstract
To avoid signal interference in mobile communication it is necessary that the channels used by base stations for broadcast communication within their cells are chosen so that the same channel is never concurrently used by two neighboring stations. We model this channel allocation problem as a {\em generalized list coloring problem} and we provide two distributed solutions, which are also able to cope with crash failures, by limiting the size of the network affected by a faulty station in terms of the distance from that station.
Our first solution uses a powerful synchronization mechanism to achieve a response time that depends only on $\Delta$, the maximum degree of the signal interference graph, and a failure locality of 4.
Our second solution is a simple randomized solution in which each node can expect to pick $f/4\Delta$ colors where $f$ is the size of the list at the node; the response time of this solution is a constant and the failure locality 1.
Besides being efficient (their complexity measures involve only small constants), the protocols presented in this work are simple and easy to apply in practice, provided the necessary distributed infrastructure exists in the networks in use.
Export
BibTeX
@techreport{GargPapatriantafilouTsigas96,
TITLE = {Distributed list coloring: how to dynamically allocate frequencies to mobile base stations},
AUTHOR = {Garg, Naveen and Papatriantafilou, Marina and Tsigas, Philippas},
LANGUAGE = {eng},
NUMBER = {MPI-I-1996-1-010},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1996},
DATE = {1996},
ABSTRACT = {To avoid signal interference in mobile communication it is necessary that the channels used by base stations for broadcast communication within their cells are chosen so that the same channel is never concurrently used by two neighboring stations. We model this channel allocation problem as a {\em generalized list coloring problem} and we provide two distributed solutions, which are also able to cope with crash failures, by limiting the size of the network affected by a faulty station in terms of the distance from that station. Our first solution uses a powerful synchronization mechanism to achieve a response time that depends only on $\Delta$, the maximum degree of the signal interference graph, and a failure locality of 4. Our second solution is a simple randomized solution in which each node can expect to pick $f/4\Delta$ colors where $f$ is the size of the list at the node; the response time of this solution is a constant and the failure locality 1. Besides being efficient (their complexity measures involve only small constants), the protocols presented in this work are simple and easy to apply in practice, provided the existence of distributed infrastructure in networks that are in use.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Garg, Naveen
%A Papatriantafilou, Marina
%A Tsigas, Philippas
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Distributed list coloring: how to dynamically allocate frequencies to mobile base stations :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-A198-B
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1996
%P 15 p.
%X To avoid signal interference in mobile communication it is necessary that the channels used by base stations for broadcast communication within their cells are chosen so that the same channel is never concurrently used by two neighboring stations. We model this channel allocation problem as a {\em generalized list coloring problem} and we provide two distributed solutions, which are also able to cope with crash failures, by limiting the size of the network affected by a faulty station in terms of the distance from that station.
Our first solution uses a powerful synchronization mechanism to achieve a response time that depends only on $\Delta$, the maximum degree of the signal interference graph, and a failure locality of 4.
Our second solution is a simple randomized solution in which each node can expect to pick $f/4\Delta$ colors where $f$ is the size of the list at the node; the response time of this solution is a constant and the failure locality 1.
Besides being efficient (their complexity measures involve only small constants), the protocols presented in this work are simple and easy to apply in practice, provided the necessary distributed infrastructure exists in the networks in use.
%B Research Report / Max-Planck-Institut für Informatik
On the complexity of computing evolutionary trees
L. Gasieniec, J. Jansson, A. Lingas and A. Östlin
Technical Report, 1996
Abstract
In this paper we study a few
important tree optimization problems with
applications to computational biology.
These problems ask for trees that are consistent with as large a
part of the given data as possible.
We show that the maximum homeomorphic agreement
subtree problem cannot be approximated within a factor of
$N^{\epsilon}$, where $N$ is the input size, for any $0 \leq \epsilon
< \frac{1}{18}$ in polynomial time, unless P=NP. On the other hand,
we present an $O(N\log N)$-time heuristic for the restriction of this
problem to instances with $O(1)$ trees of height $O(1)$,
yielding solutions within a constant factor of the optimum.
We prove that the maximum inferred consensus tree
problem is NP-complete and we provide a simple fast heuristic
for it, yielding solutions within one third of the optimum.
We also present a more specialized polynomial-time heuristic
for the maximum inferred local consensus tree problem.
Export
BibTeX
@techreport{GasieniecJanssonLingasOstlin96,
TITLE = {On the complexity of computing evolutionary trees},
AUTHOR = {Gasieniec, Leszek and Jansson, Jesper and Lingas, Andrzej and {\"O}stlin, Anna},
LANGUAGE = {eng},
NUMBER = {MPI-I-1996-1-031},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1996},
DATE = {1996},
ABSTRACT = {In this paper we study a few important tree optimization problems with applications to computational biology. These problems ask for trees that are consistent with as large a part of the given data as possible. We show that the maximum homeomorphic agreement subtree problem cannot be approximated within a factor of $N^{\epsilon}$, where $N$ is the input size, for any $0 \leq \epsilon < \frac{1}{18}$ in polynomial time, unless P=NP. On the other hand, we present an $O(N\log N)$-time heuristic for the restriction of this problem to instances with $O(1)$ trees of height $O(1)$, yielding solutions within a constant factor of the optimum. We prove that the maximum inferred consensus tree problem is NP-complete and we provide a simple fast heuristic for it, yielding solutions within one third of the optimum. We also present a more specialized polynomial-time heuristic for the maximum inferred local consensus tree problem.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Gasieniec, Leszek
%A Jansson, Jesper
%A Lingas, Andrzej
%A Östlin, Anna
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
External Organizations
External Organizations
External Organizations
%T On the complexity of computing evolutionary trees :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-A01E-5
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1996
%P 14 p.
%X In this paper we study a few
important tree optimization problems with
applications to computational biology.
These problems ask for trees that are consistent with as large a
part of the given data as possible.
We show that the maximum homeomorphic agreement
subtree problem cannot be approximated within a factor of
$N^{\epsilon}$, where $N$ is the input size, for any $0 \leq \epsilon
< \frac{1}{18}$ in polynomial time, unless P=NP. On the other hand,
we present an $O(N\log N)$-time heuristic for the restriction of this
problem to instances with $O(1)$ trees of height $O(1)$,
yielding solutions within a constant factor of the optimum.
We prove that the maximum inferred consensus tree
problem is NP-complete and we provide a simple fast heuristic
for it, yielding solutions within one third of the optimum.
We also present a more specialized polynomial-time heuristic
for the maximum inferred local consensus tree problem.
%B Research Report / Max-Planck-Institut für Informatik
External inverse pattern matching
L. Gasieniec, P. Indyk and P. Krysta
Technical Report, 1996
Abstract
We consider the {\sl external inverse pattern matching} problem:
given a text $t$ of length $n$ over an ordered alphabet $\Sigma$
with $|\Sigma|=\sigma$, and a number $m\le n$,
find a pattern $p\in \Sigma^m$ that
is not a subword of $t$ and that maximizes the sum of Hamming
distances between $p$ and all subwords of $t$ of length $m$.
We present an optimal $O(n\log\sigma)$-time algorithm for the external
inverse pattern matching problem, which substantially improves on
the only previously known polynomial, $O(nm\log\sigma)$-time, algorithm,
introduced by Amir, Apostolico and Lewenstein.
Moreover, we discuss a fast parallel implementation of our algorithm on the
CREW PRAM model.
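Ignoring the "not a subword" constraint, the maximization decomposes by pattern position: position j of the pattern is compared against the text characters t[j], t[j+1], ..., t[j+n-m], so the least frequent character in that slice maximizes the mismatches. A naive O(nm)-time sketch of just this step (a brute-force illustration, not the report's O(n log sigma) algorithm, and it omits the subword constraint):

```python
from collections import Counter

def inverse_pattern_naive(t, m, alphabet):
    """Build a pattern p of length m maximizing the total Hamming
    distance to all length-m subwords of t, position by position.
    Note: unlike the report's algorithm, this sketch does NOT enforce
    that p avoids being a subword of t."""
    n = len(t)
    p = []
    for j in range(m):
        # Characters of t aligned with pattern position j over all windows.
        counts = Counter(t[w + j] for w in range(n - m + 1))
        p.append(min(alphabet, key=lambda c: counts[c]))
    return "".join(p)

print(inverse_pattern_naive("abaab", 2, "ab"))
```

Because the objective is a sum of independent per-position terms, this greedy choice is exactly optimal for the unconstrained maximization; the difficulty the report addresses is achieving O(n log sigma) time while also excluding subwords of t.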
Export
BibTeX
@techreport{GasieniecIndykKrysta96,
TITLE = {External inverse pattern matching},
AUTHOR = {Gasieniec, Leszek and Indyk, Piotr and Krysta, Piotr},
LANGUAGE = {eng},
NUMBER = {MPI-I-1996-1-030},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1996},
DATE = {1996},
ABSTRACT = {We consider the {\sl external inverse pattern matching} problem: given a text $t$ of length $n$ over an ordered alphabet $\Sigma$ with $|\Sigma|=\sigma$, and a number $m\le n$, find a pattern $p\in \Sigma^m$ that is not a subword of $t$ and that maximizes the sum of Hamming distances between $p$ and all subwords of $t$ of length $m$. We present an optimal $O(n\log\sigma)$-time algorithm for the external inverse pattern matching problem, which substantially improves on the only previously known polynomial, $O(nm\log\sigma)$-time, algorithm, introduced by Amir, Apostolico and Lewenstein. Moreover, we discuss a fast parallel implementation of our algorithm on the CREW PRAM model.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Gasieniec, Leszek
%A Indyk, Piotr
%A Krysta, Piotr
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T External inverse pattern matching :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-A410-3
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1996
%P 12 p.
%X We consider the {\sl external inverse pattern matching} problem:
given a text $t$ of length $n$ over an ordered alphabet $\Sigma$
with $|\Sigma|=\sigma$, and a number $m\le n$,
find a pattern $p\in \Sigma^m$ that
is not a subword of $t$ and that maximizes the sum of Hamming
distances between $p$ and all subwords of $t$ of length $m$.
We present an optimal $O(n\log\sigma)$-time algorithm for the external
inverse pattern matching problem, which substantially improves on
the only previously known polynomial, $O(nm\log\sigma)$-time, algorithm,
introduced by Amir, Apostolico and Lewenstein.
Moreover, we discuss a fast parallel implementation of our algorithm on the
CREW PRAM model.
%B Research Report / Max-Planck-Institut für Informatik
Discovering all most specific sentences by randomized algorithms
D. Gunopulos, H. Mannila and S. Saluja
Technical Report, 1996
Abstract
Data mining can in many instances be viewed as the task of computing a
representation of a theory of a model or of a database. In this paper
we present a randomized algorithm that can be used to compute the
representation of a theory in terms of the most specific sentences of
that theory. In addition to randomization, the algorithm uses a
generalization of the concept of hypergraph transversals. We apply
the general algorithm in two ways, for the problem of discovering
maximal frequent sets in 0/1 data, and for computing minimal keys in
relations. We present some empirical results on the performance of
these methods on real data. We also show some complexity theoretic
evidence of the hardness of these problems.
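As a concrete instance of "most specific sentences", the maximal frequent itemsets of a small 0/1 dataset can be enumerated by brute force (illustration of the notion only; the report's randomized, transversal-based algorithm exists precisely to avoid this exponential enumeration):

```python
from itertools import combinations

def maximal_frequent_sets(transactions, threshold):
    """Enumerate maximal frequent itemsets: itemsets contained in at
    least `threshold` transactions such that no frequent proper
    superset exists. Brute force over all itemsets."""
    items = sorted({i for t in transactions for i in t})
    frequent = [
        set(c)
        for r in range(1, len(items) + 1)
        for c in combinations(items, r)
        if sum(set(c) <= t for t in transactions) >= threshold
    ]
    # Keep only sets with no frequent proper superset.
    return [f for f in frequent if not any(f < g for g in frequent)]

data = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}]
print(maximal_frequent_sets(data, threshold=2))
# all three pairs are frequent, but {a, b, c} occurs only once
```

The maximal sets are exactly the "most specific" frequent sentences: every frequent itemset is a subset of one of them, so they form a compact representation of the whole theory.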
Export
BibTeX
@techreport{GunopulosMannilaSaluja96,
TITLE = {Discovering all most specific sentences by randomized algorithms},
AUTHOR = {Gunopulos, Dimitrios and Mannila, Heikki and Saluja, Sanjeev},
LANGUAGE = {eng},
NUMBER = {MPI-I-1996-1-023},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1996},
DATE = {1996},
ABSTRACT = {Data mining can in many instances be viewed as the task of computing a representation of a theory of a model or of a database. In this paper we present a randomized algorithm that can be used to compute the representation of a theory in terms of the most specific sentences of that theory. In addition to randomization, the algorithm uses a generalization of the concept of hypergraph transversals. We apply the general algorithm in two ways, for the problem of discovering maximal frequent sets in 0/1 data, and for computing minimal keys in relations. We present some empirical results on the performance of these methods on real data. We also show some complexity theoretic evidence of the hardness of these problems.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Gunopulos, Dimitrios
%A Mannila, Heikki
%A Saluja, Sanjeev
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Discovering all most specific sentences by randomized algorithms :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-A109-B
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1996
%P 23 p.
%X Data mining can in many instances be viewed as the task of computing a
representation of a theory of a model or of a database. In this paper
we present a randomized algorithm that can be used to compute the
representation of a theory in terms of the most specific sentences of
that theory. In addition to randomization, the algorithm uses a
generalization of the concept of hypergraph transversals. We apply
the general algorithm in two ways, for the problem of discovering
maximal frequent sets in 0/1 data, and for computing minimal keys in
relations. We present some empirical results on the performance of
these methods on real data. We also show some complexity theoretic
evidence of the hardness of these problems.
%B Research Report / Max-Planck-Institut für Informatik
Efficient algorithms for counting and reporting pairwise intersections between convex polygons
P. Gupta, R. Janardan and M. Smid
Technical Report, 1996a
Export
BibTeX
@techreport{GuptaJanardanSmid96a,
TITLE = {Efficient algorithms for counting and reporting pairwise intersections between convex polygons},
AUTHOR = {Gupta, Prosenjit and Janardan, Ravi and Smid, Michiel},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1996-1-008},
NUMBER = {MPI-I-1996-1-008},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1996},
DATE = {1996},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Gupta, Prosenjit
%A Janardan, Ravi
%A Smid, Michiel
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
External Organizations
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Efficient algorithms for counting and reporting pairwise intersections between convex polygons :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-A19E-0
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1996-1-008
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1996
%P 11 p.
%B Research Report / Max-Planck-Institut für Informatik
A technique for adding range restrictions to generalized searching problems
P. Gupta, R. Janardan and M. Smid
Technical Report, 1996b
P. Gupta, R. Janardan and M. Smid
Technical Report, 1996b
Abstract
In a generalized searching problem, a set $S$ of $n$ colored
geometric objects has to be stored in a data structure, such
that for any given query object $q$, the distinct colors of
the objects of $S$ intersected by $q$ can be reported
efficiently. In this paper, a general technique is presented
for adding a range restriction to such a problem. The technique
is applied to the problem of querying a set of colored points
(resp.\ fat triangles) with a fat triangle (resp.\ point).
For both problems, a data structure is obtained having size
$O(n^{1+\epsilon})$ and query time $O((\log n)^2 + C)$.
Here, $C$ denotes the number of colors reported by the query,
and $\epsilon$ is an arbitrarily small positive constant.
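To make the query type in this abstract concrete, here is a minimal brute-force stand-in for generalized (colored) searching: given colored points and an axis-aligned query range, report the distinct colors intersected. The data and names are invented for illustration; the report's contribution is a structure answering such queries in $O((\log n)^2 + C)$ time rather than the linear scan below.

```python
# Brute-force colored range reporting: return the set of distinct colors
# of points falling inside an axis-aligned query rectangle.

def distinct_colors(points, query):
    """points: list of (x, y, color); query: (xmin, xmax, ymin, ymax)."""
    xmin, xmax, ymin, ymax = query
    return {c for x, y, c in points
            if xmin <= x <= xmax and ymin <= y <= ymax}

pts = [(1, 1, "red"), (2, 2, "red"), (3, 1, "blue"), (9, 9, "green")]
colors = distinct_colors(pts, (0, 4, 0, 4))
# Two red points fall in range but "red" is reported once: C counts colors,
# not points, which is what makes the problem "generalized".
```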
Export
BibTeX
@techreport{GuptaJanardanSmid96b,
TITLE = {A technique for adding range restrictions to generalized searching problems},
AUTHOR = {Gupta, Prosenjit and Janardan, Ravi and Smid, Michiel},
LANGUAGE = {eng},
NUMBER = {MPI-I-1996-1-017},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1996},
DATE = {1996},
ABSTRACT = {In a generalized searching problem, a set $S$ of $n$ colored geometric objects has to be stored in a data structure, such that for any given query object $q$, the distinct colors of the objects of $S$ intersected by $q$ can be reported efficiently. In this paper, a general technique is presented for adding a range restriction to such a problem. The technique is applied to the problem of querying a set of colored points (resp.\ fat triangles) with a fat triangle (resp.\ point). For both problems, a data structure is obtained having size $O(n^{1+\epsilon})$ and query time $O((\log n)^2 + C)$. Here, $C$ denotes the number of colors reported by the query, and $\epsilon$ is an arbitrarily small positive constant.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Gupta, Prosenjit
%A Janardan, Ravi
%A Smid, Michiel
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
External Organizations
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T A technique for adding range restrictions to generalized searching problems :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-A15E-F
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1996
%P 9 p.
%X In a generalized searching problem, a set $S$ of $n$ colored
geometric objects has to be stored in a data structure, such
that for any given query object $q$, the distinct colors of
the objects of $S$ intersected by $q$ can be reported
efficiently. In this paper, a general technique is presented
for adding a range restriction to such a problem. The technique
is applied to the problem of querying a set of colored points
(resp.\ fat triangles) with a fat triangle (resp.\ point).
For both problems, a data structure is obtained having size
$O(n^{1+\epsilon})$ and query time $O((\log n)^2 + C)$.
Here, $C$ denotes the number of colors reported by the query,
and $\epsilon$ is an arbitrarily small positive constant.
%B Research Report / Max-Planck-Institut für Informatik
Vorlesungsskript Komplexitätstheorie
T. Hagerup
Technical Report, 1996
T. Hagerup
Technical Report, 1996
Export
BibTeX
@techreport{MPI-I-96-1-005,
TITLE = {Vorlesungsskript Komplexit{\"a}tstheorie},
AUTHOR = {Hagerup, Torben},
LANGUAGE = {eng},
NUMBER = {MPI-I-96-1-005},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1996},
DATE = {1996},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Hagerup, Torben
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Vorlesungsskript Komplexitätstheorie :
%G eng
%U http://hdl.handle.net/21.11116/0000-0001-6AB6-B
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1996
%P 156 p.
%B Research Report / Max-Planck-Institut für Informatik
Common Syntax of the DFG-Schwerpunktprogramm "Deduktion"
R. Hähnle, M. Kerber and C. Weidenbach
Technical Report, 1996
R. Hähnle, M. Kerber and C. Weidenbach
Technical Report, 1996
Export
BibTeX
@techreport{HaehnleKerberEtAl96,
TITLE = {Common Syntax of the {DFG-Schwerpunktprogramm} ''Deduktion''},
AUTHOR = {H{\"a}hnle, Reiner and Kerber, Manfred and Weidenbach, Christoph},
LANGUAGE = {eng},
NUMBER = {10/96},
INSTITUTION = {Universit{\"a}t Karlsruhe},
ADDRESS = {Karlsruhe},
YEAR = {1996},
DATE = {1996},
}
Endnote
%0 Report
%A Hähnle, Reiner
%A Kerber, Manfred
%A Weidenbach, Christoph
%+ External Organizations
External Organizations
Automation of Logic, MPI for Informatics, Max Planck Society
%T Common Syntax of the DFG-Schwerpunktprogramm "Deduktion" :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-001A-1CDF-C
%Y Universität Karlsruhe
%C Karlsruhe
%D 1996
2-Layer straightline crossing minimization: performance of exact and heuristic algorithms
M. Jünger and P. Mutzel
Technical Report, 1996
M. Jünger and P. Mutzel
Technical Report, 1996
Abstract
We present algorithms for the two layer straightline crossing
minimization problem that are able to compute exact optima. Our
computational results lead us to the conclusion that there is no
need for heuristics if one layer is fixed, even though the problem
is NP-hard, and that for the general problem with two variable layers,
true optima can be computed for sparse instances in which the smaller
layer contains up to 15 nodes. For bigger instances, the iterated
barycenter method turns out to be the method of choice among several
popular heuristics whose performance we could assess by comparing the
results to optimum solutions.
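The iterated barycenter method singled out in this abstract builds on a simple one-sided step: with one layer fixed, each free-layer node is placed at the average position of its neighbours. A minimal sketch of that step (node names and adjacency invented for illustration):

```python
# One-sided barycenter step for 2-layer crossing minimization: the fixed
# layer keeps its order; free-layer nodes are sorted by the mean position
# of their neighbours in the fixed layer.

def barycenter_order(free_nodes, fixed_pos, adj):
    def bary(v):
        nbrs = adj[v]
        return sum(fixed_pos[u] for u in nbrs) / len(nbrs) if nbrs else 0.0
    return sorted(free_nodes, key=bary)

fixed_pos = {"a": 0, "b": 1, "c": 2}             # positions in the fixed layer
adj = {"x": ["c"], "y": ["a", "b"], "z": ["a"]}  # free-layer adjacency
order = barycenter_order(["x", "y", "z"], fixed_pos, adj)
# Barycenters: z -> 0.0, y -> 0.5, x -> 2.0, so z precedes y precedes x.
```

The "iterated" variant alternates this step between the two layers until the ordering stabilizes; the report's point is that comparing such heuristics against exact optima only became possible once exact optima could be computed at all.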
Export
BibTeX
@techreport{JungerMutzel96,
TITLE = {2-Layer straightline crossing minimization: performance of exact and heuristic algorithms},
AUTHOR = {J{\"u}nger, Michael and Mutzel, Petra},
LANGUAGE = {eng},
NUMBER = {MPI-I-1996-1-025},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1996},
DATE = {1996},
ABSTRACT = {We present algorithms for the two layer straightline crossing minimization problem that are able to compute exact optima. Our computational results lead us to the conclusion that there is no need for heuristics if one layer is fixed, even though the problem is NP-hard, and that for the general problem with two variable layers, true optima can be computed for sparse instances in which the smaller layer contains up to 15 nodes. For bigger instances, the iterated barycenter method turns out to be the method of choice among several popular heuristics whose performance we could assess by comparing the results to optimum solutions.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Jünger, Michael
%A Mutzel, Petra
%+ External Organizations
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T 2-Layer straightline crossing minimization: performance of exact and heuristic algorithms :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-A040-6
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1996
%P 14 p.
%X We present algorithms for the two layer straightline crossing
minimization problem that are able to compute exact optima. Our
computational results lead us to the conclusion that there is no
need for heuristics if one layer is fixed, even though the problem
is NP-hard, and that for the general problem with two variable layers,
true optima can be computed for sparse instances in which the smaller
layer contains up to 15 nodes. For bigger instances, the iterated
barycenter method turns out to be the method of choice among several
popular heuristics whose performance we could assess by comparing the
results to optimum solutions.
%B Research Report / Max-Planck-Institut für Informatik
Derandomizing semidefinite programming based approximation algorithms
S. Mahajan and R. Hariharan
Technical Report, 1996
S. Mahajan and R. Hariharan
Technical Report, 1996
Export
BibTeX
@techreport{MahajanRamesh96,
TITLE = {Derandomizing semidefinite programming based approximation algorithms},
AUTHOR = {Mahajan, Sanjeev and Hariharan, Ramesh},
LANGUAGE = {eng},
NUMBER = {MPI-I-1996-1-013},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1996},
DATE = {1996},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Mahajan, Sanjeev
%A Hariharan, Ramesh
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Derandomizing semidefinite programming based approximation algorithms :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-A18F-1
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1996
%P 22 p.
%B Research Report / Max-Planck-Institut für Informatik
The impact of timing on linearizability in counting networks
M. Mavronicolas, M. Papatriantafilou and P. Tsigas
Technical Report, 1996
M. Mavronicolas, M. Papatriantafilou and P. Tsigas
Technical Report, 1996
Abstract
{\em Counting networks} form a new class of distributed, low-contention data structures, made up of {\em balancers} and {\em wires,}
which are suitable for solving a variety of multiprocessor synchronization problems that can be expressed as counting problems.
A {\em linearizable} counting network guarantees that the order of the values it returns respects the real-time order they were requested.
Linearizability significantly raises the capabilities of the network, but at a possible price in network size or synchronization support.
In this work, we further pursue the systematic study of the impact of {\em timing} assumptions on linearizability for
counting networks, along the line of research recently initiated by Lynch~{\em et~al.} in [18].
We consider two basic {\em timing} models, the {instantaneous balancer} model, in which the transition of a token from an input to an output port of a balancer is modeled as an instantaneous event, and the {\em periodic balancer} model, where balancers send out tokens at a fixed rate. In both models, we assume lower and upper bounds on the delays incurred by wires connecting the balancers.
We present necessary and sufficient conditions for linearizability in these models, in the form of precise inequalities that involve not only parameters of the timing models, but also certain structural parameters of the counting network, which may be of more general interest.
Our results extend and strengthen previous impossibility and possibility results on linearizability in counting networks.
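The balancer, the basic building block named in this abstract, can be modelled in a few lines: it forwards incoming tokens alternately to its two output wires, so the output counts never differ by more than one. This toy model (class and method names invented here) ignores the timing assumptions that are the report's actual subject:

```python
# Toy model of a balancer: tokens are routed alternately to the top (0)
# and bottom (1) output wire, giving the "balancing" property that the
# two output counts differ by at most one.

class Balancer:
    def __init__(self):
        self.toggle = 0

    def pass_token(self):
        out = self.toggle
        self.toggle ^= 1
        return out

b = Balancer()
outputs = [b.pass_token() for _ in range(5)]
counts = (outputs.count(0), outputs.count(1))
# Five tokens split 3 / 2 across the two wires.
```

A counting network wires many such balancers together; linearizability then asks whether the values handed out respect the real-time order of requests, which is where the instantaneous and periodic timing models of the report come in.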
Export
BibTeX
@techreport{MavronicolasPapatriantafilouTsigas96,
TITLE = {The impact of timing on linearizability in counting networks},
AUTHOR = {Mavronicolas, Marios and Papatriantafilou, Marina and Tsigas, Philippas},
LANGUAGE = {eng},
NUMBER = {MPI-I-1996-1-011},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1996},
DATE = {1996},
ABSTRACT = {{\em Counting networks} form a new class of distributed, low-contention data structures, made up of {\em balancers} and {\em wires,} which are suitable for solving a variety of multiprocessor synchronization problems that can be expressed as counting problems. A {\em linearizable} counting network guarantees that the order of the values it returns respects the real-time order they were requested. Linearizability significantly raises the capabilities of the network, but at a possible price in network size or synchronization support. In this work, we further pursue the systematic study of the impact of {\em timing} assumptions on linearizability for counting networks, along the line of research recently initiated by Lynch~{\em et~al.} in [18]. We consider two basic {\em timing} models, the {instantaneous balancer} model, in which the transition of a token from an input to an output port of a balancer is modeled as an instantaneous event, and the {\em periodic balancer} model, where balancers send out tokens at a fixed rate. In both models, we assume lower and upper bounds on the delays incurred by wires connecting the balancers. We present necessary and sufficient conditions for linearizability in these models, in the form of precise inequalities that involve not only parameters of the timing models, but also certain structural parameters of the counting network, which may be of more general interest. Our results extend and strengthen previous impossibility and possibility results on linearizability in counting networks.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Mavronicolas, Marios
%A Papatriantafilou, Marina
%A Tsigas, Philippas
%+ External Organizations
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T The impact of timing on linearizability in counting networks :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-A195-2
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1996
%P 19 p.
%X {\em Counting networks} form a new class of distributed, low-contention data structures, made up of {\em balancers} and {\em wires,}
which are suitable for solving a variety of multiprocessor synchronization problems that can be expressed as counting problems.
A {\em linearizable} counting network guarantees that the order of the values it returns respects the real-time order they were requested.
Linearizability significantly raises the capabilities of the network, but at a possible price in network size or synchronization support.
In this work, we further pursue the systematic study of the impact of {\em timing} assumptions on linearizability for
counting networks, along the line of research recently initiated by Lynch~{\em et~al.} in [18].
We consider two basic {\em timing} models, the {instantaneous balancer} model, in which the transition of a token from an input to an output port of a balancer is modeled as an instantaneous event, and the {\em periodic balancer} model, where balancers send out tokens at a fixed rate. In both models, we assume lower and upper bounds on the delays incurred by wires connecting the balancers.
We present necessary and sufficient conditions for linearizability in these models, in the form of precise inequalities that involve not only parameters of the timing models, but also certain structural parameters of the counting network, which may be of more general interest.
Our results extend and strengthen previous impossibility and possibility results on linearizability in counting networks.
%B Research Report / Max-Planck-Institut für Informatik
A computational basis for higher-dimensional computational geometry
K. Mehlhorn, S. Näher, S. Schirra, M. Seel and C. Uhrig
Technical Report, 1996
K. Mehlhorn, S. Näher, S. Schirra, M. Seel and C. Uhrig
Technical Report, 1996
Abstract
We specify and implement a kernel for computational geometry in
arbitrary finite dimensional space. The kernel provides points,
vectors, directions, hyperplanes, segments, rays, lines, affine
transformations, and operations connecting these types. Points have
rational coordinates, hyperplanes have rational coefficients, and
analogous statements hold for the other types. We therefore call our
types \emph{rat\_point}, \emph{rat\_vector}, \emph{rat\_direction},
\emph{rat\_hyperplane}, \emph{rat\_segment}, \emph{rat\_ray} and
\emph{rat\_line}. All geometric primitives are \emph{exact}, i.e.,
they do not incur rounding error (because they are implemented using
rational arithmetic) and always produce the correct result. To this
end we provide types \emph{integer\_vector} and \emph{integer\_matrix}
which realize exact linear algebra over the integers.
The kernel is submitted to the CGAL-Consortium as a proposal for its
higher-dimensional geometry kernel and will become part of the LEDA
platform for combinatorial and geometric computing.
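The kernel's central promise, exact predicates over rational coordinates, can be mimicked with Python's `Fraction` as a stand-in for the report's C++ types: the orientation test below never suffers rounding error, which is exactly the guarantee the `rat_` types and the exact linear algebra layer provide.

```python
from fractions import Fraction

# Exact 2D orientation predicate over rational coordinates: no rounding
# error can occur, so the sign of the determinant is always correct.

def orientation(p, q, r):
    """Return 1 for a left turn p->q->r, -1 for a right turn, 0 if collinear."""
    det = (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
    return (det > 0) - (det < 0)

F = Fraction
# Three exactly collinear points with non-terminating binary expansions;
# floating point could misclassify them, exact arithmetic cannot.
p, q, r = (F(0), F(0)), (F(1, 3), F(1, 3)), (F(2, 3), F(2, 3))
result = orientation(p, q, r)
```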
Export
BibTeX
@techreport{MehlhornNaherSchirraSeelUhrig96,
TITLE = {A computational basis for higher-dimensional computational geometry},
AUTHOR = {Mehlhorn, Kurt and N{\"a}her, Stefan and Schirra, Stefan and Seel, Michael and Uhrig, Christian},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1996-1-016},
NUMBER = {MPI-I-1996-1-016},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1996},
DATE = {1996},
ABSTRACT = {We specify and implement a kernel for computational geometry in arbitrary finite dimensional space. The kernel provides points, vectors, directions, hyperplanes, segments, rays, lines, affine transformations, and operations connecting these types. Points have rational coordinates, hyperplanes have rational coefficients, and analogous statements hold for the other types. We therefore call our types \emph{rat\_point}, \emph{rat\_vector}, \emph{rat\_direction}, \emph{rat\_hyperplane}, \emph{rat\_segment}, \emph{rat\_ray} and \emph{rat\_line}. All geometric primitives are \emph{exact}, i.e., they do not incur rounding error (because they are implemented using rational arithmetic) and always produce the correct result. To this end we provide types \emph{integer\_vector} and \emph{integer\_matrix} which realize exact linear algebra over the integers. The kernel is submitted to the CGAL-Consortium as a proposal for its higher-dimensional geometry kernel and will become part of the LEDA platform for combinatorial and geometric computing.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Mehlhorn, Kurt
%A Näher, Stefan
%A Schirra, Stefan
%A Seel, Michael
%A Uhrig, Christian
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T A computational basis for higher-dimensional computational geometry :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-A163-1
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1996-1-016
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1996
%P 120 p.
%X We specify and implement a kernel for computational geometry in
arbitrary finite dimensional space. The kernel provides points,
vectors, directions, hyperplanes, segments, rays, lines, affine
transformations, and operations connecting these types. Points have
rational coordinates, hyperplanes have rational coefficients, and
analogous statements hold for the other types. We therefore call our
types \emph{rat\_point}, \emph{rat\_vector}, \emph{rat\_direction},
\emph{rat\_hyperplane}, \emph{rat\_segment}, \emph{rat\_ray} and
\emph{rat\_line}. All geometric primitives are \emph{exact}, i.e.,
they do not incur rounding error (because they are implemented using
rational arithmetic) and always produce the correct result. To this
end we provide types \emph{integer\_vector} and \emph{integer\_matrix}
which realize exact linear algebra over the integers.
The kernel is submitted to the CGAL-Consortium as a proposal for its
higher-dimensional geometry kernel and will become part of the LEDA
platform for combinatorial and geometric computing.
%B Research Report
The thickness of graphs: a survey
P. Mutzel, T. Odenthal and M. Scharbrodt
Technical Report, 1996
P. Mutzel, T. Odenthal and M. Scharbrodt
Technical Report, 1996
Abstract
We give a state-of-the-art survey of the thickness of a graph from both a theoretical and a practical point of view. After summarizing the relevant results concerning this topological invariant of a graph, we deal with practical computation of the thickness.
We present some modifications of a basic heuristic and investigate their usefulness for evaluating the thickness and determining a decomposition of a graph in planar subgraphs.
Export
BibTeX
@techreport{MutzelOdenthalScharbrodt96,
TITLE = {The thickness of graphs: a survey},
AUTHOR = {Mutzel, Petra and Odenthal, Thomas and Scharbrodt, Mark},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1996-1-009},
NUMBER = {MPI-I-1996-1-009},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1996},
DATE = {1996},
ABSTRACT = {We give a state-of-the-art survey of the thickness of a graph from both a theoretical and a practical point of view. After summarizing the relevant results concerning this topological invariant of a graph, we deal with practical computation of the thickness. We present some modifications of a basic heuristic and investigate their usefulness for evaluating the thickness and determining a decomposition of a graph in planar subgraphs.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Mutzel, Petra
%A Odenthal, Thomas
%A Scharbrodt, Mark
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
External Organizations
External Organizations
%T The thickness of graphs: a survey :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-A19B-5
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1996-1-009
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1996
%P 18 p.
%X We give a state-of-the-art survey of the thickness of a graph from both a theoretical and a practical point of view. After summarizing the relevant results concerning this topological invariant of a graph, we deal with practical computation of the thickness.
We present some modifications of a basic heuristic and investigate their usefulness for evaluating the thickness and determining a decomposition of a graph in planar subgraphs.
%B Research Report / Max-Planck-Institut für Informatik
A branch-and-cut algorithm for multiple sequence alignment
K. Reinert, H.-P. Lenhof, P. Mutzel, K. Mehlhorn and J. Kececioglou
Technical Report, 1996
K. Reinert, H.-P. Lenhof, P. Mutzel, K. Mehlhorn and J. Kececioglou
Technical Report, 1996
Abstract
Multiple sequence alignment is an important problem in computational biology.
We study the Maximum Trace formulation introduced by
Kececioglu~\cite{Kececioglu91}.
We first phrase the problem in terms of forbidden subgraphs,
which enables us to express Maximum Trace as an integer linear-programming
problem,
and then solve the integer linear program using methods from polyhedral
combinatorics.
The trace {\it polytope\/} is the convex hull of all feasible solutions
to the Maximum Trace problem;
for the case of two sequences,
we give a complete characterization of this polytope.
This yields a polynomial-time algorithm
for a general version of pairwise sequence alignment
that, perhaps surprisingly, does not use dynamic programming;
this yields, for instance, a non-dynamic-programming algorithm for
sequence comparison under the 0-1 metric,
which gives another answer to a long-open question in the area of string algorithms
\cite{PW93}.
For the multiple-sequence case,
we derive several classes of facet-defining inequalities
and show that for all but one class, the corresponding separation problem
can be solved in polynomial time.
This leads to a branch-and-cut algorithm for multiple sequence alignment,
and we report on our first computational experience.
It appears that a polyhedral approach to multiple sequence alignment
can solve instances that are beyond present dynamic-programming approaches.
Export
BibTeX
@techreport{ReinertLenhofMutzelMehlhornKececioglou96,
TITLE = {A branch-and-cut algorithm for multiple sequence alignment},
AUTHOR = {Reinert, Knut and Lenhof, Hans-Peter and Mutzel, Petra and Mehlhorn, Kurt and Kececioglou, John},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1996-1-028},
NUMBER = {MPI-I-1996-1-028},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1996},
DATE = {1996},
ABSTRACT = {Multiple sequence alignment is an important problem in computational biology. We study the Maximum Trace formulation introduced by Kececioglu~\cite{Kececioglu91}. We first phrase the problem in terms of forbidden subgraphs, which enables us to express Maximum Trace as an integer linear-programming problem, and then solve the integer linear program using methods from polyhedral combinatorics. The trace {\it polytope\/} is the convex hull of all feasible solutions to the Maximum Trace problem; for the case of two sequences, we give a complete characterization of this polytope. This yields a polynomial-time algorithm for a general version of pairwise sequence alignment that, perhaps surprisingly, does not use dynamic programming; this yields, for instance, a non-dynamic-programming algorithm for sequence comparison under the 0-1 metric, which gives another answer to a long-open question in the area of string algorithms \cite{PW93}. For the multiple-sequence case, we derive several classes of facet-defining inequalities and show that for all but one class, the corresponding separation problem can be solved in polynomial time. This leads to a branch-and-cut algorithm for multiple sequence alignment, and we report on our first computational experience. It appears that a polyhedral approach to multiple sequence alignment can solve instances that are beyond present dynamic-programming approaches.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Reinert, Knut
%A Lenhof, Hans-Peter
%A Mutzel, Petra
%A Mehlhorn, Kurt
%A Kececioglou, John
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T A branch-and-cut algorithm for multiple sequence alignment :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-A037-B
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1996-1-028
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1996
%P 15 p.
%X Multiple sequence alignment is an important problem in computational biology.
We study the Maximum Trace formulation introduced by
Kececioglu~\cite{Kececioglu91}.
We first phrase the problem in terms of forbidden subgraphs,
which enables us to express Maximum Trace as an integer linear-programming
problem,
and then solve the integer linear program using methods from polyhedral
combinatorics.
The trace {\it polytope\/} is the convex hull of all feasible solutions
to the Maximum Trace problem;
for the case of two sequences,
we give a complete characterization of this polytope.
This yields a polynomial-time algorithm
for a general version of pairwise sequence alignment
that, perhaps surprisingly, does not use dynamic programming;
this yields, for instance, a non-dynamic-programming algorithm for
sequence comparison under the 0-1 metric,
which gives another answer to a long-open question in the area of string algorithms
\cite{PW93}.
For the multiple-sequence case,
we derive several classes of facet-defining inequalities
and show that for all but one class, the corresponding separation problem
can be solved in polynomial time.
This leads to a branch-and-cut algorithm for multiple sequence alignment,
and we report on our first computational experience.
It appears that a polyhedral approach to multiple sequence alignment
can solve instances that are beyond present dynamic-programming approaches.
%B Research Report / Max-Planck-Institut für Informatik
Proximity in arrangements of algebraic sets
J. Rieger
Technical Report, 1996
J. Rieger
Technical Report, 1996
Abstract
Let $X$ be an arrangement of $n$ algebraic sets $X_i$ in $d$-space, where the $X_i$ are either parameterized or zero-sets of dimension $0\le m_i\le d-1$. We study a number of decompositions of $d$-space into connected regions in which the distance-squared function to $X$ has certain invariances. These decompositions can be used in the following proximity problems: given some point, find the $k$ nearest sets $X_i$ in the arrangement, find the nearest point in $X$ or (assuming that $X$ is compact) find the farthest point in $X$ and hence the smallest enclosing $(d-1)$-sphere. We give bounds on the complexity of the decompositions in terms of $n$, $d$, and the degrees and dimensions of the algebraic sets $X_i$.
Export
BibTeX
@techreport{Rieger93,
TITLE = {Proximity in arrangements of algebraic sets},
AUTHOR = {Rieger, Joachim},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1996-1-003},
NUMBER = {MPI-I-1996-1-003},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1996},
DATE = {1996},
ABSTRACT = {Let $X$ be an arrangement of $n$ algebraic sets $X_i$ in $d$-space, where the $X_i$ are either parameterized or zero-sets of dimension $0\le m_i\le d-1$. We study a number of decompositions of $d$-space into connected regions in which the distance-squared function to $X$ has certain invariances. These decompositions can be used in the following proximity problems: given some point, find the $k$ nearest sets $X_i$ in the arrangement, find the nearest point in $X$ or (assuming that $X$ is compact) find the farthest point in $X$ and hence the smallest enclosing $(d-1)$-sphere. We give bounds on the complexity of the decompositions in terms of $n$, $d$, and the degrees and dimensions of the algebraic sets $X_i$.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Rieger, Joachim
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Proximity in arrangements of algebraic sets :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-A1A7-9
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1996-1-003
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1996
%P 25 p.
%X Let $X$ be an arrangement of $n$ algebraic sets $X_i$ in $d$-space, where the $X_i$ are either parameterized or zero-sets of dimension $0\le m_i\le d-1$. We study a number of decompositions of $d$-space into connected regions in which the distance-squared function to $X$ has certain invariances. These decompositions can be used in the following proximity problems: given some point, find the $k$ nearest sets $X_i$ in the arrangement, find the nearest point in $X$ or (assuming that $X$ is compact) find the farthest point in $X$ and hence the smallest enclosing $(d-1)$-sphere. We give bounds on the complexity of the decompositions in terms of $n$, $d$, and the degrees and dimensions of the algebraic sets $X_i$.
%B Research Report / Max-Planck-Institut für Informatik
Optimal algorithms for some proximity problems on the Gaussian sphere with applications
S. Saluja and P. Gupta
Technical Report, 1996
S. Saluja and P. Gupta
Technical Report, 1996
Abstract
We consider some geometric problems on the unit sphere which arise in
$NC$-machining. Optimal linear time algorithms are given for these
problems using linear and quadratic programming in three dimensions.
Export
BibTeX
@techreport{SalujaGupta96,
TITLE = {Optimal algorithms for some proximity problems on the Gaussian sphere with applications},
AUTHOR = {Saluja, Sanjeev and Gupta, Prosenjit},
LANGUAGE = {eng},
NUMBER = {MPI-I-1996-1-022},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1996},
DATE = {1996},
ABSTRACT = {We consider some geometric problems on the unit sphere which arise in $NC$-machining. Optimal linear time algorithms are given for these problems using linear and quadratic programming in three dimensions.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Saluja, Sanjeev
%A Gupta, Prosenjit
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Optimal algorithms for some proximity problems on the Gaussian sphere with applications :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-A413-E
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1996
%P 8 p.
%X We consider some geometric problems on the unit sphere which arise in
$NC$-machining. Optimal linear time algorithms are given for these
problems using linear and quadratic programming in three dimensions.
%B Research Report / Max-Planck-Institut für Informatik
A runtime test of integer arithmetic and linear algebra in LEDA
M. Seel
Technical Report, 1996
M. Seel
Technical Report, 1996
Abstract
In this Research Report we want to clarify the current efficiency
of two LEDA software layers. We examine the runtime of the
LEDA big integer number type |integer| and of the linear algebra
classes |integer_matrix| and |integer_vector|.
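The kind of micro-benchmark the report performs can be sketched with Python's built-in arbitrary-precision integers standing in for LEDA's |integer| type, and a plain exact matrix-vector product standing in for |integer_matrix|/|integer_vector|. The function names and operand choices below are illustrative assumptions, not the report's actual test suite:

```python
import timeit

def bigint_mul_benchmark(bits, reps=100):
    """Average time to multiply two dense `bits`-bit integers."""
    a = (1 << bits) - 1          # all-ones bit pattern, a heavy operand
    b = (1 << bits) // 3
    return timeit.timeit(lambda: a * b, number=reps) / reps

def matvec(matrix, vector):
    """Exact integer matrix-vector product, the kind of primitive the
    report times for the LEDA linear algebra layer."""
    return [sum(m * v for m, v in zip(row, vector)) for row in matrix]

# Example: 3x3 integer matrix applied to a vector.
A = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]
x = [1, 0, -1]
print(matvec(A, x))   # -> [-2, -2, -3]
```

A real runtime test would sweep `bits` and the matrix dimension and tabulate the averages, as the report does for the two LEDA layers.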
Export
BibTeX
@techreport{Seel97,
TITLE = {A runtime test of integer arithmetic and linear algebra in {LEDA}},
AUTHOR = {Seel, Michael},
LANGUAGE = {eng},
NUMBER = {MPI-I-1996-1-033},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1996},
DATE = {1996},
ABSTRACT = {In this Research Report we want to clarify the current efficiency of two LEDA software layers. We examine the runtime of the LEDA big integer number type |integer| and of the linear algebra classes |integer_matrix| and |integer_vector|.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Seel, Michael
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T A runtime test of integer arithmetic and linear algebra in LEDA :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-A01B-B
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1996
%P 10 p.
%X In this Research Report we want to clarify the current efficiency
of two LEDA software layers. We examine the runtime of the
LEDA big integer number type |integer| and of the linear algebra
classes |integer_matrix| and |integer_vector|.
%B Research Report / Max-Planck-Institut für Informatik
Gossiping on meshes and tori
J. F. Sibeyn, P. S. Rao and B. H. H. Juurlink
Technical Report, 1996
J. F. Sibeyn, P. S. Rao and B. H. H. Juurlink
Technical Report, 1996
Abstract
Algorithms for performing gossiping on one- and higher dimensional
meshes are presented. As a routing model, we assume the
practically important worm-hole routing.
For one-dimensional arrays and rings, we give a novel lower bound
and an asymptotically optimal gossiping algorithm for all choices of
the parameters involved.
For two-dimensional meshes and tori, several simple algorithms
composed of one-dimensional phases are presented. For an important
range of packet and mesh sizes it gives clear improvements upon
previously developed algorithms. The algorithm is analyzed
theoretically, and the achieved improvements are also convincingly
demonstrated by simulations and by an implementation on the Paragon.
For example, on a Paragon with $81$ processors and messages of size
32 KB, relying on the built-in router requires $716$ milliseconds,
while our algorithm requires only $79$ milliseconds.
For higher dimensional meshes, we give algorithms which are based
on a generalized notion of a diagonal. These are analyzed
theoretically and by simulation.
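The one-dimensional case admits a minimal simulation sketch. The version below assumes a store-and-forward unidirectional ring rather than the worm-hole model analyzed in the report: each processor forwards the newest packet it holds to its right neighbour, so gossiping completes after $p-1$ rounds:

```python
def gossip_ring(p):
    """Simulate gossiping on a unidirectional ring of p processors.

    Each processor starts with its own packet; in every round each
    processor forwards the newest packet it received to its right
    neighbour.  After p - 1 rounds every node knows all p packets.
    """
    known = [{i} for i in range(p)]       # packets known at each node
    newest = list(range(p))               # packet each node forwards next
    rounds = 0
    while any(len(k) < p for k in known):
        incoming = [newest[(i - 1) % p] for i in range(p)]
        for i in range(p):
            known[i].add(incoming[i])
        newest = incoming
        rounds += 1
    return rounds

print(gossip_ring(8))   # -> 7
```

The report's lower bound and algorithms refine exactly this picture by accounting for packet size and the cost model of worm-hole routing.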
Export
BibTeX
@techreport{SibeynRaoJuurlink96,
TITLE = {Gossiping on meshes and tori},
AUTHOR = {Sibeyn, Jop Frederic and Rao, P. Srinivasa and Juurlink, Ben H. H.},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1996-1-018},
NUMBER = {MPI-I-1996-1-018},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1996},
DATE = {1996},
ABSTRACT = {Algorithms for performing gossiping on one- and higher dimensional meshes are presented. As a routing model, we assume the practically important worm-hole routing. For one-dimensional arrays and rings, we give a novel lower bound and an asymptotically optimal gossiping algorithm for all choices of the parameters involved. For two-dimensional meshes and tori, several simple algorithms composed of one-dimensional phases are presented. For an important range of packet and mesh sizes it gives clear improvements upon previously developed algorithms. The algorithm is analyzed theoretically, and the achieved improvements are also convincingly demonstrated by simulations and by an implementation on the Paragon. For example, on a Paragon with $81$ processors and messages of size 32 KB, relying on the built-in router requires $716$ milliseconds, while our algorithm requires only $79$ milliseconds. For higher dimensional meshes, we give algorithms which are based on a generalized notion of a diagonal. These are analyzed theoretically and by simulation.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Sibeyn, Jop Frederic
%A Rao, P. Srinivasa
%A Juurlink, Ben H. H.
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
External Organizations
%T Gossiping on meshes and tori :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-A15A-8
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1996-1-018
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1996
%P 19 p.
%X Algorithms for performing gossiping on one- and higher dimensional
meshes are presented. As a routing model, we assume the
practically important worm-hole routing.
For one-dimensional arrays and rings, we give a novel lower bound
and an asymptotically optimal gossiping algorithm for all choices of
the parameters involved.
For two-dimensional meshes and tori, several simple algorithms
composed of one-dimensional phases are presented. For an important
range of packet and mesh sizes it gives clear improvements upon
previously developed algorithms. The algorithm is analyzed
theoretically, and the achieved improvements are also convincingly
demonstrated by simulations and by an implementation on the Paragon.
For example, on a Paragon with $81$ processors and messages of size
32 KB, relying on the built-in router requires $716$ milliseconds,
while our algorithm requires only $79$ milliseconds.
For higher dimensional meshes, we give algorithms which are based
on a generalized notion of a diagonal. These are analyzed
theoretically and by simulation.
%B Research Report
A simple parallel algorithm for the single-source shortest path problem on planar digraphs
J. L. Träff and C. Zaroliagis
Technical Report, 1996
J. L. Träff and C. Zaroliagis
Technical Report, 1996
Abstract
We present a simple parallel algorithm for the {\em single-source shortest path problem} in {\em planar digraphs} with nonnegative real edge weights.
The algorithm runs on the EREW PRAM model of parallel computation in $O((n^{2\epsilon} + n^{1-\epsilon})\log n)$ time, performing
$O(n^{1+\epsilon}\log n)$ work for any $0<\epsilon<1/2$. The strength of the algorithm is its simplicity, making it easy to implement, and presumably quite efficient in practice.
The algorithm improves upon the work of all previous algorithms.
The work can be further reduced to $O(n^{1+\epsilon})$, by plugging in a less practical, sequential planar shortest path
algorithm.
Our algorithm is based on a region decomposition of the input graph, and uses a well-known parallel implementation of Dijkstra's algorithm.
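The sequential building block the report parallelizes is Dijkstra's algorithm. A standard binary-heap sketch (not the report's parallel implementation; the adjacency-list encoding is an assumption) looks as follows:

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest paths with nonnegative edge weights.

    `graph` maps each vertex to a list of (neighbour, weight) pairs.
    Returns a dict of distances; unreachable vertices are absent.
    """
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry, skip it
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Small planar digraph: a square with one diagonal.
g = {"a": [("b", 1), ("c", 4)], "b": [("c", 2), ("d", 6)], "c": [("d", 3)]}
print(dijkstra(g, "a"))   # -> {'a': 0, 'b': 1, 'c': 3, 'd': 6}
```

The report's region decomposition runs many such searches on small subgraphs and combines them on the EREW PRAM.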
Export
BibTeX
@techreport{TraffZaroliagis96,
TITLE = {A simple parallel algorithm for the single-source shortest path problem on planar digraphs},
AUTHOR = {Tr{\"a}ff, Jesper Larsson and Zaroliagis, Christos},
LANGUAGE = {eng},
NUMBER = {MPI-I-1996-1-012},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1996},
DATE = {1996},
ABSTRACT = {We present a simple parallel algorithm for the {\em single-source shortest path problem} in {\em planar digraphs} with nonnegative real edge weights. The algorithm runs on the EREW PRAM model of parallel computation in $O((n^{2\epsilon} + n^{1-\epsilon})\log n)$ time, performing $O(n^{1+\epsilon}\log n)$ work for any $0<\epsilon<1/2$. The strength of the algorithm is its simplicity, making it easy to implement, and presumably quite efficient in practice. The algorithm improves upon the work of all previous algorithms. The work can be further reduced to $O(n^{1+\epsilon})$, by plugging in a less practical, sequential planar shortest path algorithm. Our algorithm is based on a region decomposition of the input graph, and uses a well-known parallel implementation of Dijkstra's algorithm.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Träff, Jesper Larsson
%A Zaroliagis, Christos
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T A simple parallel algorithm for the single-source shortest path problem on planar digraphs :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-A192-8
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1996
%P 17 p.
%X We present a simple parallel algorithm for the {\em single-source shortest path problem} in {\em planar digraphs} with nonnegative real edge weights.
The algorithm runs on the EREW PRAM model of parallel computation in $O((n^{2\epsilon} + n^{1-\epsilon})\log n)$ time, performing
$O(n^{1+\epsilon}\log n)$ work for any $0<\epsilon<1/2$. The strength of the algorithm is its simplicity, making it easy to implement, and presumably quite efficient in practice.
The algorithm improves upon the work of all previous algorithms.
The work can be further reduced to $O(n^{1+\epsilon})$, by plugging in a less practical, sequential planar shortest path
algorithm.
Our algorithm is based on a region decomposition of the input graph, and uses a well-known parallel implementation of Dijkstra's algorithm.
%B Research Report / Max-Planck-Institut für Informatik
Computational Molecular Biology
M. Vingron, H.-P. Lenhof and P. Mutzel
Technical Report, 1996
M. Vingron, H.-P. Lenhof and P. Mutzel
Technical Report, 1996
Abstract
Computational Biology is a fairly new subject that arose in response to the computational problems posed by the analysis and the processing of biomolecular sequence and structure data. The field was initiated in the late 60's and early 70's largely by pioneers working in the life sciences. Physicists and mathematicians entered the field in the 70's and 80's, while Computer Science became
involved with the new biological problems in the late 1980's.
Computational problems have gained further importance in molecular biology through the various genome projects which produce enormous amounts of data.
For this bibliography we focus on those areas of computational molecular biology that involve discrete algorithms
or discrete optimization. We thus neglect several other areas of computational molecular biology, like most of the literature on
the protein folding problem, as well as databases for molecular and genetic data, and genetic mapping algorithms.
Due to the availability of review papers and a bibliography covering them, these areas are likewise omitted from this bibliography.
Export
BibTeX
@techreport{VingronLenhofMutzel96,
TITLE = {Computational Molecular Biology},
AUTHOR = {Vingron, M. and Lenhof, Hans-Peter and Mutzel, Petra},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1996-1-015},
NUMBER = {MPI-I-1996-1-015},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1996},
DATE = {1996},
ABSTRACT = {Computational Biology is a fairly new subject that arose in response to the computational problems posed by the analysis and the processing of biomolecular sequence and structure data. The field was initiated in the late 60's and early 70's largely by pioneers working in the life sciences. Physicists and mathematicians entered the field in the 70's and 80's, while Computer Science became involved with the new biological problems in the late 1980's. Computational problems have gained further importance in molecular biology through the various genome projects which produce enormous amounts of data. For this bibliography we focus on those areas of computational molecular biology that involve discrete algorithms or discrete optimization. We thus neglect several other areas of computational molecular biology, like most of the literature on the protein folding problem, as well as databases for molecular and genetic data, and genetic mapping algorithms. Due to the availability of review papers and a bibliography covering them, these areas are likewise omitted from this bibliography.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Vingron, M.
%A Lenhof, Hans-Peter
%A Mutzel, Petra
%+ External Organizations
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Computational Molecular Biology :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-A188-F
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1996-1-015
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1996
%P 26 p.
%X Computational Biology is a fairly new subject that arose in response to the computational problems posed by the analysis and the processing of biomolecular sequence and structure data. The field was initiated in the late 60's and early 70's largely by pioneers working in the life sciences. Physicists and mathematicians entered the field in the 70's and 80's, while Computer Science became
involved with the new biological problems in the late 1980's.
Computational problems have gained further importance in molecular biology through the various genome projects which produce enormous amounts of data.
For this bibliography we focus on those areas of computational molecular biology that involve discrete algorithms
or discrete optimization. We thus neglect several other areas of computational molecular biology, like most of the literature on
the protein folding problem, as well as databases for molecular and genetic data, and genetic mapping algorithms.
Due to the availability of review papers and a bibliography covering them, these areas are likewise omitted from this bibliography.
%B Research Report
1995
Sorting in linear time?
A. Andersson, S. Nilsson, T. Hagerup and R. Raman
Technical Report, 1995
A. Andersson, S. Nilsson, T. Hagerup and R. Raman
Technical Report, 1995
Abstract
We show that a unit-cost RAM with a word
length of $w$ bits can sort $n$ integers
in the range $0\ldots 2^w-1$ in
$O(n\log\log n)$ time, for arbitrary $w\ge\log n$,
a significant improvement over
the bound of $O(n\sqrt{\log n})$ achieved
by the fusion trees of Fredman and Willard.
Provided that $w\ge(\log n)^{2+\epsilon}$
for some fixed $\epsilon>0$, the sorting can even
be accomplished in linear expected time
with a randomized algorithm.
Both of our algorithms parallelize without
loss on a unit-cost PRAM with a word
length of $w$ bits.
The first one yields an algorithm that uses
$O(\log n)$ time and
$O(n\log\log n)$ operations on a
deterministic CRCW PRAM.
The second one yields an algorithm that uses
$O(\log n)$ expected time and $O(n)$ expected
operations on a randomized EREW PRAM,
provided that $w\ge(\log n)^{2+\epsilon}$
for some fixed $\epsilon>0$.
Our deterministic and randomized sequential
and parallel algorithms generalize to the
lexicographic sorting problem of sorting
multiple-precision integers represented
in several words.
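The $O(n\log\log n)$ algorithm itself is intricate; the sketch below only illustrates the underlying word-RAM theme that grouping key bits into digits beats comparison sorting. It is plain LSD radix sort, not the authors' algorithm:

```python
def radix_sort(keys, w=32, r=8):
    """LSD radix sort of nonnegative integers with w-bit keys.

    Makes w/r passes of stable bucketing on r-bit digits, for
    O(n * w/r) total time -- linear whenever the word length w is a
    constant number of digits.  (The report's O(n log log n) bound
    holds for *arbitrary* w >= log n, which needs far more machinery:
    range reduction and packed merging.)
    """
    mask = (1 << r) - 1
    for shift in range(0, w, r):
        buckets = [[] for _ in range(1 << r)]
        for k in keys:
            buckets[(k >> shift) & mask].append(k)   # stable per digit
        keys = [k for b in buckets for k in b]
    return keys

print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))
# -> [2, 24, 45, 66, 75, 90, 170, 802]
```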
Export
BibTeX
@techreport{AnderssonNilssonHagerupRaman95,
TITLE = {Sorting in linear time?},
AUTHOR = {Andersson, A. and Nilsson, S. and Hagerup, Torben and Raman, Rajeev},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1995-1-024},
NUMBER = {MPI-I-1995-1-024},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1995},
DATE = {1995},
ABSTRACT = {We show that a unit-cost RAM with a word length of $w$ bits can sort $n$ integers in the range $0\ldots 2^w-1$ in $O(n\log\log n)$ time, for arbitrary $w\ge\log n$, a significant improvement over the bound of $O(n\sqrt{\log n})$ achieved by the fusion trees of Fredman and Willard. Provided that $w\ge(\log n)^{2+\epsilon}$ for some fixed $\epsilon>0$, the sorting can even be accomplished in linear expected time with a randomized algorithm. Both of our algorithms parallelize without loss on a unit-cost PRAM with a word length of $w$ bits. The first one yields an algorithm that uses $O(\log n)$ time and $O(n\log\log n)$ operations on a deterministic CRCW PRAM. The second one yields an algorithm that uses $O(\log n)$ expected time and $O(n)$ expected operations on a randomized EREW PRAM, provided that $w\ge(\log n)^{2+\epsilon}$ for some fixed $\epsilon>0$. Our deterministic and randomized sequential and parallel algorithms generalize to the lexicographic sorting problem of sorting multiple-precision integers represented in several words.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Andersson, A.
%A Nilsson, S.
%A Hagerup, Torben
%A Raman, Rajeev
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Sorting in linear time? :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-A1DE-D
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1995-1-024
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1995
%P 32 p.
%X We show that a unit-cost RAM with a word
length of $w$ bits can sort $n$ integers
in the range $0\ldots 2^w-1$ in
$O(n\log\log n)$ time, for arbitrary $w\ge\log n$,
a significant improvement over
the bound of $O(n\sqrt{\log n})$ achieved
by the fusion trees of Fredman and Willard.
Provided that $w\ge(\log n)^{2+\epsilon}$
for some fixed $\epsilon>0$, the sorting can even
be accomplished in linear expected time
with a randomized algorithm.
Both of our algorithms parallelize without
loss on a unit-cost PRAM with a word
length of $w$ bits.
The first one yields an algorithm that uses
$O(\log n)$ time and
$O(n\log\log n)$ operations on a
deterministic CRCW PRAM.
The second one yields an algorithm that uses
$O(\log n)$ expected time and $O(n)$ expected
operations on a randomized EREW PRAM,
provided that $w\ge(\log n)^{2+\epsilon}$
for some fixed $\epsilon>0$.
Our deterministic and randomized sequential
and parallel algorithms generalize to the
lexicographic sorting problem of sorting
multiple-precision integers represented
in several words.
%B Research Report / Max-Planck-Institut für Informatik
Efficient computation of implicit representations of sparse graphs (revised version)
S. R. Arikati, A. Maheshwari and C. Zaroliagis
Technical Report, 1995
S. R. Arikati, A. Maheshwari and C. Zaroliagis
Technical Report, 1995
Abstract
The problem of finding an implicit representation for a graph
such that vertex adjacency can be tested quickly is
fundamental to all graph algorithms. In particular, it
is possible to represent sparse graphs on $n$ vertices
using $O(n)$ space such that vertex adjacency is tested
in $O(1)$ time. We show here how to construct such a
representation efficiently by providing simple and optimal
algorithms, both in a sequential and a parallel setting.
Our sequential algorithm runs in $O(n)$ time.
The parallel algorithm runs in $O(\log n)$ time using
$O(n/{\log n})$ CRCW PRAM processors, or in $O(\log n\log^*n)$
time using $O(n/\log n\log^*n)$ EREW PRAM processors.
Previous results for this problem
are based on matroid partitioning and thus have a high complexity.
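One standard way to realize such a representation, offered here as an illustration rather than the report's own algorithm, is to orient the edges so that every vertex has bounded out-degree (at most the graph's degeneracy, e.g. at most 5 for planar graphs); adjacency is then tested by scanning two constant-size out-lists:

```python
def orient_bounded_outdegree(n, edges):
    """Orient a sparse simple graph so every vertex has small out-degree,
    by repeatedly peeling a minimum-degree vertex and directing its
    remaining edges outward (a degeneracy ordering).  This naive loop
    is O(n^2); the report gives optimal sequential/parallel versions.
    Returns out-adjacency lists.
    """
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    out = [[] for _ in range(n)]
    alive = set(range(n))
    while alive:
        u = min(alive, key=lambda x: len(adj[x]))   # min remaining degree
        for v in adj[u]:
            out[u].append(v)                        # orient u -> v
            adj[v].discard(u)
        alive.remove(u)
    return out

def adjacent(out, u, v):
    """O(1)-time adjacency test when out-degrees are bounded."""
    return v in out[u] or u in out[v]

# K4 (degeneracy 3): every out-list has length at most 3.
out = orient_bounded_outdegree(4, [(0,1), (0,2), (0,3), (1,2), (1,3), (2,3)])
print(adjacent(out, 1, 3), adjacent(out, 0, 0))   # -> True False
```

Total space is $O(n)$ for graphs of bounded arboricity, matching the representation the report constructs.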
Export
BibTeX
@techreport{ArikatiMaheshwariZaroliagis95,
TITLE = {Efficient computation of implicit representations of sparse graphs (revised version)},
AUTHOR = {Arikati, Srinivasa R. and Maheshwari, Anil and Zaroliagis, Christos},
LANGUAGE = {eng},
NUMBER = {MPI-I-1995-1-013},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1995},
DATE = {1995},
ABSTRACT = {The problem of finding an implicit representation for a graph such that vertex adjacency can be tested quickly is fundamental to all graph algorithms. In particular, it is possible to represent sparse graphs on $n$ vertices using $O(n)$ space such that vertex adjacency is tested in $O(1)$ time. We show here how to construct such a representation efficiently by providing simple and optimal algorithms, both in a sequential and a parallel setting. Our sequential algorithm runs in $O(n)$ time. The parallel algorithm runs in $O(\log n)$ time using $O(n/{\log n})$ CRCW PRAM processors, or in $O(\log n\log^*n)$ time using $O(n/\log n\log^*n)$ EREW PRAM processors. Previous results for this problem are based on matroid partitioning and thus have a high complexity.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Arikati, Srinivasa R.
%A Maheshwari, Anil
%A Zaroliagis, Christos
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Efficient computation of implicit representations of sparse graphs (revised version) :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-A704-1
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1995
%P 16 p.
%X The problem of finding an implicit representation for a graph
such that vertex adjacency can be tested quickly is
fundamental to all graph algorithms. In particular, it
is possible to represent sparse graphs on $n$ vertices
using $O(n)$ space such that vertex adjacency is tested
in $O(1)$ time. We show here how to construct such a
representation efficiently by providing simple and optimal
algorithms, both in a sequential and a parallel setting.
Our sequential algorithm runs in $O(n)$ time.
The parallel algorithm runs in $O(\log n)$ time using
$O(n/{\log n})$ CRCW PRAM processors, or in $O(\log n\log^*n)$
time using $O(n/\log n\log^*n)$ EREW PRAM processors.
Previous results for this problem
are based on matroid partitioning and thus have a high complexity.
%B Research Report / Max-Planck-Institut für Informatik
Parallel Algorithms with Optimal Speedup for Bounded Treewidth
H. L. Bodlaender and T. Hagerup
Technical Report, 1995
H. L. Bodlaender and T. Hagerup
Technical Report, 1995
Abstract
We describe the first parallel algorithm with
optimal speedup for constructing minimum-width
tree decompositions of graphs of bounded treewidth.
On $n$-vertex input graphs, the algorithm works in
$O((\log n)^2)$ time using $O(n)$ operations
on the EREW PRAM.
We also give faster parallel algorithms with
optimal speedup for the problem of deciding
whether the treewidth of an input graph is
bounded by a given constant and for a variety of
problems on graphs of bounded treewidth,
including all decision problems expressible
in monadic second-order logic.
On $n$-vertex input graphs, the algorithms use
$O(n)$ operations together with $O(\log n\log^* n)$
time on the EREW PRAM, or $O(\log n)$ time on the CRCW PRAM.
Export
BibTeX
@techreport{Bodlaender-Hagerup95,
TITLE = {Parallel Algorithms with Optimal Speedup for Bounded Treewidth},
AUTHOR = {Bodlaender, Hans L. and Hagerup, Torben},
LANGUAGE = {eng},
NUMBER = {MPI-I-95-1-017},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1995},
DATE = {1995},
ABSTRACT = {We describe the first parallel algorithm with optimal speedup for constructing minimum-width tree decompositions of graphs of bounded treewidth. On $n$-vertex input graphs, the algorithm works in $O((\log n)^2)$ time using $O(n)$ operations on the EREW PRAM. We also give faster parallel algorithms with optimal speedup for the problem of deciding whether the treewidth of an input graph is bounded by a given constant and for a variety of problems on graphs of bounded treewidth, including all decision problems expressible in monadic second-order logic. On $n$-vertex input graphs, the algorithms use $O(n)$ operations together with $O(\log n\log^* n)$ time on the EREW PRAM, or $O(\log n)$ time on the CRCW PRAM.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Bodlaender, Hans L.
%A Hagerup, Torben
%+ External Organizations
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Parallel Algorithms with Optimal Speedup for Bounded Treewidth :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0019-DBA6-8
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1995
%X We describe the first parallel algorithm with
optimal speedup for constructing minimum-width
tree decompositions of graphs of bounded treewidth.
On $n$-vertex input graphs, the algorithm works in
$O((\log n)^2)$ time using $O(n)$ operations
on the EREW PRAM.
We also give faster parallel algorithms with
optimal speedup for the problem of deciding
whether the treewidth of an input graph is
bounded by a given constant and for a variety of
problems on graphs of bounded treewidth,
including all decision problems expressible
in monadic second-order logic.
On $n$-vertex input graphs, the algorithms use
$O(n)$ operations together with $O(\log n\log^* n)$
time on the EREW PRAM, or $O(\log n)$ time on the CRCW PRAM.
%B Research Report / Max-Planck-Institut für Informatik
Matching nuts and bolts optimally
P. G. Bradford
Technical Report, 1995
P. G. Bradford
Technical Report, 1995
Abstract
The nuts and bolts problem is the following:
Given a collection of $n$ nuts of distinct sizes and $n$ bolts of distinct
sizes such that for each nut there is exactly one matching bolt,
find for each nut its corresponding bolt subject
to the restriction that we can {\em only} compare nuts to bolts.
That is, we can neither compare nuts to nuts, nor bolts to bolts.
This humble restriction on the comparisons appears to make
this problem quite difficult to solve.
In this paper, we illustrate the existence of an algorithm
for solving the nuts and bolts problem that makes
$O(n \lg n)$ nut-and-bolt comparisons.
We show the existence of this algorithm by showing
the existence of certain expander-based comparator networks.
Our algorithm is asymptotically optimal in terms of the number
of nut-and-bolt comparisons it does.
Another view of this result is that we show the existence of a
decision tree with depth $O(n \lg n)$ that solves this problem.
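The report's result is an existence proof via expander-based comparator networks. For illustration only, the well-known randomized quicksort-style procedure, which achieves expected $O(n \lg n)$ nut-and-bolt comparisons but is not the deterministic construction above, can be sketched as:

```python
import random

def match_nuts_and_bolts(nuts, bolts):
    """Pair each nut with its bolt using only nut-vs-bolt comparisons
    (quicksort-style partitioning; expected O(n log n) comparisons).
    Nuts and bolts are modelled as numbers, but the code never
    compares a nut to a nut or a bolt to a bolt.
    """
    if not nuts:
        return []
    pivot_nut = random.choice(nuts)
    # Partition bolts against the pivot nut; exactly one bolt fits it.
    small_b = [b for b in bolts if b < pivot_nut]
    large_b = [b for b in bolts if b > pivot_nut]
    pivot_bolt = next(b for b in bolts if b == pivot_nut)
    # Partition the remaining nuts against the matched pivot bolt.
    small_n = [x for x in nuts if x < pivot_bolt]
    large_n = [x for x in nuts if x > pivot_bolt]
    return (match_nuts_and_bolts(small_n, small_b)
            + [(pivot_nut, pivot_bolt)]
            + match_nuts_and_bolts(large_n, large_b))

pairs = match_nuts_and_bolts([3, 1, 4, 2], [2, 4, 1, 3])
print(sorted(pairs))   # -> [(1, 1), (2, 2), (3, 3), (4, 4)]
```

Derandomizing this to a worst-case $O(n \lg n)$ bound is exactly what the expander-based networks of the report accomplish.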
Export
BibTeX
@techreport{Bradford95,
TITLE = {Matching nuts and bolts optimally},
AUTHOR = {Bradford, Phillip G.},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1995-1-025},
NUMBER = {MPI-I-1995-1-025},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1995},
DATE = {1995},
ABSTRACT = {The nuts and bolts problem is the following: Given a collection of $n$ nuts of distinct sizes and $n$ bolts of distinct sizes such that for each nut there is exactly one matching bolt, find for each nut its corresponding bolt subject to the restriction that we can {\em only} compare nuts to bolts. That is, we can neither compare nuts to nuts, nor bolts to bolts. This humble restriction on the comparisons appears to make this problem quite difficult to solve. In this paper, we illustrate the existence of an algorithm for solving the nuts and bolts problem that makes $O(n \lg n)$ nut-and-bolt comparisons. We show the existence of this algorithm by showing the existence of certain expander-based comparator networks. Our algorithm is asymptotically optimal in terms of the number of nut-and-bolt comparisons it does. Another view of this result is that we show the existence of a decision tree with depth $O(n \lg n)$ that solves this problem.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Bradford, Phillip G.
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Matching nuts and bolts optimally :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-A1DB-4
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1995-1-025
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1995
%P 24 p.
%X The nuts and bolts problem is the following :
Given a collection of $n$ nuts of distinct sizes and $n$ bolts of distinct
sizes such that for each nut there is exactly one matching bolt,
find for each nut its corresponding bolt subject
to the restriction that we can {\em only} compare nuts to bolts.
That is, we can neither compare nuts to nuts, nor bolts to bolts.
This humble restriction on the comparisons appears to make
this problem quite difficult to solve.
In this paper, we illustrate the existence of an algorithm
for solving the nuts and bolts problem that makes
$O(n \lg n)$ nut-and-bolt comparisons.
We show the existence of this algorithm by showing
the existence of certain expander-based comparator networks.
Our algorithm is asymptotically optimal in terms of the number
of nut-and-bolt comparisons it does.
Another view of this result is that we show the existence of a
decision tree with depth $O(n \lg n)$ that solves this problem.
%B Research Report / Max-Planck-Institut für Informatik
Weak epsilon-nets for points on a hypersphere
P. G. Bradford and V. Capoyleas
Technical Report, 1995
P. G. Bradford and V. Capoyleas
Technical Report, 1995
Abstract
We present algorithms for the two layer straightline crossing
minimization problem that are able to compute exact optima.
Our computational results lead us to the conclusion that there
is no need for heuristics if one layer is fixed, even though
the problem is NP-hard, and that for the general problem with
two variable layers, true optima can be computed for sparse
instances in which the smaller layer contains up to 15 nodes.
For bigger instances, the iterated barycenter method turns out
to be the method of choice among several popular heuristics
whose performance we could assess by comparing the results
to optimum solutions.
Export
BibTeX
@techreport{BradfordCapoyleas95,
TITLE = {Weak epsilon-nets for points on a hypersphere},
AUTHOR = {Bradford, Phillip G. and Capoyleas, Vasilis},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1995-1-029},
NUMBER = {MPI-I-1995-1-029},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1995},
DATE = {1995},
ABSTRACT = {We present algorithms for the two layer straightline crossing minimization problem that are able to compute exact optima. Our computational results lead us to the conclusion that there is no need for heuristics if one layer is fixed, even though the problem is NP-hard, and that for the general problem with two variable layers, true optima can be computed for sparse instances in which the smaller layer contains up to 15 nodes. For bigger instances, the iterated barycenter method turns out to be the method of choice among several popular heuristics whose performance we could assess by comparing the results to optimum solutions.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Bradford, Phillip G.
%A Capoyleas, Vasilis
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Weak epsilon-nets for points on a hypersphere :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-A1CF-F
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1995-1-029
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1995
%P 8 p.
%B Research Report
A polylog-time and $O(n\sqrt{\lg n})$-work parallel algorithm for finding the row minima in totally monotone matrices
P. G. Bradford, R. Fleischer and M. Smid
Technical Report, 1995
Abstract
We give a parallel algorithm for computing all row minima
in a totally monotone $n\times n$ matrix which is simpler and more
work efficient than previous polylog-time algorithms.
It runs in
$O(\lg n \lg\lg n)$ time doing $O(n\sqrt{\lg n})$ work on a {\sf CRCW}, in
$O(\lg n (\lg\lg n)^2)$ time doing $O(n\sqrt{\lg n})$ work on a {\sf CREW},
and in $O(\lg n\sqrt{\lg n \lg\lg n})$ time
doing $O(n\sqrt{\lg n\lg\lg n})$ work on an {\sf EREW}.
Export
BibTeX
@techreport{BradfordFleischerSmid95,
TITLE = {A polylog-time and {$O(n\sqrt{\lg n})$}-work parallel algorithm for finding the row minima in totally monotone matrices},
AUTHOR = {Bradford, Phillip Gnassi and Fleischer, Rudolf and Smid, Michiel},
LANGUAGE = {eng},
NUMBER = {MPI-I-1995-1-006},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1995},
DATE = {1995},
ABSTRACT = {We give a parallel algorithm for computing all row minima in a totally monotone $n\times n$ matrix which is simpler and more work efficient than previous polylog-time algorithms. It runs in $O(\lg n \lg\lg n)$ time doing $O(n\sqrt{\lg n})$ work on a {\sf CRCW}, in $O(\lg n (\lg\lg n)^2)$ time doing $O(n\sqrt{\lg n})$ work on a {\sf CREW}, and in $O(\lg n\sqrt{\lg n \lg\lg n})$ time doing $O(n\sqrt{\lg n\lg\lg n})$ work on an {\sf EREW}.},
TYPE = {Research Report / Max-Planck-Institut f{\"u}r Informatik},
}
Endnote
%0 Report
%A Bradford, Phillip Gnassi
%A Fleischer, Rudolf
%A Smid, Michiel
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T A polylog-time and $O(n\sqrt{\lg n})$-work parallel algorithm for finding the row minima in totally monotone matrices :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-A75F-8
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1995
%P 12 p.
%X We give a parallel algorithm for computing all row minima
in a totally monotone $n\times n$ matrix which is simpler and more
work efficient than previous polylog-time algorithms.
It runs in
$O(\lg n \lg\lg n)$ time doing $O(n\sqrt{\lg n})$ work on a {\sf CRCW}, in
$O(\lg n (\lg\lg n)^2)$ time doing $O(n\sqrt{\lg n})$ work on a {\sf CREW},
and in $O(\lg n\sqrt{\lg n \lg\lg n})$ time
doing $O(n\sqrt{\lg n\lg\lg n})$ work on an {\sf EREW}.
%B Research Report / Max-Planck-Institut für Informatik
Matching nuts and bolts faster
P. G. Bradford and R. Fleischer
Technical Report, 1995
Abstract
The problem of matching nuts and bolts is the following:
Given a collection of $n$ nuts of distinct sizes and $n$ bolts
such that there is a one-to-one correspondence between the nuts
and the bolts, find for each nut its corresponding bolt.
We can {\em only} compare nuts to bolts.
That is, we can neither compare nuts to nuts, nor bolts to bolts.
This humble restriction on the comparisons appears to make
this problem very hard to solve.
In fact, the best deterministic solution to date is due
to Alon {\it et al\/.} [1] and takes $\Theta(n \log^4 n)$
time. Their solution uses (efficient) graph expanders. In this paper,
we give a simpler $\Theta(n \log^2 n)$ time algorithm which uses only
a simple (and not so efficient) expander.
Export
BibTeX
@techreport{BradfordFleischer,
TITLE = {Matching nuts and bolts faster},
AUTHOR = {Bradford, Phillip Gnassi and Fleischer, Rudolf},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1995-1-003},
NUMBER = {MPI-I-1995-1-003},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1995},
DATE = {1995},
ABSTRACT = {The problem of matching nuts and bolts is the following: Given a collection of $n$ nuts of distinct sizes and $n$ bolts such that there is a one-to-one correspondence between the nuts and the bolts, find for each nut its corresponding bolt. We can {\em only} compare nuts to bolts. That is, we can neither compare nuts to nuts, nor bolts to bolts. This humble restriction on the comparisons appears to make this problem very hard to solve. In fact, the best deterministic solution to date is due to Alon {\it et al\/.} [1] and takes $\Theta(n \log^4 n)$ time. Their solution uses (efficient) graph expanders. In this paper, we give a simpler $\Theta(n \log^2 n)$ time algorithm which uses only a simple (and not so efficient) expander.},
TYPE = {Research Report / Max-Planck-Institut f{\"u}r Informatik},
}
Endnote
%0 Report
%A Bradford, Phillip Gnassi
%A Fleischer, Rudolf
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Matching nuts and bolts faster :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-A846-5
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1995-1-003
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1995
%P 7 p.
%X The problem of matching nuts and bolts is the following:
Given a collection of $n$ nuts of distinct sizes and $n$ bolts
such that there is a one-to-one correspondence between the nuts
and the bolts, find for each nut its corresponding bolt.
We can {\em only} compare nuts to bolts.
That is, we can neither compare nuts to nuts, nor bolts to bolts.
This humble restriction on the comparisons appears to make
this problem very hard to solve.
In fact, the best deterministic solution to date is due
to Alon {\it et al\/.} [1] and takes $\Theta(n \log^4 n)$
time. Their solution uses (efficient) graph expanders. In this paper,
we give a simpler $\Theta(n \log^2 n)$ time algorithm which uses only
a simple (and not so efficient) expander.
%B Research Report / Max-Planck-Institut für Informatik
Shortest paths in digraphs of small treewidth. Part I: Sequential algorithms
S. Chaudhuri and C. Zaroliagis
Technical Report, 1995a
Abstract
We consider the problem of preprocessing an $n$-vertex digraph with
real edge weights so that subsequent queries for the shortest path or distance
between any two vertices can be efficiently answered.
We give algorithms
that depend on the {\em treewidth} of the input graph. When the
treewidth is a constant, our algorithms can answer distance queries in
$O(\alpha(n))$ time after $O(n)$ preprocessing. This improves upon
previously known results for the same problem.
We also give a
dynamic algorithm which, after a change in an edge weight, updates the
data structure in time $O(n^\beta)$, for any constant $0 < \beta < 1$.
Furthermore, an algorithm of independent interest is given:
computing a shortest path tree, or finding a negative cycle in linear
time.
Export
BibTeX
@techreport{ChaudhuriZaroliagis95a,
TITLE = {Shortest paths in digraphs of small treewidth. Part I: Sequential algorithms},
AUTHOR = {Chaudhuri, Shiva and Zaroliagis, Christos},
LANGUAGE = {eng},
NUMBER = {MPI-I-1995-1-020},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1995},
DATE = {1995},
ABSTRACT = {We consider the problem of preprocessing an $n$-vertex digraph with real edge weights so that subsequent queries for the shortest path or distance between any two vertices can be efficiently answered. We give algorithms that depend on the {\em treewidth} of the input graph. When the treewidth is a constant, our algorithms can answer distance queries in $O(\alpha(n))$ time after $O(n)$ preprocessing. This improves upon previously known results for the same problem. We also give a dynamic algorithm which, after a change in an edge weight, updates the data structure in time $O(n^\beta)$, for any constant $0 < \beta < 1$. Furthermore, an algorithm of independent interest is given: computing a shortest path tree, or finding a negative cycle in linear time.},
TYPE = {Research Report / Max-Planck-Institut f{\"u}r Informatik},
}
Endnote
%0 Report
%A Chaudhuri, Shiva
%A Zaroliagis, Christos
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Shortest paths in digraphs of small treewidth. Part I: Sequential algorithms :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-A428-F
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1995
%P 17 p.
%X We consider the problem of preprocessing an $n$-vertex digraph with
real edge weights so that subsequent queries for the shortest path or distance
between any two vertices can be efficiently answered.
We give algorithms
that depend on the {\em treewidth} of the input graph. When the
treewidth is a constant, our algorithms can answer distance queries in
$O(\alpha(n))$ time after $O(n)$ preprocessing. This improves upon
previously known results for the same problem.
We also give a
dynamic algorithm which, after a change in an edge weight, updates the
data structure in time $O(n^\beta)$, for any constant $0 < \beta < 1$.
Furthermore, an algorithm of independent interest is given:
computing a shortest path tree, or finding a negative cycle in linear
time.
%B Research Report / Max-Planck-Institut für Informatik
Shortest paths in digraphs of small treewidth. Part II: Optimal parallel algorithms
S. Chaudhuri and C. Zaroliagis
Technical Report, 1995b
Abstract
We consider the problem of preprocessing an $n$-vertex digraph with
real edge weights so that subsequent queries for the shortest path or distance
between any two vertices can be efficiently answered.
We give parallel algorithms for the EREW PRAM model of computation
that depend on the {\em treewidth} of
the input graph. When the treewidth is a constant, our algorithms
can answer distance queries in $O(\alpha(n))$ time using a single
processor, after a preprocessing of $O(\log^2n)$ time and $O(n)$ work,
where $\alpha(n)$ is the inverse of Ackermann's function.
The class of constant treewidth graphs
contains outerplanar graphs and series-parallel graphs, among
others. To the best of our knowledge, these
are the first parallel algorithms which achieve these bounds
for any class of graphs except trees.
We also give a dynamic algorithm which, after a change in
an edge weight, updates our data structures in $O(\log n)$ time
using $O(n^\beta)$ work, for any constant $0 < \beta < 1$.
Moreover, we give an algorithm of independent interest:
computing a shortest path tree, or finding a negative cycle in
$O(\log^2 n)$ time using $O(n)$ work.
Export
BibTeX
@techreport{ChaudhuriZaroliagis95b,
TITLE = {Shortest paths in digraphs of small treewidth. Part {II}: Optimal parallel algorithms},
AUTHOR = {Chaudhuri, Shiva and Zaroliagis, Christos},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1995-1-021},
NUMBER = {MPI-I-1995-1-021},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1995},
DATE = {1995},
ABSTRACT = {We consider the problem of preprocessing an $n$-vertex digraph with real edge weights so that subsequent queries for the shortest path or distance between any two vertices can be efficiently answered. We give parallel algorithms for the EREW PRAM model of computation that depend on the {\em treewidth} of the input graph. When the treewidth is a constant, our algorithms can answer distance queries in $O(\alpha(n))$ time using a single processor, after a preprocessing of $O(\log^2n)$ time and $O(n)$ work, where $\alpha(n)$ is the inverse of Ackermann's function. The class of constant treewidth graphs contains outerplanar graphs and series-parallel graphs, among others. To the best of our knowledge, these are the first parallel algorithms which achieve these bounds for any class of graphs except trees. We also give a dynamic algorithm which, after a change in an edge weight, updates our data structures in $O(\log n)$ time using $O(n^\beta)$ work, for any constant $0 < \beta < 1$. Moreover, we give an algorithm of independent interest: computing a shortest path tree, or finding a negative cycle in $O(\log^2 n)$ time using $O(n)$ work.},
TYPE = {Research Report / Max-Planck-Institut f{\"u}r Informatik},
}
Endnote
%0 Report
%A Chaudhuri, Shiva
%A Zaroliagis, Christos
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Shortest paths in digraphs of small treewidth. Part II: Optimal parallel algorithms :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-A41F-5
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1995-1-021
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1995
%P 20 p.
%X We consider the problem of preprocessing an $n$-vertex digraph with
real edge weights so that subsequent queries for the shortest path or distance
between any two vertices can be efficiently answered.
We give parallel algorithms for the EREW PRAM model of computation
that depend on the {\em treewidth} of
the input graph. When the treewidth is a constant, our algorithms
can answer distance queries in $O(\alpha(n))$ time using a single
processor, after a preprocessing of $O(\log^2n)$ time and $O(n)$ work,
where $\alpha(n)$ is the inverse of Ackermann's function.
The class of constant treewidth graphs
contains outerplanar graphs and series-parallel graphs, among
others. To the best of our knowledge, these
are the first parallel algorithms which achieve these bounds
for any class of graphs except trees.
We also give a dynamic algorithm which, after a change in
an edge weight, updates our data structures in $O(\log n)$ time
using $O(n^\beta)$ work, for any constant $0 < \beta < 1$.
Moreover, we give an algorithm of independent interest:
computing a shortest path tree, or finding a negative cycle in
$O(\log^2 n)$ time using $O(n)$ work.
%B Research Report / Max-Planck-Institut für Informatik
Exact ground states of Ising spin glasses: new experimental results with a branch and cut algorithm
M. Diehl, C. De Simone, M. Jünger, P. Mutzel, G. Reinelt and G. Rinaldi
Technical Report, 1995
Abstract
In this paper we study 2-dimensional Ising spin glasses on a grid with nearest
neighbor and periodic boundary interactions, based on a Gaussian bond
distribution, and an exterior magnetic field.
We show how using a technique called branch and cut, the exact
ground states of grids of sizes up to $100\times 100$ can be determined in a
moderate amount of computation time, and we report on extensive computational
tests. With our method we produce results based on more than $20\,000$
experiments
on the properties of spin glasses whose errors depend only on the assumptions
on the
model and not on the computational process. This feature is a clear advantage
of the method over other more popular ways to compute the ground state, like
Monte Carlo simulation including simulated annealing, evolutionary, and
genetic algorithms, that provide only approximate
ground states with a degree of accuracy that cannot be determined a priori.
Our ground state energy estimation at zero field is~$-1.317$.
Export
BibTeX
@techreport{DiehlDeSimoneJuengerMutzelReineltRinaldi,
TITLE = {Exact ground states of Ising spin glasses: new experimental results with a branch and cut algorithm},
AUTHOR = {Diehl, M. and De Simone, C. and J{\"u}nger, Michael and Mutzel, Petra and Reinelt, Gerhard and Rinaldi, G.},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1995-1-004},
NUMBER = {MPI-I-1995-1-004},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1995},
DATE = {1995},
ABSTRACT = {In this paper we study 2-dimensional Ising spin glasses on a grid with nearest neighbor and periodic boundary interactions, based on a Gaussian bond distribution, and an exterior magnetic field. We show how using a technique called branch and cut, the exact ground states of grids of sizes up to $100\times 100$ can be determined in a moderate amount of computation time, and we report on extensive computational tests. With our method we produce results based on more than $20\,000$ experiments on the properties of spin glasses whose errors depend only on the assumptions on the model and not on the computational process. This feature is a clear advantage of the method over other more popular ways to compute the ground state, like Monte Carlo simulation including simulated annealing, evolutionary, and genetic algorithms, that provide only approximate ground states with a degree of accuracy that cannot be determined a priori. Our ground state energy estimation at zero field is~$-1.317$.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Diehl, M.
%A De Simone, C.
%A Jünger, Michael
%A Mutzel, Petra
%A Reinelt, Gerhard
%A Rinaldi, G.
%+ External Organizations
External Organizations
External Organizations
Algorithms and Complexity, MPI for Informatics, Max Planck Society
External Organizations
External Organizations
%T Exact ground states of Ising spin glasses: new experimental results with a branch and cut algorithm :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-A765-7
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1995-1-004
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1995
%P 17 p.
%X In this paper we study 2-dimensional Ising spin glasses on a grid with nearest
neighbor and periodic boundary interactions, based on a Gaussian bond
distribution, and an exterior magnetic field.
We show how using a technique called branch and cut, the exact
ground states of grids of sizes up to $100\times 100$ can be determined in a
moderate amount of computation time, and we report on extensive computational
tests. With our method we produce results based on more than $20\,000$
experiments
on the properties of spin glasses whose errors depend only on the assumptions
on the
model and not on the computational process. This feature is a clear advantage
of the method over other more popular ways to compute the ground state, like
Monte Carlo simulation including simulated annealing, evolutionary, and
genetic algorithms, that provide only approximate
ground states with a degree of accuracy that cannot be determined a priori.
Our ground state energy estimation at zero field is~$-1.317$.
%B Research Report
The fourth moment in Luby's distribution
D. P. Dubhashi, G. E. Pantziou, P. G. Spirakis and C. Zaroliagis
Technical Report, 1995
Abstract
Luby (1988) proposed a way to derandomize randomized
computations which is based on the construction of a small probability
space whose elements are $3$-wise independent.
In this paper we prove some new properties of Luby's space.
More precisely, we analyze the fourth moment and
prove an interesting technical property which helps
to understand better Luby's distribution. As an application,
we study the behavior of random edge cuts in a weighted graph.
Export
BibTeX
@techreport{DubhashiPantziouSpirakisZaroliagis95,
TITLE = {The fourth moment in Luby's distribution},
AUTHOR = {Dubhashi, Devdatt P. and Pantziou, Grammati E. and Spirakis, P. G. and Zaroliagis, Christos},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1995-1-019},
NUMBER = {MPI-I-1995-1-019},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1995},
DATE = {1995},
ABSTRACT = {Luby (1988) proposed a way to derandomize randomized computations which is based on the construction of a small probability space whose elements are $3$-wise independent. In this paper we prove some new properties of Luby's space. More precisely, we analyze the fourth moment and prove an interesting technical property which helps to understand better Luby's distribution. As an application, we study the behavior of random edge cuts in a weighted graph.},
TYPE = {Research Report / Max-Planck-Institut f{\"u}r Informatik},
}
Endnote
%0 Report
%A Dubhashi, Devdatt P.
%A Pantziou, Grammati E.
%A Spirakis, P. G.
%A Zaroliagis, Christos
%+ External Organizations
External Organizations
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T The fourth moment in Luby's distribution :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-A432-8
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1995-1-019
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1995
%P 10 p.
%X Luby (1988) proposed a way to derandomize randomized
computations which is based on the construction of a small probability
space whose elements are $3$-wise independent.
In this paper we prove some new properties of Luby's space.
More precisely, we analyze the fourth moment and
prove an interesting technical property which helps
to understand better Luby's distribution. As an application,
we study the behavior of random edge cuts in a weighted graph.
%B Research Report / Max-Planck-Institut für Informatik
Towards self-stabilizing wait-free shared memory objects
J.-H. Hoepman, M. Papatriantafilou and P. Tsigas
Technical Report, 1995
Abstract
Past research on fault tolerant distributed systems has focussed on either
processor failures, ranging from benign crash failures to the malicious
byzantine failure types, or on transient memory failures, which can
suddenly corrupt the state of the system.
An interesting question in the theory of distributed computing is whether one
can devise highly fault tolerant protocols which can
tolerate both processor failures as well as transient errors.
To answer this question we consider the construction of
self-stabilizing wait-free shared memory objects.
These objects occur naturally in distributed systems in which both processors
and memory may be faulty.
Our contribution in this paper is threefold. First, we propose a general
definition of a self-stabilizing wait-free shared memory object that
expresses safety guarantees even in the face of processor failures.
Second, we show that within this framework one cannot construct a
self-stabilizing single-reader single-writer regular bit from
single-reader single-writer safe bits. This result leads us to postulate a
self-stabilizing {\footnotesize\it dual\/}-reader single-writer safe bit with
which, as a
third contribution, we construct self-stabilizing regular and atomic registers.
Export
BibTeX
@techreport{HoepmannPapatriantafilouTsigas95,
TITLE = {Towards self-stabilizing wait-free shared memory objects},
AUTHOR = {Hoepman, J.-H. and Papatriantafilou, Marina and Tsigas, Philippas},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1995-1-005},
NUMBER = {MPI-I-1995-1-005},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1995},
DATE = {1995},
ABSTRACT = {Past research on fault tolerant distributed systems has focussed on either processor failures, ranging from benign crash failures to the malicious byzantine failure types, or on transient memory failures, which can suddenly corrupt the state of the system. An interesting question in the theory of distributed computing is whether one can devise highly fault tolerant protocols which can tolerate both processor failures as well as transient errors. To answer this question we consider the construction of self-stabilizing wait-free shared memory objects. These objects occur naturally in distributed systems in which both processors and memory may be faulty. Our contribution in this paper is threefold. First, we propose a general definition of a self-stabilizing wait-free shared memory object that expresses safety guarantees even in the face of processor failures. Second, we show that within this framework one cannot construct a self-stabilizing single-reader single-writer regular bit from single-reader single-writer safe bits. This result leads us to postulate a self-stabilizing {\footnotesize\it dual\/}-reader single-writer safe bit with which, as a third contribution, we construct self-stabilizing regular and atomic registers.},
TYPE = {Research Report / Max-Planck-Institut f{\"u}r Informatik},
}
Endnote
%0 Report
%A Hoepman, J.-H.
%A Papatriantafilou, Marina
%A Tsigas, Philippas
%+ External Organizations
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Towards self-stabilizing wait-free shared memory objects :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-A762-D
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1995-1-005
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1995
%P 15 p.
%X Past research on fault tolerant distributed systems has focussed on either
processor failures, ranging from benign crash failures to the malicious
byzantine failure types, or on transient memory failures, which can
suddenly corrupt the state of the system.
An interesting question in the theory of distributed computing is whether one
can devise highly fault tolerant protocols which can
tolerate both processor failures as well as transient errors.
To answer this question we consider the construction of
self-stabilizing wait-free shared memory objects.
These objects occur naturally in distributed systems in which both processors
and memory may be faulty.
Our contribution in this paper is threefold. First, we propose a general
definition of a self-stabilizing wait-free shared memory object that
expresses safety guarantees even in the face of processor failures.
Second, we show that within this framework one cannot construct a
self-stabilizing single-reader single-writer regular bit from
single-reader single-writer safe bits. This result leads us to postulate a
self-stabilizing {\footnotesize\it dual\/}-reader single-writer safe bit with
which, as a
third contribution, we construct self-stabilizing regular and atomic registers.
%B Research Report / Max-Planck-Institut für Informatik
The thickness of a minor-excluded class of graphs
M. Jünger, P. Mutzel, T. Odenthal and M. Scharbrodt
Technical Report, 1995
Abstract
The thickness problem on graphs is $\cal NP$-hard and only a few
results concerning this graph invariant are known. Using a decomposition
theorem of Truemper, we show that the thickness of
the class of graphs without $G_{12}$-minors is less
than or equal to two (and therefore, the same is true for the more
well-known class of the graphs without $K_5$-minors).
Consequently, the thickness of this class of graphs can
be determined with a planarity testing algorithm in linear time.
Export
BibTeX
@techreport{JuengerMutzelOdenthalScharbrodt95,
TITLE = {The thickness of a minor-excluded class of graphs},
AUTHOR = {J{\"u}nger, Michael and Mutzel, Petra and Odenthal, Thomas and Scharbrodt, Mark},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1995-1-027},
NUMBER = {MPI-I-1995-1-027},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1995},
DATE = {1995},
ABSTRACT = {The thickness problem on graphs is $\cal NP$-hard and only a few results concerning this graph invariant are known. Using a decomposition theorem of Truemper, we show that the thickness of the class of graphs without $G_{12}$-minors is less than or equal to two (and therefore, the same is true for the more well-known class of the graphs without $K_5$-minors). Consequently, the thickness of this class of graphs can be determined with a planarity testing algorithm in linear time.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Jünger, Michael
%A Mutzel, Petra
%A Odenthal, Thomas
%A Scharbrodt, Mark
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T The thickness of a minor-excluded class of graphs :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-A1D5-0
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1995-1-027
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1995
%P 9 p.
%X The thickness problem on graphs is $\cal NP$-hard and only a few
results concerning this graph invariant are known. Using a decomposition
theorem of Truemper, we show that the thickness of
the class of graphs without $G_{12}$-minors is less
than or equal to two (and therefore, the same is true for the more
well-known class of the graphs without $K_5$-minors).
Consequently, the thickness of this class of graphs can
be determined with a planarity testing algorithm in linear time.
%B Research Report
Exact and heuristic algorithms for 2-layer straightline crossing minimization
M. Jünger and P. Mutzel
Technical Report, 1995
Abstract
We present algorithms for the two layer straightline crossing
minimization problem that are able to compute exact optima.
Our computational results lead us to the conclusion that there
is no need for heuristics if one layer is fixed, even though
the problem is NP-hard, and that for the general problem with
two variable layers, true optima can be computed for sparse
instances in which the smaller layer contains up to 15 nodes.
For bigger instances, the iterated barycenter method turns out
to be the method of choice among several popular heuristics
whose performance we could assess by comparing the results
to optimum solutions.
Export
BibTeX
@techreport{JuengerMutzel95,
TITLE = {Exact and heuristic algorithms for 2-layer straightline crossing minimization},
AUTHOR = {J{\"u}nger, Michael and Mutzel, Petra},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1995-1-028},
NUMBER = {MPI-I-1995-1-028},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1995},
DATE = {1995},
ABSTRACT = {We present algorithms for the two layer straightline crossing minimization problem that are able to compute exact optima. Our computational results lead us to the conclusion that there is no need for heuristics if one layer is fixed, even though the problem is NP-hard, and that for the general problem with two variable layers, true optima can be computed for sparse instances in which the smaller layer contains up to 15 nodes. For bigger instances, the iterated barycenter method turns out to be the method of choice among several popular heuristics whose performance we could assess by comparing the results to optimum solutions.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Jünger, Michael
%A Mutzel, Petra
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Exact and heuristic algorithms for 2-layer straightline crossing minimization :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-A1D2-5
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1995-1-028
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1995
%P 12 p.
%X We present algorithms for the two layer straightline crossing
minimization problem that are able to compute exact optima.
Our computational results lead us to the conclusion that there
is no need for heuristics if one layer is fixed, even though
the problem is NP-hard, and that for the general problem with
two variable layers, true optima can be computed for sparse
instances in which the smaller layer contains up to 15 nodes.
For bigger instances, the iterated barycenter method turns out
to be the method of choice among several popular heuristics
whose performance we could assess by comparing the results
to optimum solutions.
%B Research Report
Dynamic maintenance of 2-d convex hulls and order decomposable problems
S. Kapoor
Technical Report, 1995
Abstract
In this paper, we consider dynamic data structures for order
decomposable problems. This class of problems includes the convex hull
problem, the Voronoi diagram problem, the maxima problem,
and intersection of halfspaces.
This paper first describes a scheme for maintaining convex hulls in
the plane dynamically in $O(\log n)$ amortized time for insertions and
$O(\log^2 n)$ time for deletions. $O(n)$ space is used.
The scheme improves on the time complexity of the general scheme
by Overmars and Van Leeuwen. We then consider the general class
of Order Decomposable Problems. We show improved behavior for
insertions in the presence of deletions, under some assumptions.
The main assumption we make is that the problems are required
to be "change sensitive", i.e., updates to the solution
of the problem at an insertion can be obtained in time proportional
to the changes.
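For reference, the static structure being maintained here can be computed from scratch in $O(n \log n)$ with Andrew's monotone chain; the point of the report's scheme is to avoid exactly this recomputation on every update. The sketch below is the standard textbook routine, not the report's dynamic data structure.

```python
def convex_hull(points):
    """Andrew's monotone chain: static 2-d convex hull in O(n log n).
    Returns the hull vertices in counterclockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def chain(seq):
        hull = []
        for p in seq:
            while len(hull) >= 2 and cross(hull[-2], hull[-1], p) <= 0:
                hull.pop()
            hull.append(p)
        return hull

    lower, upper = chain(pts), chain(reversed(pts))
    return lower[:-1] + upper[:-1]   # drop the duplicated endpoints
```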
Export
BibTeX
@techreport{Kapoor95,
TITLE = {Dynamic maintenance of 2-d convex hulls and order decomposable problems},
AUTHOR = {Kapoor, Sanjiv},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1995-1-015},
NUMBER = {MPI-I-1995-1-015},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1995},
DATE = {1995},
ABSTRACT = {In this paper, we consider dynamic data structures for order decomposable problems. This class of problems include the convex hull problem, the Voronoi diagram problem, the maxima problem, and intersection of halfspaces. This paper first describes a scheme for maintaining convex hulls in the plane dynamically in $O(\log n)$ amortized time for insertions and $O(\log^2 n)$ time for deletions. $O(n)$ space is used. The scheme improves on the time complexity of the general scheme by Overmars and Van Leeuwen. We then consider the general class of Order Decomposable Problems. We show improved behavior for insertions in the presence of deletions, under some assumptions. The main assumption we make is that the problems are required to be {\em change sensitive}, i.e., updates to the solution of the problem at an insertion can be obtained in time proportional to the changes.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Kapoor, Sanjiv
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Dynamic maintenance of 2-d convex hulls and order decomposable problems :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B6D6-1
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1995-1-015
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1995
%P 22 p.
%X In this paper, we consider dynamic data structures for order
decomposable problems. This class of problems include the convex hull
problem, the Voronoi diagram problem, the maxima problem,
and intersection of halfspaces.
This paper first describes a scheme for maintaining convex hulls in
the plane dynamically in $O(\log n)$ amortized time for insertions and
$O(\log^2 n)$ time for deletions. $O(n)$ space is used.
The scheme improves on the time complexity of the general scheme
by Overmars and Van Leeuwen. We then consider the general class
of Order Decomposable Problems. We show improved behavior for
insertions in the presence of deletions, under some assumptions.
The main assumption we make is that the problems are required
to be {\em change sensitive}, i.e., updates to the solution
of the problem at an insertion can be obtained in time proportional
to the changes.
%B Research Report / Max-Planck-Institut für Informatik
Radix heaps an efficient implementation for priority queues
J. Könemann, C. Schmitz and C. Schwarz
Technical Report, 1995
Abstract
We describe the implementation of a data structure called radix heap,
which is a priority queue with restricted functionality.
Its restrictions are observed by Dijkstra's algorithm, which uses
priority queues to solve the single source shortest path problem
in graphs with nonnegative edge costs.
For a graph with $n$ nodes and $m$ edges and real-valued edge costs,
the best known theoretical bound for the algorithm is $O(m+n\log n)$.
This bound is attained by using Fibonacci heaps to implement
priority queues.
If the edge costs are integers in the range $[0\ldots C]$, then using
our implementation of radix heaps for Dijkstra's algorithm
leads to a running time of $O(m+n\log C)$.
We compare our implementation of radix heaps with an existing implementation
of Fibonacci heaps in the framework of Dijkstra's algorithm. Our
experiments exhibit a tangible advantage for radix heaps over Fibonacci heaps
and confirm the positive influence of small edge costs on the running time.
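Radix heaps exploit the restriction that Dijkstra's algorithm obeys: extracted minima never decrease, and every inserted key exceeds the current minimum by at most the largest edge cost $C$. The simplest structure built on that restriction is the bucket queue (Dial's algorithm), sketched below; the report's radix heap refines it from $C+1$ buckets to $O(\log C)$ bucket groups, which yields the $O(m+n\log C)$ bound. The graph encoding here is an assumption for the sketch.

```python
def dijkstra_bucket(n, adj, src, C):
    """Dijkstra's algorithm with a bucket queue (Dial's algorithm).
    adj[v] is a list of (neighbor, cost) pairs with integer costs in
    [0..C].  Keys in the queue always lie in [cur, cur + C], so C + 1
    cyclically reused buckets suffice."""
    INF = float('inf')
    dist = [INF] * n
    dist[src] = 0
    nb = C + 1
    buckets = [[] for _ in range(nb)]
    buckets[0].append(src)
    pending, cur = 1, 0
    while pending:
        bucket = buckets[cur % nb]
        if not bucket:
            cur += 1                # advance to the next key value
            continue
        v = bucket.pop()
        pending -= 1
        if dist[v] != cur:
            continue                # stale entry: v was improved later
        for w, c in adj[v]:
            if dist[v] + c < dist[w]:
                dist[w] = dist[v] + c
                buckets[dist[w] % nb].append(w)
                pending += 1
    return dist
```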
Export
BibTeX
@techreport{KoenemannSchmitzSchwarz95,
TITLE = {Radix heaps an efficient implementation for priority queues},
AUTHOR = {K{\"o}nemann, Jochen and Schmitz, Christoph and Schwarz, Christian},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1995-1-009},
NUMBER = {MPI-I-1995-1-009},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1995},
DATE = {1995},
ABSTRACT = {We describe the implementation of a data structure called radix heap, which is a priority queue with restricted functionality. Its restrictions are observed by Dijkstra's algorithm, which uses priority queues to solve the single source shortest path problem in graphs with nonnegative edge costs. For a graph with $n$ nodes and $m$ edges and real-valued edge costs, the best known theoretical bound for the algorithm is $O(m+n\log n)$. This bound is attained by using Fibonacci heaps to implement priority queues. If the edge costs are integers in the range $[0\ldots C]$, then using our implementation of radix heaps for Dijkstra's algorithm leads to a running time of $O(m+n\log C)$. We compare our implementation of radix heaps with an existing implementation of Fibonacci heaps in the framework of Dijkstra's algorithm. Our experiments exhibit a tangible advantage for radix heaps over Fibonacci heaps and confirm the positive influence of small edge costs on the running time.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Könemann, Jochen
%A Schmitz, Christoph
%A Schwarz, Christian
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
External Organizations
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Radix heaps an efficient implementation for priority queues :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-A759-3
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1995-1-009
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1995
%P 27 p.
%X We describe the implementation of a data structure called radix heap,
which is a priority queue with restricted functionality.
Its restrictions are observed by Dijkstra's algorithm, which uses
priority queues to solve the single source shortest path problem
in graphs with nonnegative edge costs.
For a graph with $n$ nodes and $m$ edges and real-valued edge costs,
the best known theoretical bound for the algorithm is $O(m+n\log n)$.
This bound is attained by using Fibonacci heaps to implement
priority queues.
If the edge costs are integers in the range $[0\ldots C]$, then using
our implementation of radix heaps for Dijkstra's algorithm
leads to a running time of $O(m+n\log C)$.
We compare our implementation of radix heaps with an existing implementation
of Fibonacci heaps in the framework of Dijkstra's algorithm. Our
experiments exhibit a tangible advantage for radix heaps over Fibonacci heaps
and confirm the positive influence of small edge costs on the running time.
%B Research Report / Max-Planck-Institut für Informatik
An algorithm for the protein docking problem
H.-P. Lenhof
Technical Report, 1995
Abstract
We have implemented a parallel distributed geometric docking
algorithm that uses a new measure for the size of the
contact area of two molecules. The measure is a potential function
that counts the "van der Waals contacts" between the atoms of the
two molecules (the algorithm does not compute
the Lennard-Jones potential).
An integer constant $c_a$ is added to the potential for
each pair of atoms whose
distance is in a certain interval. For each pair whose distance is
smaller than the lower bound of the interval an integer constant
$c_s$ is subtracted from the potential ($c_a < c_s$).
The number of allowed overlapping atom pairs is handled by
a third parameter $N$.
Conformations where more than $N$ atom pairs overlap are
ignored. In our "real world" experiments we have used
a small parameter $N$ that allows small local penetration.
Among the best five dockings found by the algorithm there was
almost always a good (rms) approximation of the real
conformation.
In 42 of 52 test examples
the best conformation with respect to the potential function
was an approximation of the real conformation.
The running time of our sequential algorithm is of the same order as
that of the algorithm of Norel et al. [NLW+].
The parallel version of the algorithm has a reasonable
speedup and modest communication requirements.
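The scoring rule described above can be written down directly. The coordinate-list representation and the parameter values in the test are illustrative assumptions, not the report's actual code or constants.

```python
import math

def contact_potential(mol_a, mol_b, lo, hi, c_a, c_s, N):
    """Count 'van der Waals contacts' between two rigid molecules,
    given as lists of 3-d atom coordinates: +c_a for each atom pair
    whose distance lies in [lo, hi], -c_s for each pair closer than
    lo (an overlap), with c_a < c_s.  A conformation with more than
    N overlapping pairs is rejected (returns None)."""
    score, overlaps = 0, 0
    for a in mol_a:
        for b in mol_b:
            d = math.dist(a, b)
            if d < lo:
                overlaps += 1
                score -= c_s
            elif d <= hi:
                score += c_a
    return None if overlaps > N else score
```

The docking search then ranks candidate conformations by this potential, keeping only those that pass the overlap threshold.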
Export
BibTeX
@techreport{Lenhof95,
TITLE = {An algorithm for the protein docking problem},
AUTHOR = {Lenhof, Hans-Peter},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1995-1-023},
NUMBER = {MPI-I-1995-1-023},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1995},
DATE = {1995},
ABSTRACT = {We have implemented a parallel distributed geometric docking algorithm that uses a new measure for the size of the contact area of two molecules. The measure is a potential function that counts the ``van der Waals contacts'' between the atoms of the two molecules (the algorithm does not compute the Lennard-Jones potential). An integer constant $c_a$ is added to the potential for each pair of atoms whose distance is in a certain interval. For each pair whose distance is smaller than the lower bound of the interval an integer constant $c_s$ is subtracted from the potential ($c_a <c_s$). The number of allowed overlapping atom pairs is handled by a third parameter $N$. Conformations where more than $N$ atom pairs overlap are ignored. In our ``real world'' experiments we have used a small parameter $N$ that allows small local penetration. Among the best five dockings found by the algorithm there was almost always a good (rms) approximation of the real conformation. In 42 of 52 test examples the best conformation with respect to the potential function was an approximation of the real conformation. The running time of our sequential algorithm is in the order of the running time of the algorithm of Norel {\it et al.}[NLW+]. The parallel version of the algorithm has a reasonable speedup and modest communication requirements.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Lenhof, Hans-Peter
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T An algorithm for the protein docking problem :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-A1E1-3
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1995-1-023
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1995
%P 11 p.
%X We have implemented a parallel distributed geometric docking
algorithm that uses a new measure for the size of the
contact area of two molecules. The measure is a potential function
that counts the ``van der Waals contacts'' between the atoms of the
two molecules (the algorithm does not compute
the Lennard-Jones potential).
An integer constant $c_a$ is added to the potential for
each pair of atoms whose
distance is in a certain interval. For each pair whose distance is
smaller than the lower bound of the interval an integer constant
$c_s$ is subtracted from the potential ($c_a <c_s$).
The number of allowed overlapping atom pairs is handled by
a third parameter $N$.
Conformations where more than $N$ atom pairs overlap are
ignored. In our ``real world'' experiments we have used
a small parameter $N$ that allows small local penetration.
Among the best five dockings found by the algorithm there was
almost always a good (rms) approximation of the real
conformation.
In 42 of 52 test examples
the best conformation with respect to the potential function
was an approximation of the real conformation.
The running time of our sequential algorithm is in the order
of the running time of the algorithm of
Norel {\it et al.}[NLW+].
The parallel version of the algorithm has a reasonable
speedup and modest communication requirements.
%B Research Report / Max-Planck-Institut für Informatik
LEDA : A Platform for Combinatorial and Geometric Computing
K. Mehlhorn and S. Näher
Technical Report, 1995
Abstract
LEDA is a library of efficient data types and algorithms in combinatorial and
geometric computing. The main features of the library are its wide collection
of data types and algorithms, the precise and readable specification of these
types, its efficiency, its extendibility, and its ease of use.
Export
BibTeX
@techreport{mehlhorn95z,
TITLE = {{LEDA} : A Platform for Combinatorial and Geometric Computing},
AUTHOR = {Mehlhorn, Kurt and N{\"a}her, Stefan},
LANGUAGE = {eng},
LOCALID = {Local-ID: C1256428004B93B8-2798FB10F98B1B2CC12571F60051D6CF-mehlhorn95z},
INSTITUTION = {Universit{\"a}t Halle},
ADDRESS = {Halle},
YEAR = {1995},
DATE = {1995},
ABSTRACT = {LEDA is a library of efficient data types and algorithms in combinatorial and geometric computing. The main features of the library are its wide collection of data types and algorithms, the precise and readable specification of these types, its efficiency, its extendibility, and its ease of use.},
TYPE = {Report},
}
Endnote
%0 Report
%A Mehlhorn, Kurt
%A Näher, Stefan
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T LEDA : A Platform for Combinatorial and Geometric Computing :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-AC49-1
%F EDOC: 344745
%F OTHER: Local-ID: C1256428004B93B8-2798FB10F98B1B2CC12571F60051D6CF-mehlhorn95z
%Y Universität Halle
%C Halle
%D 1995
%X LEDA is a library of efficient data types and algorithms in combinatorial and
geometric computing. The main features of the library are its wide collection
of data types and algorithms, the precise and readable specification of these
types, its efficiency, its extendibility, and its ease of use.
%B Report
Automatisiertes Zeichnen von Diagrammen
P. Mutzel
Technical Report, 1995a
Abstract
This article was written for the 1995 yearbook of the Max Planck Society. It gives a generally accessible introduction to the automated drawing of diagrams, together with a brief overview of the current research focus in this area at the MPI.
Export
BibTeX
@techreport{Mutzel95a,
TITLE = {{Automatisiertes Zeichnen von Diagrammen}},
AUTHOR = {Mutzel, Petra},
LANGUAGE = {deu},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1995-1-011},
NUMBER = {MPI-I-1995-1-011},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1995},
DATE = {1995},
ABSTRACT = {Dieser Artikel wurde f{\"u}r das Jahrbuch 1995 der Max-Planck-Gesellschaft geschrieben. Er beinhaltet eine allgemein verst{\"a}ndliche Einf{\"u}hrung in das automatisierte Zeichnen von Diagrammen sowie eine kurze {\"U}bersicht {\"u}ber die aktuellen Forschungsschwerpunkte am MPI.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Mutzel, Petra
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Automatisiertes Zeichnen von Diagrammen :
%G deu
%U http://hdl.handle.net/11858/00-001M-0000-0014-A70B-4
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1995-1-011
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1995
%P 5 p.
%X Dieser Artikel wurde für das Jahrbuch 1995 der
Max-Planck-Gesellschaft geschrieben. Er beinhaltet eine
allgemein verständliche Einführung in das automatisierte
Zeichnen von Diagrammen sowie eine kurze Übersicht über die
aktuellen Forschungsschwerpunkte am MPI.
%B Research Report / Max-Planck-Institut für Informatik
A polyhedral approach to planar augmentation and related problems
P. Mutzel
Technical Report, 1995b
Abstract
Given a planar graph $G$, the planar (biconnectivity)
augmentation problem is to add the minimum number of edges to $G$
such that the resulting graph is still planar and biconnected.
Given a nonplanar and biconnected graph, the maximum planar biconnected
subgraph problem consists of removing the minimum number of edges so
that planarity is achieved and biconnectivity is maintained.
Both problems are important in Automatic Graph Drawing.
In [JM95], the minimum planarizing
$k$-augmentation problem was introduced, which links the planarization
step and the augmentation step together. Here, we are given a graph which is
not necessarily planar and not necessarily $k$-connected, and we want to delete
some set of edges $D$ and to add some set of edges $A$ such that $|D|+|A|$
is minimized and the resulting graph is planar, $k$-connected and spanning.
For all three problems, we have given a polyhedral formulation
by defining three different linear objective functions over the same polytope,
namely the $2$-node connected planar spanning subgraph polytope $\2NCPLS(K_n)$.
We investigate the facial structure of this polytope for $k=2$,
which we will make use of in a branch and cut algorithm.
Here, we give the dimension of the planar, biconnected, spanning subgraph
polytope for $G=K_n$ and we show that all facets of the planar subgraph
polytope $\PLS(K_n)$ are also facets of the new polytope $\2NCPLS(K_n)$.
Furthermore, we show that the node-cut constraints arising in the
biconnectivity spanning subgraph polytope are facet-defining inequalities
for $\2NCPLS(K_n)$.
We give first computational results for all three problems, the planar
$2$-augmentation problem, the minimum planarizing $2$-augmentation problem
and the maximum planar biconnected (spanning) subgraph problem.
This is the first time that instances of any of these three problems can
be solved to optimality.
Export
BibTeX
@techreport{Mutzel95b,
TITLE = {A polyhedral approach to planar augmentation and related problems},
AUTHOR = {Mutzel, Petra},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1995-1-014},
NUMBER = {MPI-I-1995-1-014},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1995},
DATE = {1995},
ABSTRACT = {Given a planar graph $G$, the planar (biconnectivity) augmentation problem is to add the minimum number of edges to $G$ such that the resulting graph is still planar and biconnected. Given a nonplanar and biconnected graph, the maximum planar biconnected subgraph problem consists of removing the minimum number of edges so that planarity is achieved and biconnectivity is maintained. Both problems are important in Automatic Graph Drawing. In [JM95], the minimum planarizing $k$-augmentation problem has been introduced, that links the planarization step and the augmentation step together. Here, we are given a graph which is not necessarily planar and not necessarily $k$-connected, and we want to delete some set of edges $D$ and to add some set of edges $A$ such that $|D|+|A|$ is minimized and the resulting graph is planar, $k$-connected and spanning. For all three problems, we have given a polyhedral formulation by defining three different linear objective functions over the same polytope, namely the $2$-node connected planar spanning subgraph polytope $\2NCPLS(K_n)$. We investigate the facial structure of this polytope for $k=2$, which we will make use of in a branch and cut algorithm. Here, we give the dimension of the planar, biconnected, spanning subgraph polytope for $G=K_n$ and we show that all facets of the planar subgraph polytope $\PLS(K_n)$ are also facets of the new polytope $\2NCPLS(K_n)$. Furthermore, we show that the node-cut constraints arising in the biconnectivity spanning subgraph polytope, are facet-defining inequalities for $\2NCPLS(K_n)$. We give first computational results for all three problems, the planar $2$-augmentation problem, the minimum planarizing $2$-augmentation problem and the maximum planar biconnected (spanning) subgraph problem. This is the first time that instances of any of these three problems can be solved to optimality.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Mutzel, Petra
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T A polyhedral approach to planar augmentation and related problems :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-A700-9
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1995-1-014
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1995
%P 14 p.
%X Given a planar graph $G$, the planar (biconnectivity)
augmentation problem is to add the minimum number of edges to $G$
such that the resulting graph is still planar and biconnected.
Given a nonplanar and biconnected graph, the maximum planar biconnected
subgraph problem consists of removing the minimum number of edges so
that planarity is achieved and biconnectivity is maintained.
Both problems are important in Automatic Graph Drawing.
In [JM95], the minimum planarizing
$k$-augmentation problem has been introduced, that links the planarization
step and the augmentation step together. Here, we are given a graph which is
not necessarily planar and not necessarily $k$-connected, and we want to delete
some set of edges $D$ and to add some set of edges $A$ such that $|D|+|A|$
is minimized and the resulting graph is planar, $k$-connected and spanning.
For all three problems, we have given a polyhedral formulation
by defining three different linear objective functions over the same polytope,
namely the $2$-node connected planar spanning subgraph polytope $\2NCPLS(K_n)$.
We investigate the facial structure of this polytope for $k=2$,
which we will make use of in a branch and cut algorithm.
Here, we give the dimension of the planar, biconnected, spanning subgraph
polytope for $G=K_n$ and we show that all facets of the planar subgraph
polytope $\PLS(K_n)$ are also facets of the new polytope $\2NCPLS(K_n)$.
Furthermore, we show that the node-cut constraints arising in the
biconnectivity spanning subgraph polytope, are facet-defining inequalities
for $\2NCPLS(K_n)$.
We give first computational results for all three problems, the planar
$2$-augmentation problem, the minimum planarizing $2$-augmentation problem
and the maximum planar biconnected (spanning) subgraph problem.
This is the first time that instances of any of these three problems can
be solved to optimality.
%B Research Report / Max-Planck-Institut für Informatik
LEDA user manual (version R 3.2)
S. Näher and C. Uhrig
Technical Report, 1995
Abstract
No abstract available.
Export
BibTeX
@techreport{NaeherUhrig95,
TITLE = {{LEDA} user manual (version R 3.2)},
AUTHOR = {N{\"a}her, Stefan and Uhrig, Christian},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1995-1-002},
NUMBER = {MPI-I-1995-1-002},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1995},
DATE = {1995},
ABSTRACT = {No abstract available.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Näher, Stefan
%A Uhrig, Christian
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T LEDA user manual (version R 3.2) :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B7C6-E
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1995-1-002
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1995
%P 184 p.
%X No abstract available.
%B Research Report / Max-Planck-Institut für Informatik
Wait-free consensus in “in-phase” multiprocessor systems
M. Papatriantafilou and P. Tsigas
Technical Report, 1995
Abstract
In the consensus problem in a system with $n$ processes, each process
starts with a private input value and runs until it irrevocably chooses a
decision value, which was the input value of some process in the system;
moreover, all processes have to decide on the same value.
This work deals with the problem of wait-free consensus (fully resilient
to processor crash and napping failures) among $n$ processes in an
"in-phase" multiprocessor system.
It proves that the problem is solvable in this system by presenting a
protocol which ensures that a process reaches a decision within at most
$n(n-3)/2 + 3$ of its own steps in the worst case,
or within $n$ steps if no process fails.
Export
BibTeX
@techreport{PapatriantafilouTsigas95,
TITLE = {Wait-free consensus in "in-phase" multiprocessor systems},
AUTHOR = {Papatriantafilou, Marina and Tsigas, Philippas},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1995-1-016},
NUMBER = {MPI-I-1995-1-016},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1995},
DATE = {1995},
ABSTRACT = {In the {\em consensus} problem in a system with $n$ processes, each process starts with a private input value and runs until it chooses irrevocably a decision value, which was the input value of some process of the system; moreover, all processes have to decide on the same value. This work deals with the problem of {\em wait-free} ---fully resilient to processor crash and napping failures--- consensus of $n$ processes in an ``in-phase" multiprocessor system. It proves the existence of a solution to the problem in this system by presenting a protocol which ensures that a process will reach decision within at most $n(n-3)/2 +3$ steps of its own in the worst case, or within $n$ steps if no process fails.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Papatriantafilou, Marina
%A Tsigas, Philippas
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Wait-free consensus in "in-phase" multiprocessor systems :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-A44D-E
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1995-1-016
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1995
%P 12 p.
%X In the {\em consensus} problem in a system with $n$ processes, each process
starts with a private input value and runs until it chooses irrevocably a
decision value, which was the input value of some process of the system;
moreover, all processes have to decide on the same value.
This work deals with the problem of {\em wait-free} ---fully resilient
to processor crash and napping failures--- consensus
of $n$ processes in an ``in-phase" multiprocessor system.
It proves the existence of a solution to the problem in this system by
presenting a protocol which ensures that a process will
reach decision within at most $n(n-3)/2 +3$ steps of its own in the worst case,
or within $n$ steps if no process fails.
%B Research Report / Max-Planck-Institut für Informatik
Interactive Proof Systems
J. Radhakrishnan and S. Saluja
Technical Report, 1995
Abstract
This report is a compilation of lecture notes prepared during the course
"Interactive Proof Systems" given by the authors at the Tata Institute of
Fundamental Research, Bombay. The notes were also used for a short course
of the same name given by the second author at the MPI, Saarbrücken. The
objective of the course was to study recent developments in complexity
theory concerning interactive proof systems, which led to some surprising
consequences for the nonapproximability of NP-hard problems.
We start the course with an introduction to complexity theory and cover
some classical results on circuit complexity, randomization, and counting
classes, notions which either form part of the definitions of interactive
proof systems or are used in proving the results above.
We define Arthur-Merlin games and interactive proof systems, which are
equivalent formulations of the notion of interactive proofs, and show
their equivalence to each other and to the complexity class PSPACE.
We introduce probabilistically checkable proofs, which are special forms
of interactive proofs, and show through a sequence of intermediate
results that the class NP has probabilistically checkable proofs of a
very special form and very small complexity. From this we conclude that
several NP-hard problems are not even weakly approximable in polynomial
time unless P = NP.
Export
BibTeX
@techreport{RadhakrishnanSaluja95,
TITLE = {Interactive Proof Systems},
AUTHOR = {Radhakrishnan, Jaikumar and Saluja, Sanjeev},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1995-1-007},
NUMBER = {MPI-I-1995-1-007},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1995},
DATE = {1995},
ABSTRACT = {The report is a compilation of lecture notes that were prepared during the course ``Interactive Proof Systems'' given by the authors at Tata Institute of Fundamental Research, Bombay. These notes were also used for a short course ``Interactive Proof Systems'' given by the second author at MPI, Saarbruecken. The objective of the course was to study the recent developments in complexity theory about interactive proof systems, which led to some surprising consequences on nonapproximability of NP hard problems. We start the course with an introduction to complexity theory and covered some classical results related with circuit complexity, randomizations and counting classes, notions which are either part of the definitions of interactive proof systems or are used in proving the above results. We define arthur merlin games and interactive proof systems, which are equivalent formulations of the notion of interactive proofs and show their equivalence to each other and to the complexity class PSPACE. We introduce probabilistically checkable proofs, which are special forms of interactive proofs and show through sequence of intermediate results that the class NP has probabilistically checkable proofs of very special form and very small complexity. Using this we conclude that several NP hard problems are not even weakly approximable in polynomial time unless P = NP.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Radhakrishnan, Jaikumar
%A Saluja, Sanjeev
%+ External Organizations
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Interactive Proof Systems :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-A75C-E
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1995-1-007
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1995
%P 121 p.
%X The report is a compilation of lecture notes that were prepared
during the course ``Interactive Proof Systems'' given by the authors
at the Tata Institute of Fundamental Research, Bombay. These notes
were also used for a short course ``Interactive Proof Systems'' given
by the second author at MPI, Saarbruecken. The objective of the course
was to study the recent developments in complexity theory concerning
interactive proof systems, which led to some surprising consequences
on the nonapproximability of NP-hard problems.
We start the course with an introduction to complexity theory and
cover some classical results related to circuit complexity,
randomization and counting classes, notions which are either part of
the definitions of interactive proof systems or are used in proving
the above results.
We define Arthur-Merlin games and interactive proof systems, which
are equivalent formulations of the notion of interactive proofs, and
show their equivalence to each other and to the complexity class
PSPACE.
We introduce probabilistically checkable proofs, which are special
forms of interactive proofs, and show through a sequence of
intermediate results that the class NP has probabilistically
checkable proofs of a very special form and very small complexity.
Using this we conclude that several NP-hard problems are not even
weakly approximable in polynomial time unless P = NP.
%B Research Report / Max-Planck-Institut für Informatik
On the average running time of odd-even merge sort
C. Rüb
Technical Report, 1995
C. Rüb
Technical Report, 1995
Abstract
This paper is concerned with the average running time of Batcher's
odd-even merge sort when implemented on a collection of processors.
We consider the case where $n$, the size of the input,
is an arbitrary multiple of the number $p$ of processors used.
We show that Batcher's odd-even merge (for two sorted lists of length $n$ each)
can be implemented to run in time $O((n/p)(\log (2+p^2/n)))$ on the average,
and that odd-even merge sort can be implemented to run in time
$O((n/p)(\log n+\log p\log (2+p^2/n)))$ on the average.
In the case of merging (sorting), the average is taken over all possible outcomes
of the merging (all possible permutations of $n$ elements).
That means that odd-even merge and odd-even merge sort have an optimal
average running time if $n\geq p^2$. The constants involved are also
quite small.
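The merge analyzed above is Batcher's odd-even merge network. As a point of reference only, here is a minimal sequential Python sketch of the network's compare-exchange structure for power-of-two input lengths; it is not the multiple-items-per-processor implementation whose average running time the report analyzes:

```python
def odd_even_merge(x):
    """Batcher's odd-even merge: x has power-of-two length and
    each of its two halves is already sorted; returns x sorted."""
    n = len(x)
    if n <= 1:
        return list(x)
    if n == 2:
        return [min(x), max(x)]
    # Recursively merge the even- and odd-indexed subsequences
    # (each again consists of two sorted halves).
    even = odd_even_merge(x[0::2])
    odd = odd_even_merge(x[1::2])
    merged = [None] * n
    merged[0::2] = even
    merged[1::2] = odd
    # Final round of compare-exchanges on adjacent pairs (1,2), (3,4), ...
    for i in range(1, n - 1, 2):
        if merged[i] > merged[i + 1]:
            merged[i], merged[i + 1] = merged[i + 1], merged[i]
    return merged


def odd_even_merge_sort(x):
    """Odd-even merge sort for power-of-two length lists."""
    n = len(x)
    if n <= 1:
        return list(x)
    left = odd_even_merge_sort(x[: n // 2])
    right = odd_even_merge_sort(x[n // 2 :])
    return odd_even_merge(left + right)
```

Because the compare-exchange pattern is data-independent, the same network maps directly onto $p$ processors, which is the setting the report's average-case bounds address.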
Export
BibTeX
@techreport{Rueb95,
TITLE = {On the average running time of odd-even merge sort},
AUTHOR = {R{\"u}b, Christine},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1995-1-010},
NUMBER = {MPI-I-1995-1-010},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1995},
DATE = {1995},
ABSTRACT = {This paper is concerned with the average running time of Batcher's odd-even merge sort when implemented on a collection of processors. We consider the case where $n$, the size of the input, is an arbitrary multiple of the number $p$ of processors used. We show that Batcher's odd-even merge (for two sorted lists of length $n$ each) can be implemented to run in time $O((n/p)(\log (2+p^2/n)))$ on the average, and that odd-even merge sort can be implemented to run in time $O((n/p)(\log n+\log p\log (2+p^2/n)))$ on the average. In the case of merging (sorting), the average is taken over all possible outcomes of the merging (all possible permutations of $n$ elements). That means that odd-even merge and odd-even merge sort have an optimal average running time if $n\geq p^2$. The constants involved are also quite small.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Rüb, Christine
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T On the average running time of odd-even merge sort :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B6D4-5
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1995-1-010
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1995
%P 16 p.
%X This paper is concerned with the average running time of Batcher's
odd-even merge sort when implemented on a collection of processors.
We consider the case where $n$, the size of the input,
is an arbitrary multiple of the number $p$ of processors used.
We show that Batcher's odd-even merge (for two sorted lists of length $n$ each)
can be implemented to run in time $O((n/p)(\log (2+p^2/n)))$ on the average,
and that odd-even merge sort can be implemented to run in time
$O((n/p)(\log n+\log p\log (2+p^2/n)))$ on the average.
In the case of merging (sorting), the average is taken over all possible outcomes
of the merging (all possible permutations of $n$ elements).
That means that odd-even merge and odd-even merge sort have an optimal
average running time if $n\geq p^2$. The constants involved are also
quite small.
%B Research Report / Max-Planck-Institut für Informatik
Sample sort on meshes
J. F. Sibeyn
Technical Report, 1995a
J. F. Sibeyn
Technical Report, 1995a
Abstract
This paper provides an overview of lower and upper bounds for
mesh-connected processor networks. Most attention goes to
routing and sorting problems, but other problems are mentioned as
well. Results from 1977 to 1995 are covered. We provide numerous
results, references and open problems. The text is completed with
an index. This is a worked-out version of the author's contribution
to a joint paper with Grammatikakis, Hsu and Kraetzl on multicomputer
routing, submitted to JPDC.
Export
BibTeX
@techreport{Sibeyn95a,
TITLE = {Sample sort on meshes},
AUTHOR = {Sibeyn, Jop Frederic},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1995-1-012},
NUMBER = {MPI-I-1995-1-012},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1995},
DATE = {1995},
ABSTRACT = {This paper provides an overview of lower and upper bounds for mesh-connected processor networks. Most attention goes to routing and sorting problems, but other problems are mentioned as well. Results from 1977 to 1995 are covered. We provide numerous results, references and open problems. The text is completed with an index. This is a worked-out version of the author's contribution to a joint paper with Grammatikakis, Hsu and Kraetzl on multicomputer routing, submitted to JPDC.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Sibeyn, Jop Frederic
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Sample sort on meshes :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-A708-A
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1995-1-012
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1995
%P 14 p.
%X This paper provides an overview of lower and upper bounds for
mesh-connected processor networks. Most attention goes to
routing and sorting problems, but other problems are mentioned as
well. Results from 1977 to 1995 are covered. We provide numerous
results, references and open problems. The text is completed with
an index. This is a worked-out version of the author's contribution
to a joint paper with Grammatikakis, Hsu and Kraetzl on multicomputer
routing, submitted to JPDC.
%B Research Report / Max-Planck-Institut für Informatik
Overview of mesh results
J. F. Sibeyn
Technical Report, 1995b
J. F. Sibeyn
Technical Report, 1995b
Abstract
This paper provides an overview of lower and upper bounds for
algorithms for mesh-connected processor networks. Most of our
attention goes to routing and sorting problems, but other problems
are mentioned as well. Results from 1977 to 1995 are covered. We
provide numerous results, references and open problems. The text
is completed with an index.
This is a worked-out version of the author's contribution
to a joint paper with Miltos D. Grammatikakis, D. Frank Hsu and
Miro Kraetzl on multicomputer routing, submitted to the Journal
of Parallel and Distributed Computing.
Export
BibTeX
@techreport{Sibeyn95b,
TITLE = {Overview of mesh results},
AUTHOR = {Sibeyn, Jop Frederic},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1995-1-018},
NUMBER = {MPI-I-1995-1-018},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1995},
DATE = {1995},
ABSTRACT = {This paper provides an overview of lower and upper bounds for algorithms for mesh-connected processor networks. Most of our attention goes to routing and sorting problems, but other problems are mentioned as well. Results from 1977 to 1995 are covered. We provide numerous results, references and open problems. The text is completed with an index. This is a worked-out version of the author's contribution to a joint paper with Miltos D. Grammatikakis, D. Frank Hsu and Miro Kraetzl on multicomputer routing, submitted to the Journal of Parallel and Distributed Computing.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Sibeyn, Jop Frederic
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Overview of mesh results :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-A43A-7
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1995-1-018
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1995
%P 22 p.
%X This paper provides an overview of lower and upper bounds for
algorithms for mesh-connected processor networks. Most of our
attention goes to routing and sorting problems, but other problems
are mentioned as well. Results from 1977 to 1995 are covered. We
provide numerous results, references and open problems. The text
is completed with an index.
This is a worked-out version of the author's contribution
to a joint paper with Miltos D. Grammatikakis, D. Frank Hsu and
Miro Kraetzl on multicomputer routing, submitted to the Journal
of Parallel and Distributed Computing.
%B Research Report / Max-Planck-Institut für Informatik
Computing a largest empty anchored cylinder, and related problems
M. Smid, C. Thiel, F. Follert, E. Schömer and J. Sellen
Technical Report, 1995
M. Smid, C. Thiel, F. Follert, E. Schömer and J. Sellen
Technical Report, 1995
Abstract
Let $S$ be a set of $n$ points in $R^d$, and let each point
$p$ of $S$ have a positive weight $w(p)$. We consider the
problem of computing a ray $R$ emanating from the origin
(resp.\ a line $l$ through the origin) such that
$\min_{p\in S} w(p) \cdot d(p,R)$ (resp.
$\min_{p\in S} w(p) \cdot d(p,l)$) is maximal. If all weights
are one, this corresponds to computing a silo emanating
from the origin (resp.\ a cylinder whose axis contains the
origin) that does not contain any point of $S$ and whose
radius is maximal.
For $d=2$, we show how to solve these problems in $O(n \log n)$
time, which is optimal in the algebraic computation tree
model. For $d=3$, we give algorithms that are based on the
parametric search technique and run in $O(n \log^5 n)$ time.
The previous best known algorithms for these three-dimensional
problems had almost quadratic running time.
In the final part of the paper, we consider some related problems.
Export
BibTeX
@techreport{SmidThielFollertSchoemerSellen95,
TITLE = {Computing a largest empty anchored cylinder, and related problems},
AUTHOR = {Smid, Michiel and Thiel, Christian and Follert, F. and Sch{\"o}mer, Elmar and Sellen, J.},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1995-1-001},
NUMBER = {MPI-I-1995-1-001},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1995},
DATE = {1995},
ABSTRACT = {Let $S$ be a set of $n$ points in $R^d$, and let each point $p$ of $S$ have a positive weight $w(p)$. We consider the problem of computing a ray $R$ emanating from the origin (resp.\ a line $l$ through the origin) such that $\min_{p\in S} w(p) \cdot d(p,R)$ (resp. $\min_{p\in S} w(p) \cdot d(p,l)$) is maximal. If all weights are one, this corresponds to computing a silo emanating from the origin (resp.\ a cylinder whose axis contains the origin) that does not contain any point of $S$ and whose radius is maximal. For $d=2$, we show how to solve these problems in $O(n \log n)$ time, which is optimal in the algebraic computation tree model. For $d=3$, we give algorithms that are based on the parametric search technique and run in $O(n \log^5 n)$ time. The previous best known algorithms for these three-dimensional problems had almost quadratic running time. In the final part of the paper, we consider some related problems.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Smid, Michiel
%A Thiel, Christian
%A Follert, F.
%A Schömer, Elmar
%A Sellen, J.
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
External Organizations
Algorithms and Complexity, MPI for Informatics, Max Planck Society
External Organizations
%T Computing a largest empty anchored cylinder, and related problems :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-A83F-8
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1995-1-001
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1995
%P 17 p.
%X Let $S$ be a set of $n$ points in $R^d$, and let each point
$p$ of $S$ have a positive weight $w(p)$. We consider the
problem of computing a ray $R$ emanating from the origin
(resp.\ a line $l$ through the origin) such that
$\min_{p\in S} w(p) \cdot d(p,R)$ (resp.
$\min_{p\in S} w(p) \cdot d(p,l)$) is maximal. If all weights
are one, this corresponds to computing a silo emanating
from the origin (resp.\ a cylinder whose axis contains the
origin) that does not contain any point of $S$ and whose
radius is maximal.
For $d=2$, we show how to solve these problems in $O(n \log n)$
time, which is optimal in the algebraic computation tree
model. For $d=3$, we give algorithms that are based on the
parametric search technique and run in $O(n \log^5 n)$ time.
The previous best known algorithms for these three-dimensional
problems had almost quadratic running time.
In the final part of the paper, we consider some related problems.
%B Research Report / Max-Planck-Institut für Informatik
Closest point problems in computational geometry
M. Smid
Technical Report, 1995
M. Smid
Technical Report, 1995
Abstract
This is the preliminary version of a chapter that will
appear in the {\em Handbook on Computational Geometry},
edited by J.-R.\ Sack and J.\ Urrutia.
A comprehensive overview is given of algorithms and data
structures for proximity problems on point sets in $\IR^D$.
In particular, the closest pair problem, the exact and
approximate post-office problem, and the problem of
constructing spanners are discussed in detail.
Export
BibTeX
@techreport{Smid95,
TITLE = {Closest point problems in computational geometry},
AUTHOR = {Smid, Michiel},
LANGUAGE = {eng},
NUMBER = {MPI-I-1995-1-026},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1995},
DATE = {1995},
ABSTRACT = {This is the preliminary version of a chapter that will appear in the {\em Handbook on Computational Geometry}, edited by J.-R.\ Sack and J.\ Urrutia. A comprehensive overview is given of algorithms and data structures for proximity problems on point sets in $\IR^D$. In particular, the closest pair problem, the exact and approximate post-office problem, and the problem of constructing spanners are discussed in detail.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Smid, Michiel
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Closest point problems in computational geometry :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-A1D8-A
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1995
%P 62 p.
%X This is the preliminary version of a chapter that will
appear in the {\em Handbook on Computational Geometry},
edited by J.-R.\ Sack and J.\ Urrutia.
A comprehensive overview is given of algorithms and data
structures for proximity problems on point sets in $\IR^D$.
In particular, the closest pair problem, the exact and
approximate post-office problem, and the problem of
constructing spanners are discussed in detail.
%B Research Report / Max-Planck-Institut für Informatik
1994
New on-line algorithms for the page replication problem
S. Albers and J. Koga
Technical Report, 1994
S. Albers and J. Koga
Technical Report, 1994
Abstract
The page replication problem arises in the memory management of large
multiprocessor systems. Given a network of processors, each of which
has its local memory, the problem consists of deciding
which local memories should contain copies of pages of data so that
a sequence of memory accesses can be accomplished efficiently. We present
new competitive on-line algorithms for the page replication problem and
concentrate on important network topologies for which algorithms with
a constant competitive factor can be given. We develop the first
optimal randomized on-line replication algorithm for trees and
uniform networks; its competitive factor is
approximately 1.58. Furthermore we consider on-line
replication algorithms for rings and present general techniques that
transform large classes of $c$-competitive algorithms for trees into
$2c$-competitive algorithms for rings. As a result we obtain a randomized
on-line algorithm for rings that is 3.16-competitive. We also derive
two 4-competitive on-line algorithms for rings which are either deterministic
or memoryless. All our algorithms improve the previously best
competitive factors for the respective topologies.
Export
BibTeX
@techreport{AlbersKoga94,
TITLE = {New on-line algorithms for the page replication problem},
AUTHOR = {Albers, Susanne and Koga, Joachim},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/94-106},
NUMBER = {MPI-I-94-106},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1994},
DATE = {1994},
ABSTRACT = {The page replication problem arises in the memory management of large multiprocessor systems. Given a network of processors, each of which has its local memory, the problem consists of deciding which local memories should contain copies of pages of data so that a sequence of memory accesses can be accomplished efficiently. We present new competitive on-line algorithms for the page replication problem and concentrate on important network topologies for which algorithms with a constant competitive factor can be given. We develop the first optimal randomized on-line replication algorithm for trees and uniform networks; its competitive factor is approximately 1.58. Furthermore we consider on-line replication algorithms for rings and present general techniques that transform large classes of $c$-competitive algorithms for trees into $2c$-competitive algorithms for rings. As a result we obtain a randomized on-line algorithm for rings that is 3.16-competitive. We also derive two 4-competitive on-line algorithms for rings which are either deterministic or memoryless. All our algorithms improve the previously best competitive factors for the respective topologies.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Albers, Susanne
%A Koga, Joachim
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T New on-line algorithms for the page replication problem :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B788-B
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/94-106
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1994
%P 18 p.
%X The page replication problem arises in the memory management of large
multiprocessor systems. Given a network of processors, each of which
has its local memory, the problem consists of deciding
which local memories should contain copies of pages of data so that
a sequence of memory accesses can be accomplished efficiently. We present
new competitive on-line algorithms for the page replication problem and
concentrate on important network topologies for which algorithms with
a constant competitive factor can be given. We develop the first
optimal randomized on-line replication algorithm for trees and
uniform networks; its competitive factor is
approximately 1.58. Furthermore we consider on-line
replication algorithms for rings and present general techniques that
transform large classes of $c$-competitive algorithms for trees into
$2c$-competitive algorithms for rings. As a result we obtain a randomized
on-line algorithm for rings that is 3.16-competitive. We also derive
two 4-competitive on-line algorithms for rings which are either deterministic
or memoryless. All our algorithms improve the previously best
competitive factors for the respective topologies.
%B Research Report / Max-Planck-Institut für Informatik
Improved parallel integer sorting without concurrent writing
S. Albers and T. Hagerup
Technical Report, 1994
S. Albers and T. Hagerup
Technical Report, 1994
Abstract
We show that $n$ integers in the range $1 \twodots n$ can be stably
sorted on an EREW PRAM using $O(t)$ time and
$O(n(\sqrt{\log n\log\log n}+(\log n)^2/t))$ operations, for
arbitrary given $t\ge\log n\log\log n$, and on a CREW PRAM using
$O(t)$ time and $O(n(\sqrt{\log n}+\log n/2^{t/\log n}))$ operations,
for arbitrary given $t\ge\log n$.
In addition, we are able to sort $n$ arbitrary integers on a
randomized CREW PRAM within the same resource bounds with high
probability. In each case our algorithm is a factor of almost
$\Theta(\sqrt{\log n})$ closer to optimality than all previous
algorithms for the stated problem in the stated model, and our third
result matches the operation count of the best known sequential
algorithm.
We also show that $n$ integers in the range $1 \twodots m$ can be
sorted in $O((\log n)^2)$ time with $O(n)$ operations on an EREW PRAM
using a nonstandard word length of $O(\log n \log\log n \log m)$
bits, thereby greatly improving the upper bound on the word length
necessary to sort integers with a linear time-processor product,
even sequentially.
Our algorithms were inspired by, and in one case directly use, the
fusion trees of Fredman and Willard.
Export
BibTeX
@techreport{AlbersHagerup94,
TITLE = {Improved parallel integer sorting without concurrent writing},
AUTHOR = {Albers, Susanne and Hagerup, Torben},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/94-137},
NUMBER = {MPI-I-94-137},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1994},
DATE = {1994},
ABSTRACT = {We show that $n$ integers in the range $1 \twodots n$ can be stably sorted on an EREW PRAM using $O(t)$ time and $O(n(\sqrt{\log n\log\log n}+(\log n)^2/t))$ operations, for arbitrary given $t\ge\log n\log\log n$, and on a CREW PRAM using $O(t)$ time and $O(n(\sqrt{\log n}+\log n/2^{t/\log n}))$ operations, for arbitrary given $t\ge\log n$. In addition, we are able to sort $n$ arbitrary integers on a randomized CREW PRAM within the same resource bounds with high probability. In each case our algorithm is a factor of almost $\Theta(\sqrt{\log n})$ closer to optimality than all previous algorithms for the stated problem in the stated model, and our third result matches the operation count of the best known sequential algorithm. We also show that $n$ integers in the range $1 \twodots m$ can be sorted in $O((\log n)^2)$ time with $O(n)$ operations on an EREW PRAM using a nonstandard word length of $O(\log n \log\log n \log m)$ bits, thereby greatly improving the upper bound on the word length necessary to sort integers with a linear time-processor product, even sequentially. Our algorithms were inspired by, and in one case directly use, the fusion trees of Fredman and Willard.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Albers, Susanne
%A Hagerup, Torben
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Improved parallel integer sorting without concurrent writing :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B799-5
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/94-137
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1994
%X We show that $n$ integers in the range $1 \twodots n$ can be stably
sorted on an EREW PRAM using $O(t)$ time and
$O(n(\sqrt{\log n\log\log n}+(\log n)^2/t))$ operations, for
arbitrary given $t\ge\log n\log\log n$, and on a CREW PRAM using
$O(t)$ time and $O(n(\sqrt{\log n}+\log n/2^{t/\log n}))$ operations,
for arbitrary given $t\ge\log n$.
In addition, we are able to sort $n$ arbitrary integers on a
randomized CREW PRAM within the same resource bounds with high
probability. In each case our algorithm is a factor of almost
$\Theta(\sqrt{\log n})$ closer to optimality than all previous
algorithms for the stated problem in the stated model, and our third
result matches the operation count of the best known sequential
algorithm.
We also show that $n$ integers in the range $1 \twodots m$ can be
sorted in $O((\log n)^2)$ time with $O(n)$ operations on an EREW PRAM
using a nonstandard word length of $O(\log n \log\log n \log m)$
bits, thereby greatly improving the upper bound on the word length
necessary to sort integers with a linear time-processor product,
even sequentially.
Our algorithms were inspired by, and in one case directly use, the
fusion trees of Fredman and Willard.
%B Research Report / Max-Planck-Institut für Informatik
On the parallel complexity of degree sequence problems
S. Arikati
Technical Report, 1994
S. Arikati
Technical Report, 1994
Abstract
We describe a robust and efficient implementation of the
Bentley-Ottmann sweep line algorithm based on the LEDA library of
efficient data types and algorithms. The program computes the planar
graph $G$ induced by a set $S$ of straight line segments in the
plane. The nodes of $G$ are all endpoints and all proper intersection
points of segments in $S$. The edges of $G$ are the maximal
relatively open subsegments of segments in $S$ that contain no node
of $G$. All edges are directed from left to right or upwards.
The algorithm runs in time $O((n+s)\log n)$ where $n$ is the number
of segments and $s$ is the number of vertices of the graph $G$. The
implementation uses exact arithmetic for the reliable realization of
the geometric primitives and it uses floating point filters to reduce
the overhead of exact arithmetic.
Export
BibTeX
@techreport{Arikati94MPII94-162,
TITLE = {On the parallel complexity of degree sequence problems},
AUTHOR = {Arikati, Srinivasa},
LANGUAGE = {eng},
NUMBER = {MPI-I-1994-162},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1994},
DATE = {1994},
ABSTRACT = {We describe a robust and efficient implementation of the Bentley-Ottmann sweep line algorithm based on the LEDA library of efficient data types and algorithms. The program computes the planar graph $G$ induced by a set $S$ of straight line segments in the plane. The nodes of $G$ are all endpoints and all proper intersection points of segments in $S$. The edges of $G$ are the maximal relatively open subsegments of segments in $S$ that contain no node of $G$. All edges are directed from left to right or upwards. The algorithm runs in time $O((n+s)\log n)$ where $n$ is the number of segments and $s$ is the number of vertices of the graph $G$. The implementation uses exact arithmetic for the reliable realization of the geometric primitives and it uses floating point filters to reduce the overhead of exact arithmetic.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Arikati, Srinivasa
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T On the parallel complexity of degree sequence problems :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B695-6
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1994
%P 12 p.
%X We describe a robust and efficient implementation of the
Bentley-Ottmann sweep line algorithm based on the LEDA library of
efficient data types and algorithms. The program computes the planar
graph $G$ induced by a set $S$ of straight line segments in the
plane. The nodes of $G$ are all endpoints and all proper intersection
points of segments in $S$. The edges of $G$ are the maximal
relatively open subsegments of segments in $S$ that contain no node
of $G$. All edges are directed from left to right or upwards.
The algorithm runs in time $O((n+s)\log n)$ where $n$ is the number
of segments and $s$ is the number of vertices of the graph $G$. The
implementation uses exact arithmetic for the reliable realization of
the geometric primitives and it uses floating point filters to reduce
the overhead of exact arithmetic.
%B Research Report / Max-Planck-Institut für Informatik
Realizing degree sequences in parallel
S. Arikati and A. Maheshwari
Technical Report, 1994
S. Arikati and A. Maheshwari
Technical Report, 1994
Abstract
A sequence $d$ of integers is a degree sequence if there exists
a (simple) graph $G$ such that the components of $d$ are equal to
the degrees of the vertices of $G$. The graph $G$ is said to be a
realization of $d$. We provide an efficient
parallel algorithm to realize $d$.
Before our result, it was not known if the problem of
realizing $d$ is in $NC$.
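For concreteness, the classical sequential way to realize a degree sequence is the Havel-Hakimi greedy algorithm, sketched below in Python. This is only background for the problem statement; the report's contribution is a parallel (NC) algorithm, which this sketch does not reproduce:

```python
def havel_hakimi(degrees):
    """Return an edge list of a simple graph realizing `degrees`,
    or None if the sequence is not graphical (Havel-Hakimi greedy)."""
    nodes = list(enumerate(degrees))  # (vertex label, residual degree)
    edges = []
    while nodes:
        # Pick the vertex with the largest residual degree.
        nodes.sort(key=lambda t: -t[1])
        v, d = nodes.pop(0)
        if d == 0:
            break  # all remaining residual degrees are zero
        if d > len(nodes):
            return None  # not enough vertices left to connect to
        # Connect v to the d vertices of next-largest residual degree.
        for i in range(d):
            u, du = nodes[i]
            if du == 0:
                return None
            edges.append((v, u))
            nodes[i] = (u, du - 1)
    return edges
```

For example, `havel_hakimi([3, 3, 2, 2, 2])` yields six edges whose endpoint counts match the sequence, while `havel_hakimi([3, 3, 3, 1])` returns `None`.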
Export
BibTeX
@techreport{ArikatiMaheshwari94,
TITLE = {Realizing degree sequences in parallel},
AUTHOR = {Arikati, Srinivasa and Maheshwari, Anil},
LANGUAGE = {eng},
NUMBER = {MPI-I-1994-122},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1994},
DATE = {1994},
ABSTRACT = {A sequence $d$ of integers is a degree sequence if there exists a (simple) graph $G$ such that the components of $d$ are equal to the degrees of the vertices of $G$. The graph $G$ is said to be a realization of $d$. We provide an efficient parallel algorithm to realize $d$. Before our result, it was not known if the problem of realizing $d$ is in $NC$.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Arikati, Srinivasa
%A Maheshwari, Anil
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Realizing degree sequences in parallel :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B69C-7
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1994
%P 27 p.
%X A sequence $d$ of integers is a degree sequence if there exists
a (simple) graph $G$ such that the components of $d$ are equal to
the degrees of the vertices of $G$. The graph $G$ is said to be a
realization of $d$. We provide an efficient
parallel algorithm to realize $d$.
Before our result, it was not known if the problem of
realizing $d$ is in $NC$.
%B Research Report / Max-Planck-Institut für Informatik
Efficient computation of compact representations of sparse graphs
S. R. Arikati, A. Maheshwari and C. Zaroliagis
Technical Report, 1994
S. R. Arikati, A. Maheshwari and C. Zaroliagis
Technical Report, 1994
Abstract
Sparse graphs (e.g.~trees, planar graphs, relative neighborhood graphs)
are among the commonly used data-structures in computational geometry.
The problem of finding a compact representation for sparse
graphs such that vertex adjacency can be tested quickly is fundamental to
several geometric and graph algorithms.
We provide here simple and optimal algorithms for constructing
a compact representation of $O(n)$ size for an $n$-vertex sparse
graph such that the adjacency can be
tested in $O(1)$ time. Our sequential algorithm
runs in $O(n)$ time, while the parallel one runs in $O(\log n)$ time using
$O(n/{\log n})$ CRCW PRAM processors. Previous results for this problem
are based on matroid partitioning and thus have a high complexity.
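One standard way to obtain an $O(n)$-size representation with constant-time adjacency tests for bounded-degeneracy graphs (e.g. trees and planar graphs) is a low-out-degree orientation: remove a minimum-degree vertex repeatedly and let each vertex keep only the neighbours removed after it. The sketch below illustrates this idea only; it uses a simple quadratic vertex-selection loop and is not the optimal algorithm of the report:

```python
def orient_by_degeneracy(n, edges):
    """Orient each edge out of the endpoint removed first in a repeated
    minimum-degree elimination; in a graph of degeneracy d, every vertex
    then keeps at most d out-neighbours, so storage is O(n) overall."""
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    removed = [False] * n
    out = [set() for _ in range(n)]
    for _ in range(n):
        v = min((x for x in range(n) if not removed[x]),
                key=lambda x: len(adj[x]))
        out[v] = set(adj[v])          # neighbours still present
        removed[v] = True
        for u in adj[v]:
            adj[u].discard(v)
        adj[v] = set()
    return out

def adjacent(out, u, v):
    # Each out-set has bounded size, so this test is O(1).
    return v in out[u] or u in out[v]
```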
Export
BibTeX
@techreport{ArikatiMaheshwariZaroliagis,
TITLE = {Efficient computation of compact representations of sparse graphs},
AUTHOR = {Arikati, Srinivasa R. and Maheshwari, Anil and Zaroliagis, Christos},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/94-148},
NUMBER = {MPI-I-94-148},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1994},
DATE = {1994},
ABSTRACT = {Sparse graphs (e.g.~trees, planar graphs, relative neighborhood graphs) are among the commonly used data-structures in computational geometry. The problem of finding a compact representation for sparse graphs such that vertex adjacency can be tested quickly is fundamental to several geometric and graph algorithms. We provide here simple and optimal algorithms for constructing a compact representation of $O(n)$ size for an $n$-vertex sparse graph such that the adjacency can be tested in $O(1)$ time. Our sequential algorithm runs in $O(n)$ time, while the parallel one runs in $O(\log n)$ time using $O(n/{\log n})$ CRCW PRAM processors. Previous results for this problem are based on matroid partitioning and thus have a high complexity.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Arikati, Srinivasa R.
%A Maheshwari, Anil
%A Zaroliagis, Christos
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Efficient computation of compact representations of sparse graphs :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B530-E
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/94-148
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1994
%P 10 p.
%X Sparse graphs (e.g.~trees, planar graphs, relative neighborhood graphs)
are among the commonly used data-structures in computational geometry.
The problem of finding a compact representation for sparse
graphs such that vertex adjacency can be tested quickly is fundamental to
several geometric and graph algorithms.
We provide here simple and optimal algorithms for constructing
a compact representation of $O(n)$ size for an $n$-vertex sparse
graph such that the adjacency can be
tested in $O(1)$ time. Our sequential algorithm
runs in $O(n)$ time, while the parallel one runs in $O(\log n)$ time using
$O(n/{\log n})$ CRCW PRAM processors. Previous results for this problem
are based on matroid partitioning and thus have a high complexity.
%B Research Report / Max-Planck-Institut für Informatik
Accounting for Boundary Effects in Nearest Neighbor Searching
S. Arya, D. Mount and O. Narayan
Technical Report, 1994
S. Arya, D. Mount and O. Narayan
Technical Report, 1994
Export
BibTeX
@techreport{Arya94159,
TITLE = {Accounting for Boundary Effects in Nearest Neighbor Searching},
AUTHOR = {Arya, Sunil and Mount, David and Narayan, Onuttom},
LANGUAGE = {eng},
NUMBER = {MPI-I-94-159},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1994},
DATE = {1994},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Arya, Sunil
%A Mount, David
%A Narayan, Onuttom
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
External Organizations
External Organizations
%T Accounting for Boundary Effects in Nearest Neighbor Searching :
%G eng
%U http://hdl.handle.net/21.11116/0000-0009-DB49-F
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1994
%P 15 p.
%B Research Report / Max-Planck-Institut für Informatik
Dynamic algorithms for geometric spanners of small diameter: randomized solutions
S. Arya, D. Mount and M. Smid
Technical Report, 1994
S. Arya, D. Mount and M. Smid
Technical Report, 1994
Abstract
Let $S$ be a set of $n$ points in $\IR^d$ and let $t>1$ be
a real number. A $t$-spanner for $S$ is a directed graph
having the points of $S$ as its vertices, such that for any
pair $p$ and $q$ of points there is a path from $p$ to $q$
of length at most $t$ times the Euclidean distance between
$p$ and $q$. Such a path is called a $t$-spanner path.
The spanner diameter of such a spanner is defined as the
smallest integer $D$ such that for any pair $p$ and $q$ of
points there is a $t$-spanner path from $p$ to $q$ containing
at most $D$ edges.
A randomized algorithm is given for constructing a
$t$-spanner that, with high probability, contains $O(n)$
edges and has spanner diameter $O(\log n)$.
A data structure of size $O(n \log^d n)$ is given that
maintains this $t$-spanner in $O(\log^d n \log\log n)$
expected amortized time per insertion and deletion, in the
model of random updates, as introduced by Mulmuley.
Previously, no results were known for spanners with low
spanner diameter and for maintaining spanners under insertions
and deletions.
Export
BibTeX
@techreport{AryaMountSmid94,
TITLE = {Dynamic algorithms for geometric spanners of small diameter: randomized solutions},
AUTHOR = {Arya, Sunil and Mount, David and Smid, Michiel},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/94-156},
NUMBER = {MPI-I-94-156},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1994},
DATE = {1994},
ABSTRACT = {Let $S$ be a set of $n$ points in $\IR^d$ and let $t>1$ be a real number. A $t$-spanner for $S$ is a directed graph having the points of $S$ as its vertices, such that for any pair $p$ and $q$ of points there is a path from $p$ to $q$ of length at most $t$ times the Euclidean distance between $p$ and $q$. Such a path is called a $t$-spanner path. The spanner diameter of such a spanner is defined as the smallest integer $D$ such that for any pair $p$ and $q$ of points there is a $t$-spanner path from $p$ to $q$ containing at most $D$ edges. A randomized algorithm is given for constructing a $t$-spanner that, with high probability, contains $O(n)$ edges and has spanner diameter $O(\log n)$. A data structure of size $O(n \log^d n)$ is given that maintains this $t$-spanner in $O(\log^d n \log\log n)$ expected amortized time per insertion and deletion, in the model of random updates, as introduced by Mulmuley. Previously, no results were known for spanners with low spanner diameter and for maintaining spanners under insertions and deletions.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Arya, Sunil
%A Mount, David
%A Smid, Michiel
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Dynamic algorithms for geometric spanners of small diameter: randomized solutions :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B7A2-F
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/94-156
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1994
%X Let $S$ be a set of $n$ points in $\IR^d$ and let $t>1$ be
a real number. A $t$-spanner for $S$ is a directed graph
having the points of $S$ as its vertices, such that for any
pair $p$ and $q$ of points there is a path from $p$ to $q$
of length at most $t$ times the Euclidean distance between
$p$ and $q$. Such a path is called a $t$-spanner path.
The spanner diameter of such a spanner is defined as the
smallest integer $D$ such that for any pair $p$ and $q$ of
points there is a $t$-spanner path from $p$ to $q$ containing
at most $D$ edges.
A randomized algorithm is given for constructing a
$t$-spanner that, with high probability, contains $O(n)$
edges and has spanner diameter $O(\log n)$.
A data structure of size $O(n \log^d n)$ is given that
maintains this $t$-spanner in $O(\log^d n \log\log n)$
expected amortized time per insertion and deletion, in the
model of random updates, as introduced by Mulmuley.
Previously, no results were known for spanners with low
spanner diameter and for maintaining spanners under insertions
and deletions.
%B Research Report / Max-Planck-Institut für Informatik
Efficient construction of a bounded degree spanner with low weight
S. Arya and M. Smid
Technical Report, 1994
S. Arya and M. Smid
Technical Report, 1994
Abstract
Let $S$ be a set of $n$ points in $\IR^d$ and let $t>1$ be
a real number. A $t$-spanner for $S$ is a graph having the
points of $S$ as its vertices such that for any pair $p,q$ of
points there is a path between them of length at most $t$
times the Euclidean distance between $p$ and $q$.
An efficient implementation of a greedy algorithm is given
that constructs a $t$-spanner having bounded degree such
that the total length of all its edges is bounded by
$O(\log n)$ times the length of a minimum spanning tree
for $S$. The algorithm has running time $O(n \log^d n)$.
Also, an application to the problem of distance enumeration
is given.
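The greedy construction analyzed in the report can be prototyped directly. The sketch below is the naive version (scan pairs by distance, run a bounded shortest-path search per pair); the report's contribution is the efficient $O(n \log^d n)$ implementation, which this is not:

```python
import heapq
import math
from itertools import combinations

def greedy_spanner(points, t):
    """Greedy t-spanner, naive implementation: scan vertex pairs in
    order of increasing distance and add an edge only when the edges
    chosen so far admit no path of length <= t * d(p, q)."""
    n = len(points)
    dist = lambda i, j: math.dist(points[i], points[j])
    graph = [[] for _ in range(n)]
    edges = []
    for i, j in sorted(combinations(range(n), 2),
                       key=lambda e: dist(*e)):
        limit = t * dist(i, j)
        # Bounded Dijkstra from i: ignore paths longer than the limit.
        best = {i: 0.0}
        heap = [(0.0, i)]
        while heap:
            du, u = heapq.heappop(heap)
            if du > best.get(u, math.inf):
                continue
            for v, w in graph[u]:
                nd = du + w
                if nd <= limit and nd < best.get(v, math.inf):
                    best[v] = nd
                    heapq.heappush(heap, (nd, v))
        if best.get(j, math.inf) > limit:
            w = dist(i, j)
            edges.append((i, j))
            graph[i].append((j, w))
            graph[j].append((i, w))
    return edges
```

On four collinear unit-spaced points with $t = 2$, only the three consecutive edges survive; every longer pair is already spanned.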
Export
BibTeX
@techreport{AryaSmid94,
TITLE = {Efficient construction of a bounded degree spanner with low weight},
AUTHOR = {Arya, Sunil and Smid, Michiel},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/94-115},
NUMBER = {MPI-I-94-115},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1994},
DATE = {1994},
ABSTRACT = {Let $S$ be a set of $n$ points in $\IR^d$ and let $t>1$ be a real number. A $t$-spanner for $S$ is a graph having the points of $S$ as its vertices such that for any pair $p,q$ of points there is a path between them of length at most $t$ times the Euclidean distance between $p$ and $q$. An efficient implementation of a greedy algorithm is given that constructs a $t$-spanner having bounded degree such that the total length of all its edges is bounded by $O(\log n)$ times the length of a minimum spanning tree for $S$. The algorithm has running time $O(n \log^d n)$. Also, an application to the problem of distance enumeration is given.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Arya, Sunil
%A Smid, Michiel
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Efficient construction of a bounded degree spanner with low weight :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B518-6
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/94-115
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1994
%P 25 p.
%X Let $S$ be a set of $n$ points in $\IR^d$ and let $t>1$ be
a real number. A $t$-spanner for $S$ is a graph having the
points of $S$ as its vertices such that for any pair $p,q$ of
points there is a path between them of length at most $t$
times the Euclidean distance between $p$ and $q$.
An efficient implementation of a greedy algorithm is given
that constructs a $t$-spanner having bounded degree such
that the total length of all its edges is bounded by
$O(\log n)$ times the length of a minimum spanning tree
for $S$. The algorithm has running time $O(n \log^d n)$.
Also, an application to the problem of distance enumeration
is given.
%B Research Report / Max-Planck-Institut für Informatik
Short random walks on graphs
G. Barnes and U. Feige
Technical Report, 1994
G. Barnes and U. Feige
Technical Report, 1994
Abstract
We study the short-term behavior of random walks on graphs, in particular, the rate at which a random walk discovers new vertices and edges. We prove a conjecture by Linial that the expected time to find ${\cal N}$ distinct vertices is $O({\cal N}^3)$. We also prove an upper bound of $O({\cal M}^2)$ on the expected time to traverse ${\cal M}$ edges, and $O({\cal M}{\cal N})$ on the expected time to either visit ${\cal N}$ vertices or traverse ${\cal M}$ edges (whichever comes first).
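The discovery-time bounds can be probed empirically. The sketch below is an illustration only (not part of any proof): it measures the average number of steps a walk started at one end of a path graph needs to see every vertex.

```python
import random

def steps_to_discover(adj, start, k, rng):
    """Walk randomly from `start` until k distinct vertices have been
    visited; return the number of steps taken."""
    seen = {start}
    v, steps = start, 0
    while len(seen) < k:
        v = rng.choice(adj[v])
        seen.add(v)
        steps += 1
    return steps

# A path on n vertices, started at one end, is a hard instance:
# the walk needs about n^2 expected steps to reach the far end.
n = 20
adj = {i: [j for j in (i - 1, i + 1) if 0 <= j < n] for i in range(n)}
rng = random.Random(0)
avg = sum(steps_to_discover(adj, 0, n, rng) for _ in range(200)) / 200
```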
Export
BibTeX
@techreport{BarnesFeige94,
TITLE = {Short random walks on graphs},
AUTHOR = {Barnes, Greg and Feige, Uriel},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/94-121},
NUMBER = {MPI-I-94-121},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1994},
DATE = {1994},
ABSTRACT = {We study the short-term behavior of random walks on graphs, in particular, the rate at which a random walk discovers new vertices and edges. We prove a conjecture by Linial that the expected time to find ${\cal N}$ distinct vertices is $O({\cal N}^3)$. We also prove an upper bound of $O({\cal M}^2)$ on the expected time to traverse ${\cal M}$ edges, and $O({\cal M}{\cal N})$ on the expected time to either visit ${\cal N}$ vertices or traverse ${\cal M}$ edges (whichever comes first).},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Barnes, Greg
%A Feige, Uriel
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Short random walks on graphs :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B790-8
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/94-121
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1994
%P 14 p.
%X We study the short-term behavior of random walks on graphs, in particular,
the rate at which a random walk discovers new vertices and edges.
We prove a conjecture by Linial that the expected time to find ${\cal N}$
distinct vertices is $O({\cal N}^3)$. We also prove an upper bound of
$O({\cal M}^2)$ on the expected time to traverse ${\cal M}$ edges, and
$O({\cal M}{\cal N})$ on the expected time to either visit ${\cal N}$ vertices
or traverse ${\cal M}$ edges (whichever comes first).
%B Research Report / Max-Planck-Institut für Informatik
A method for implementing lock-free shared data structures
G. Barnes
Technical Report, 1994
G. Barnes
Technical Report, 1994
Abstract
We are interested in implementing data structures on shared memory multiprocessors. A natural model for these machines is an asynchronous parallel machine, in which the processors are subject to arbitrary delays. On such machines, it is desirable for algorithms to be {\em lock-free}, that is, they must allow concurrent access to data without using mutual exclusion. Efficient lock-free implementations are known for some specific data structures, but these algorithms do not generalize well to other structures. For most data structures, the only previously known lock-free algorithm is due to Herlihy. Herlihy presents a simple methodology to create a lock-free implementation of a general data structure, but his approach can be very expensive.

We present a technique that provides the semantics of exclusive access to data without using mutual exclusion. Using this technique, we devise the {\em caching method}, a general method of implementing lock-free data structures that is provably better than Herlihy's methodology for many well-known data structures. The cost of one operation using the caching method is proportional to $T \log T$, where $T$ is the sequential cost of the operation. Under Herlihy's methodology, the cost is proportional to $T + C$, where $C$ is the time needed to make a logical copy of the data structure. For many data structures, such as arrays and {\em well connected} pointer-based structures (e.g., a doubly linked list), the best known value for $C$ is proportional to the size of the structure, making the copying time much larger than the sequential cost of an operation. The new method can also allow {\em concurrent updates} to the data structure; Herlihy's methodology cannot. A correct lock-free implementation can be derived from a correct sequential implementation in a straightforward manner using this method. The method is also flexible; a programmer can change many of the details of the default implementation to optimize for a particular pattern of data structure use.
Export
BibTeX
@techreport{Barnes94,
TITLE = {A method for implementing lock-free shared data structures},
AUTHOR = {Barnes, Greg},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/94-120},
NUMBER = {MPI-I-94-120},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1994},
DATE = {1994},
ABSTRACT = {We are interested in implementing data structures on shared memory multiprocessors. A natural model for these machines is an asynchronous parallel machine, in which the processors are subject to arbitrary delays. On such machines, it is desirable for algorithms to be {\em lock-free}, that is, they must allow concurrent access to data without using mutual exclusion. Efficient lock-free implementations are known for some specific data structures, but these algorithms do not generalize well to other structures. For most data structures, the only previously known lock-free algorithm is due to Herlihy. Herlihy presents a simple methodology to create a lock-free implementation of a general data structure, but his approach can be very expensive. We present a technique that provides the semantics of exclusive access to data without using mutual exclusion. Using this technique, we devise the {\em caching method}, a general method of implementing lock-free data structures that is provably better than Herlihy's methodology for many well-known data structures. The cost of one operation using the caching method is proportional to $T \log T$, where $T$ is the sequential cost of the operation. Under Herlihy's methodology, the cost is proportional to $T + C$, where $C$ is the time needed to make a logical copy of the data structure. For many data structures, such as arrays and {\em well connected} pointer-based structures (e.g., a doubly linked list), the best known value for $C$ is proportional to the size of the structure, making the copying time much larger than the sequential cost of an operation. The new method can also allow {\em concurrent updates} to the data structure; Herlihy's methodology cannot. A correct lock-free implementation can be derived from a correct sequential implementation in a straightforward manner using this method. The method is also flexible; a programmer can change many of the details of the default implementation to optimize for a particular pattern of data structure use.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Barnes, Greg
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T A method for implementing lock-free shared data structures :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B78E-0
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/94-120
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1994
%P 15 p.
%X We are interested in implementing data structures on shared memory
multiprocessors. A natural model for these machines is an asynchronous
parallel machine, in which the processors are subject to arbitrary delays.
On such machines, it is desirable for algorithms to be {\em lock-free},
that is, they must allow concurrent access to data without using mutual
exclusion. Efficient lock-free implementations are known for some specific
data structures, but these algorithms do not generalize well to other
structures. For most data structures, the only previously known lock-free
algorithm is due to Herlihy. Herlihy presents a simple methodology to
create a lock-free implementation of a general data structure, but his
approach can be very expensive. We present a technique that provides the
semantics of exclusive access to data without using mutual exclusion.
Using this technique, we devise the {\em caching method}, a general method
of implementing lock-free data structures that is provably better than
Herlihy's methodology for many well-known data structures. The cost of one
operation using the caching method is proportional to $T \log T$, where
$T$ is the sequential cost of the operation. Under Herlihy's methodology,
the cost is proportional to $T + C$, where $C$ is the time needed to make
a logical copy of the data structure. For many data structures, such as
arrays and {\em well connected} pointer-based structures (e.g., a doubly
linked list), the best known value for $C$ is proportional to the size of
the structure, making the copying time much larger than the sequential
cost of an operation. The new method can also allow {\em concurrent
updates} to the data structure; Herlihy's methodology cannot. A correct
lock-free implementation can be derived from a correct sequential
implementation in a straightforward manner using this method. The method
is also flexible; a programmer can change many of the details of the
default implementation to optimize for a particular pattern of data
structure use.
%B Research Report / Max-Planck-Institut für Informatik
Time-space lower bounds for directed s-t connectivity on JAG models
G. Barnes and J. A. Edmonds
Technical Report, 1994
G. Barnes and J. A. Edmonds
Technical Report, 1994
Abstract
Directed $s$-$t$ connectivity is the problem of detecting whether there is a path from a distinguished vertex $s$ to a distinguished vertex $t$ in a directed graph. We prove time-space lower bounds of $ST = \Omega(n^{2}/\log n)$ and $S^{1/2}T = \Omega(m n^{1/2})$ for Cook and Rackoff's JAG model, where $n$ is the number of vertices and $m$ the number of edges in the input graph, and $S$ is the space and $T$ the time used by the JAG. We also prove a time-space lower bound of $S^{1/3}T = \Omega(m^{2/3}n^{2/3})$ on the more powerful node-named JAG model of Poon. These bounds approach the known upper bound of $T = O(m)$ when $S = \Theta(n \log n)$.
Export
BibTeX
@techreport{BarnesEdmonds94,
TITLE = {Time-space lower bounds for directed s-t connectivity on {JAG} models},
AUTHOR = {Barnes, Greg and Edmonds, Jeff A.},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/94-119},
NUMBER = {MPI-I-94-119},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1994},
DATE = {1994},
ABSTRACT = {Directed $s$-$t$ connectivity is the problem of detecting whether there is a path from a distinguished vertex $s$ to a distinguished vertex $t$ in a directed graph. We prove time-space lower bounds of $ST = \Omega(n^{2}/\log n)$ and $S^{1/2}T = \Omega(m n^{1/2})$ for Cook and Rackoff's JAG model, where $n$ is the number of vertices and $m$ the number of edges in the input graph, and $S$ is the space and $T$ the time used by the JAG. We also prove a time-space lower bound of $S^{1/3}T = \Omega(m^{2/3}n^{2/3})$ on the more powerful node-named JAG model of Poon. These bounds approach the known upper bound of $T = O(m)$ when $S = \Theta(n \log n)$.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Barnes, Greg
%A Edmonds, Jeff A.
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
External Organizations
%T Time-space lower bounds for directed s-t connectivity on JAG models :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B78C-3
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/94-119
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1994
%X Directed $s$-$t$ connectivity is the problem of detecting whether there
is a path from a distinguished vertex $s$ to a distinguished vertex $t$
in a directed graph. We prove time-space lower bounds of
$ST = \Omega(n^{2}/\log n)$ and $S^{1/2}T = \Omega(m n^{1/2})$ for Cook
and Rackoff's JAG model, where $n$ is the number of vertices and $m$ the
number of edges in the input graph, and $S$ is the space and $T$ the time
used by the JAG. We also prove a time-space lower bound of
$S^{1/3}T = \Omega(m^{2/3}n^{2/3})$ on the more powerful node-named JAG
model of Poon. These bounds approach the known upper bound of $T = O(m)$
when $S = \Theta(n \log n)$.
%B Research Report / Max-Planck-Institut für Informatik
On the intellectual terrain around NP
S. Chari and J. Hartmanis
Technical Report, 1994
S. Chari and J. Hartmanis
Technical Report, 1994
Abstract
In this paper we view $P\stackrel{?}{=}NP$ as the problem which symbolizes the attempt to understand what is and is not feasibly computable. The paper briefly reviews the history of the developments from Gödel's 1956 letter asking for the computational complexity of finding proofs of theorems, through computational complexity, the exploration of complete problems for NP and PSPACE, and the results of structural complexity, to the recent insights about interactive proofs.
Export
BibTeX
@techreport{ChariHartmanis94,
TITLE = {On the intellectual terrain around {NP}},
AUTHOR = {Chari, Suresh and Hartmanis, Juris},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/94-103},
NUMBER = {MPI-I-94-103},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1994},
DATE = {1994},
ABSTRACT = {In this paper we view $P\stackrel{?}{=}NP$ as the problem which symbolizes the attempt to understand what is and is not feasibly computable. The paper briefly reviews the history of the developments from G{\"o}del's 1956 letter asking for the computational complexity of finding proofs of theorems, through computational complexity, the exploration of complete problems for NP and PSPACE, and the results of structural complexity, to the recent insights about interactive proofs.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Chari, Suresh
%A Hartmanis, Juris
%+ External Organizations
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T On the intellectual terrain around NP :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B782-8
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/94-103
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1994
%P 11 p.
%X In this paper we view $P\stackrel{?}{=}NP$ as the problem which symbolizes
the attempt to understand what is and is not feasibly computable. The paper
briefly reviews the history of the developments from Gödel's 1956 letter
asking for the computational complexity of finding proofs of theorems,
through computational complexity, the exploration of complete problems for
NP and PSPACE, and the results of structural complexity, to the recent
insights about interactive proofs.
%B Research Report / Max-Planck-Institut für Informatik
Prefix graphs and their applications
S. Chaudhuri and T. Hagerup
Technical Report, 1994
S. Chaudhuri and T. Hagerup
Technical Report, 1994
Abstract
The \Tstress{range product problem} is, for a given
set $S$ equipped with an associative operator
$\circ$, to preprocess a sequence $a_1,\ldots,a_n$
of elements from $S$ so as to enable efficient
subsequent processing of queries of the form:
Given a pair $(s,t)$ of integers with
$1\le s\le t\le n$, return
$a_s\circ a_{s+1}\circ\cdots\circ a_t$.
The generic range product problem
and special cases thereof,
usually with $\circ$ computing the maximum
of its arguments according to some linear
order on $S$, have been extensively studied.
We show that a large number of previous sequential
and parallel algorithms for these problems can
be unified and simplified by means of prefix graphs.
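For the idempotent special case (the operator $\circ$ computing a maximum, as the abstract mentions), one classic data structure for range product queries is the sparse table. The sketch below illustrates that query pattern; it is not the prefix-graph construction the report studies:

```python
class RangeMax:
    """Sparse table: answers range product queries in O(1) after
    O(n log n) preprocessing, valid when the operator is idempotent
    (max here; min or gcd work the same way)."""

    def __init__(self, a):
        self.n = len(a)
        self.table = [list(a)]       # level k stores maxima of length 2^k
        k = 1
        while (1 << k) <= self.n:
            prev = self.table[-1]
            half = 1 << (k - 1)
            self.table.append([max(prev[i], prev[i + half])
                               for i in range(self.n - (1 << k) + 1)])
            k += 1

    def query(self, s, t):
        """Maximum of a[s..t], 0-indexed and inclusive: two overlapping
        power-of-two blocks cover the range, and idempotence makes the
        overlap harmless."""
        k = (t - s + 1).bit_length() - 1
        return max(self.table[k][s], self.table[k][t - (1 << k) + 1])
```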
Export
BibTeX
@techreport{ChaudhuriHagerup94,
TITLE = {Prefix graphs and their applications},
AUTHOR = {Chaudhuri, Shiva and Hagerup, Torben},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/94-145},
NUMBER = {MPI-I-94-145},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1994},
DATE = {1994},
ABSTRACT = {The \Tstress{range product problem} is, for a given set $S$ equipped with an associative operator $\circ$, to preprocess a sequence $a_1,\ldots,a_n$ of elements from $S$ so as to enable efficient subsequent processing of queries of the form: Given a pair $(s,t)$ of integers with $1\le s\le t\le n$, return $a_s\circ a_{s+1}\circ\cdots\circ a_t$. The generic range product problem and special cases thereof, usually with $\circ$ computing the maximum of its arguments according to some linear order on $S$, have been extensively studied. We show that a large number of previous sequential and parallel algorithms for these problems can be unified and simplified by means of prefix graphs.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Chaudhuri, Shiva
%A Hagerup, Torben
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Prefix graphs and their applications :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B7A0-4
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/94-145
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1994
%P 13 p.
%X The \Tstress{range product problem} is, for a given
set $S$ equipped with an associative operator
$\circ$, to preprocess a sequence $a_1,\ldots,a_n$
of elements from $S$ so as to enable efficient
subsequent processing of queries of the form:
Given a pair $(s,t)$ of integers with
$1\le s\le t\le n$, return
$a_s\circ a_{s+1}\circ\cdots\circ a_t$.
The generic range product problem
and special cases thereof,
usually with $\circ$ computing the maximum
of its arguments according to some linear
order on $S$, have been extensively studied.
We show that a large number of previous sequential
and parallel algorithms for these problems can
be unified and simplified by means of prefix graphs.
%B Research Report / Max-Planck-Institut für Informatik
On characteristic points and approximate decision algorithms for the minimum Hausdorff distance
L. P. Chew, K. Kedem and S. Schirra
Technical Report, 1994
L. P. Chew, K. Kedem and S. Schirra
Technical Report, 1994
Abstract
We investigate {\em approximate decision algorithms} for determining
whether the minimum Hausdorff distance between two point sets (or
between two sets of nonintersecting line segments) is at most
$\varepsilon$.
An approximate decision algorithm is a standard decision algorithm
that answers {\sc yes} or {\sc no} except when $\varepsilon$ is in
an {\em indecision interval}
where the algorithm is allowed to answer {\sc don't know}.
We present algorithms with indecision interval
$[\delta-\gamma,\delta+\gamma]$ where $\delta$ is the minimum
Hausdorff distance and $\gamma$ can be chosen by the user.
In other words, we can make our
algorithm as accurate as desired by choosing an appropriate $\gamma$.
For two sets of points (or two sets of nonintersecting lines) with
respective
cardinalities $m$ and $n$ our approximate decision algorithms run in
time $O(\eg^2(m+n)\log(mn))$ for Hausdorff distance under translation,
and in time $O(\eg^2mn\log(mn))$ for Hausdorff distance under
Euclidean motion.
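The fixed-position Hausdorff distance and the tri-valued decision interface described above can be sketched in a few lines. This is only a brute-force baseline with no minimisation over translation or Euclidean motion, and all names are illustrative:

```python
import math

def directed_hausdorff(A, B):
    """Brute-force directed distance h(A, B) = max_{a in A} min_{b in B} |a - b|."""
    return max(min(math.dist(a, b) for b in B) for a in A)

def hausdorff(A, B):
    """Symmetric Hausdorff distance for fixed (untranslated) point sets."""
    return max(directed_hausdorff(A, B), directed_hausdorff(B, A))

def approx_decide(d, eps, gamma):
    """Tri-valued answer to "is the distance at most eps?" with the
    indecision interval [d - gamma, d + gamma] described above."""
    if abs(eps - d) <= gamma:
        return "don't know"
    return "yes" if d <= eps else "no"

A = [(0.0, 0.0), (1.0, 0.0)]
B = [(0.0, 0.0), (4.0, 0.0)]
print(hausdorff(A, B))               # → 3.0
print(approx_decide(3.0, 5.0, 0.5))  # → yes
```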
Export
BibTeX
@techreport{ChewKedemSchirra94,
TITLE = {On characteristic points and approximate decision algorithms for the minimum Hausdorff distance},
AUTHOR = {Chew, L. P. and Kedem, K. and Schirra, Stefan},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/94-150},
NUMBER = {MPI-I-94-150},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1994},
DATE = {1994},
ABSTRACT = {We investigate {\em approximate decision algorithms} for determining whether the minimum Hausdorff distance between two point sets (or between two sets of nonintersecting line segments) is at most $\varepsilon$.\def\eg{(\varepsilon/\gamma)} An approximate decision algorithm is a standard decision algorithm that answers {\sc yes} or {\sc no} except when $\varepsilon$ is in an {\em indecision interval} where the algorithm is allowed to answer {\sc don't know}. We present algorithms with indecision interval $[\delta-\gamma,\delta+\gamma]$ where $\delta$ is the minimum Hausdorff distance and $\gamma$ can be chosen by the user. In other words, we can make our algorithm as accurate as desired by choosing an appropriate $\gamma$. For two sets of points (or two sets of nonintersecting lines) with respective cardinalities $m$ and $n$ our approximate decision algorithms run in time $O(\eg^2(m+n)\log(mn))$ for Hausdorff distance under translation, and in time $O(\eg^2mn\log(mn))$ for Hausdorff distance under Euclidean motion.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Chew, L. P.
%A Kedem, K.
%A Schirra, Stefan
%+ External Organizations
External Organizations
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T On characteristic points and approximate decision algorithms for the minimum Hausdorff distance :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B53B-7
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/94-150
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1994
%P 10 p.
%X We investigate {\em approximate decision algorithms} for determining
whether the minimum Hausdorff distance between two point sets (or
between two sets of nonintersecting line segments) is at most
$\varepsilon$.\def\eg{(\varepsilon/\gamma)}
An approximate decision algorithm is a standard decision algorithm
that answers {\sc yes} or {\sc no} except when $\varepsilon$ is in
an {\em indecision interval}
where the algorithm is allowed to answer {\sc don't know}.
We present algorithms with indecision interval
$[\delta-\gamma,\delta+\gamma]$ where $\delta$ is the minimum
Hausdorff distance and $\gamma$ can be chosen by the user.
In other words, we can make our
algorithm as accurate as desired by choosing an appropriate $\gamma$.
For two sets of points (or two sets of nonintersecting lines) with
respective
cardinalities $m$ and $n$ our approximate decision algorithms run in
time $O(\eg^2(m+n)\log(mn))$ for Hausdorff distance under translation,
and in time $O(\eg^2mn\log(mn))$ for Hausdorff distance under
Euclidean motion.
%B Research Report / Max-Planck-Institut für Informatik
Revenge of the dog: queries on Voronoi diagrams of moving points
O. Devillers, M. J. Golin, S. Schirra and K. Kedem
Technical Report, 1994
O. Devillers, M. J. Golin, S. Schirra and K. Kedem
Technical Report, 1994
Abstract
Suppose we are given $n$ moving postmen described by
their motion equations $p_i(t) = s_i + v_it,$ $i=1,\ldots, n$,
where $s_i \in \R^2$ is the position of the $i$'th postman
at time $t=0$, and $v_i \in \R^2$ is his velocity.
The problem we address is how to preprocess the postmen data so as
to be able to efficiently answer two types of nearest neighbor queries.
The first one asks ``who is the nearest postman at time $t_q$ to a dog
located at point $s_q$?''. In the second type
a fast query dog is located at a point $s_q$ at time $t_q$, its
velocity is $v_q$ where $v_q > |v_i|$ for all $i = 1,\ldots,n$, and we want
to know which postman the dog
can catch first. We present two solutions to these problems.
Both solutions use deterministic data structures.
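The first query type has an obvious $O(n)$-per-query brute-force baseline, which the preprocessing schemes in the report are designed to beat. An illustrative sketch (names are mine, not from the report):

```python
def nearest_postman(postmen, s_q, t_q):
    """Index of the postman closest to query point s_q at time t_q.
    postmen: list of (s_i, v_i) pairs, each a 2D start position and velocity.
    Brute force: evaluate every motion equation p_i(t) = s_i + v_i * t."""
    def dist2(i):
        (sx, sy), (vx, vy) = postmen[i]
        px, py = sx + vx * t_q, sy + vy * t_q  # position at time t_q
        return (px - s_q[0]) ** 2 + (py - s_q[1]) ** 2
    return min(range(len(postmen)), key=dist2)

postmen = [((0.0, 0.0), (1.0, 0.0)), ((10.0, 0.0), (-1.0, 0.0))]
print(nearest_postman(postmen, (3.0, 0.0), 2.0))  # → 0
print(nearest_postman(postmen, (3.0, 0.0), 6.0))  # → 1
```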
Export
BibTeX
@techreport{DevillersGolinSchirraKedem94,
TITLE = {Revenge of the dog: queries on Voronoi diagrams of moving points},
AUTHOR = {Devillers, O. and Golin, Mordecai J. and Schirra, Stefan and Kedem, K.},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/94-149},
NUMBER = {MPI-I-94-149},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1994},
DATE = {1994},
ABSTRACT = {Suppose we are given $n$ moving postmen described by their motion equations $p_i(t) = s_i + v_it,$ $i=1,\ldots, n$, where $s_i \in \R^2$ is the position of the $i$'th postman at time $t=0$, and $v_i \in \R^2$ is his velocity. The problem we address is how to preprocess the postmen data so as to be able to efficiently answer two types of nearest neighbor queries. The first one asks ``who is the nearest postman at time $t_q$ to a dog located at point $s_q$?''. In the second type a fast query dog is located at a point $s_q$ at time $t_q$, its velocity is $v_q$ where $v_q > |v_i|$ for all $i = 1,\ldots,n$, and we want to know which postman the dog can catch first. We present two solutions to these problems. Both solutions use deterministic data structures.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Devillers, O.
%A Golin, Mordecai J.
%A Schirra, Stefan
%A Kedem, K.
%+ External Organizations
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
External Organizations
%T Revenge of the dog: queries on Voronoi diagrams of moving points :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B535-4
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/94-149
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1994
%P 14 p.
%X Suppose we are given $n$ moving postmen described by
their motion equations $p_i(t) = s_i + v_it,$ $i=1,\ldots, n$,
where $s_i \in \R^2$ is the position of the $i$'th postman
at time $t=0$, and $v_i \in \R^2$ is his velocity.
The problem we address is how to preprocess the postmen data so as
to be able to efficiently answer two types of nearest neighbor queries.
The first one asks ``who is the nearest postman at time $t_q$ to a dog
located at point $s_q$?''. In the second type
a fast query dog is located at a point $s_q$ at time $t_q$, its
velocity is $v_q$ where $v_q > |v_i|$ for all $i = 1,\ldots,n$, and we want
to know which postman the dog
can catch first. We present two solutions to these problems.
Both solutions use deterministic data structures.
%B Research Report / Max-Planck-Institut für Informatik
On-line and Dynamic Shortest Paths through Graph Decompositions (Preliminary Version)
H. Djidjev, G. E. Pantziou and C. Zaroliagis
Technical Report, 1994a
H. Djidjev, G. E. Pantziou and C. Zaroliagis
Technical Report, 1994a
Abstract
We describe algorithms for finding shortest paths and distances in a
planar digraph which exploit the particular topology of the input graph.
We give both sequential and parallel algorithms that
work in a dynamic environment, where the cost of any edge
can be changed or the edge can be deleted.
For outerplanar digraphs, for instance, the data
structures can be updated after any such change in only $O(\log n)$
time, where $n$ is the number of vertices of the digraph.
The parallel algorithms presented here are the first known ones
for solving this problem. Our results can be extended to hold for
digraphs of genus $o(n)$.
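A recompute-from-scratch baseline for the dynamic setting can be sketched with plain Dijkstra; the report's contribution, by contrast, is $O(\log n)$-time updates for outerplanar digraphs, which this toy does not attempt:

```python
import heapq

def dijkstra(adj, src):
    """Single-source distances in a digraph with non-negative edge costs.
    adj[u] is a dict {v: cost}."""
    dist = {u: float("inf") for u in adj}
    dist[src] = 0.0
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue  # stale queue entry, already settled with a shorter path
        for v, w in adj[u].items():
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

adj = {"a": {"b": 2.0, "c": 5.0}, "b": {"c": 1.0}, "c": {}}
print(dijkstra(adj, "a")["c"])  # → 3.0 (via a-b-c)
adj["a"]["b"] = 10.0            # dynamic change: one edge cost updated
print(dijkstra(adj, "a")["c"])  # → 5.0 (direct edge a-c now wins)
```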
Export
BibTeX
@techreport{DjidjevPantziouZaroliagis94a,
TITLE = {On-line and Dynamic Shortest Paths through Graph Decompositions (Preliminary Version)},
AUTHOR = {Djidjev, Hristo and Pantziou, Grammati E. and Zaroliagis, Christos},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/94-112},
NUMBER = {MPI-I-94-112},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1994},
DATE = {1994},
ABSTRACT = {We describe algorithms for finding shortest paths and distances in a planar digraph which exploit the particular topology of the input graph. We give both sequential and parallel algorithms that work in a dynamic environment, where the cost of any edge can be changed or the edge can be deleted. For outerplanar digraphs, for instance, the data structures can be updated after any such change in only $O(\log n)$ time, where $n$ is the number of vertices of the digraph. The parallel algorithms presented here are the first known ones for solving this problem. Our results can be extended to hold for digraphs of genus $o(n)$.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Djidjev, Hristo
%A Pantziou, Grammati E.
%A Zaroliagis, Christos
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
External Organizations
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T On-line and Dynamic Shortest Paths through Graph Decompositions (Preliminary Version) :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B50A-6
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/94-112
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1994
%P 14 p.
%X We describe algorithms for finding shortest paths and distances in a
planar digraph which exploit the particular topology of the input graph.
We give both sequential and parallel algorithms that
work in a dynamic environment, where the cost of any edge
can be changed or the edge can be deleted.
For outerplanar digraphs, for instance, the data
structures can be updated after any such change in only $O(\log n)$
time, where $n$ is the number of vertices of the digraph.
The parallel algorithms presented here are the first known ones
for solving this problem. Our results can be extended to hold for
digraphs of genus $o(n)$.
%B Research Report / Max-Planck-Institut für Informatik
On-line and dynamic algorithms for shortest path problems
H. Djidjev, G. E. Pantziou and C. Zaroliagis
Technical Report, 1994b
H. Djidjev, G. E. Pantziou and C. Zaroliagis
Technical Report, 1994b
Abstract
We describe algorithms for finding shortest paths and distances in a planar digraph which exploit the particular topology of the input graph. An important feature of our algorithms is that they can work in a dynamic environment, where the cost of any edge can be changed or the edge can be deleted. For outerplanar digraphs, for instance, the data structures can be updated after any such change in only $O(\log n)$ time, where $n$ is the number of vertices of the digraph. We also describe the first parallel algorithms for solving the dynamic version of the shortest path problem. Our results can be extended to hold for digraphs of genus $o(n)$.
Export
BibTeX
@techreport{DjidjevPantziouZaroliagis94b,
TITLE = {On-line and dynamic algorithms for shortest path problems},
AUTHOR = {Djidjev, Hristo and Pantziou, Grammati E. and Zaroliagis, Christos},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/94-114},
NUMBER = {MPI-I-94-114},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1994},
DATE = {1994},
ABSTRACT = {We describe algorithms for finding shortest paths and distances in a planar digraph which exploit the particular topology of the input graph. An important feature of our algorithms is that they can work in a dynamic environment, where the cost of any edge can be changed or the edge can be deleted. For outerplanar digraphs, for instance, the data structures can be updated after any such change in only $O(\log n)$ time, where $n$ is the number of vertices of the digraph. We also describe the first parallel algorithms for solving the dynamic version of the shortest path problem. Our results can be extended to hold for digraphs of genus $o(n)$.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Djidjev, Hristo
%A Pantziou, Grammati E.
%A Zaroliagis, Christos
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
External Organizations
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T On-line and dynamic algorithms for shortest path problems :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B514-E
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/94-114
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1994
%P 20 p.
%X We describe algorithms for finding shortest paths and distances in a planar digraph which exploit the particular topology of the input graph. An important feature of our algorithms is that they can work in a dynamic environment, where the cost of any edge can be changed or the edge can be deleted. For outerplanar digraphs, for instance, the data structures can be updated after any such change in only $O(\log n)$ time, where $n$ is the number of vertices of the digraph. We also describe the first parallel algorithms for solving the dynamic version of the shortest path problem. Our results can be extended to hold for digraphs of genus $o(n)$.
%B Research Report / Max-Planck-Institut für Informatik
Some correlation inequalities for probabilistic analysis of algorithms
D. P. Dubhashi and D. Ranjan
Technical Report, 1994a
D. P. Dubhashi and D. Ranjan
Technical Report, 1994a
Abstract
The analysis of many randomized algorithms, for example in dynamic load
balancing, probabilistic divide-and-conquer paradigm and distributed
edge-coloring, requires ascertaining the precise nature of the correlation
between the random variables arising in the following prototypical
``balls-and-bins'' experiment.
Suppose a certain number of balls are thrown uniformly and
independently at random into $n$ bins. Let $X_i$ be the random
variable denoting the number of balls in the $i$th bin, $i \in
[n]$. These variables are clearly not independent and are intuitively
negatively related.
We make this mathematically precise by proving the following type of
correlation inequalities:
\begin{itemize}
\item For index sets $I,J \subseteq [n]$ such that $I \cap J =
\emptyset$ or $I \cup J = [n]$, and any non--negative
integers $t_I,t_J$,
\[ \prob[\sum_{i \in I} X_i \geq t_I \mid \sum_{j \in J} X_j \geq t_J]
\]
\\[-5mm]
\[\leq
\prob[\sum_{i \in I} X_i \geq t_I] .\]
\item For any disjoint index sets $I,J \subseteq [n]$, any $I' \subseteq I,
J' \subseteq J$ and any non--negative integers $t_i, i \in I$ and $t_j, j \in J$,
\[ \prob[\bigwedge_{i \in I}X_i \geq t_i \mid \bigwedge_{j \in J} X_j
\geq t_j]\]\\[-5mm]\[ \leq
\prob[\bigwedge_{i \in I'}X_i \geq t_i \mid \bigwedge_{j \in J'} X_j \geq t_j] . \]
\end{itemize}
Although these inequalities are intuitively appealing, establishing
them is non--trivial; in particular, direct counting arguments become
intractable very fast. We prove the inequalities of the first type by
an application of the celebrated FKG Correlation Inequality. The proof
for the second uses only elementary methods and hinges on some
{\em monotonicity} properties.
More importantly, we then introduce a
general methodology that may be applicable whenever the random variables
involved are negatively related. Precisely, we invoke a general notion
of {\em negative association\/} of random variables and show that:
\begin{itemize}
\item The variables $X_i$ are negatively associated. This yields most
of the previous results in a uniform way.
\item For a set of negatively associated variables, one can apply the
Chernoff-Hoeffding bounds to the sum of these variables. This provides
a tool that facilitates analysis of many randomized algorithms, for
example, the ones mentioned above.
\end{itemize}
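The first inequality, for the disjoint singleton index sets $I = \{0\}$ and $J = \{1\}$, can be checked empirically with a small Monte Carlo sketch (illustrative code, names are mine; this is no substitute for the FKG-based proof):

```python
import random

def balls_in_bins_check(m, n, t, trials=20000, seed=1):
    """Empirically compare P(X_0 >= t | X_1 >= t) against P(X_0 >= t)
    when m balls are thrown uniformly and independently into n bins."""
    rng = random.Random(seed)
    hit_i = hit_j = hit_both = 0
    for _ in range(trials):
        counts = [0] * n
        for _ in range(m):
            counts[rng.randrange(n)] += 1
        hit_i += counts[0] >= t
        hit_j += counts[1] >= t
        hit_both += (counts[0] >= t) and (counts[1] >= t)
    p_cond = hit_both / hit_j if hit_j else 0.0
    return p_cond, hit_i / trials

# Negative relation: conditioning on a full bin 1 makes bin 0 less likely full.
p_cond, p = balls_in_bins_check(20, 5, 6)
print(p_cond <= p)
```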
Export
BibTeX
@techreport{DubhashiRanjan94a,
TITLE = {Some correlation inequalities for probabilistic analysis of algorithms},
AUTHOR = {Dubhashi, Devdatt P. and Ranjan, Desh},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/94-143},
NUMBER = {MPI-I-94-143},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1994},
DATE = {1994},
ABSTRACT = {The analysis of many randomized algorithms, for example in dynamic load balancing, probabilistic divide-and-conquer paradigm and distributed edge-coloring, requires ascertaining the precise nature of the correlation between the random variables arising in the following prototypical ``balls-and-bins'' experiment. Suppose a certain number of balls are thrown uniformly and independently at random into $n$ bins. Let $X_i$ be the random variable denoting the number of balls in the $i$th bin, $i \in [n]$. These variables are clearly not independent and are intuitively negatively related. We make this mathematically precise by proving the following type of correlation inequalities: \begin{itemize} \item For index sets $I,J \subseteq [n]$ such that $I \cap J = \emptyset$ or $I \cup J = [n]$, and any non--negative integers $t_I,t_J$, \[ \prob[\sum_{i \in I} X_i \geq t_I \mid \sum_{j \in J} X_j \geq t_J] \] \\[-5mm] \[\leq \prob[\sum_{i \in I} X_i \geq t_I] .\] \item For any disjoint index sets $I,J \subseteq [n]$, any $I' \subseteq I, J' \subseteq J$ and any non--negative integers $t_i, i \in I$ and $t_j, j \in J$, \[ \prob[\bigwedge_{i \in I}X_i \geq t_i \mid \bigwedge_{j \in J} X_j \geq t_j]\]\\[-5mm]\[ \leq \prob[\bigwedge_{i \in I'}X_i \geq t_i \mid \bigwedge_{j \in J'} X_j \geq t_j] . \] \end{itemize} Although these inequalities are intuitively appealing, establishing them is non--trivial; in particular, direct counting arguments become intractable very fast. We prove the inequalities of the first type by an application of the celebrated FKG Correlation Inequality. The proof for the second uses only elementary methods and hinges on some {\em monotonicity} properties. More importantly, we then introduce a general methodology that may be applicable whenever the random variables involved are negatively related. 
Precisely, we invoke a general notion of {\em negative association\/} of random variables and show that: \begin{itemize} \item The variables $X_i$ are negatively associated. This yields most of the previous results in a uniform way. \item For a set of negatively associated variables, one can apply the Chernoff-Hoeffding bounds to the sum of these variables. This provides a tool that facilitates analysis of many randomized algorithms, for example, the ones mentioned above. \end{itemize}},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Dubhashi, Devdatt P.
%A Ranjan, Desh
%+ External Organizations
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Some correlation inequalities for probabilistic analysis of algorithms :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B79B-1
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/94-143
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1994
%P 16 p.
%X The analysis of many randomized algorithms, for example in dynamic load
balancing, probabilistic divide-and-conquer paradigm and distributed
edge-coloring, requires ascertaining the precise nature of the correlation
between the random variables arising in the following prototypical
``balls-and-bins'' experiment.
Suppose a certain number of balls are thrown uniformly and
independently at random into $n$ bins. Let $X_i$ be the random
variable denoting the number of balls in the $i$th bin, $i \in
[n]$. These variables are clearly not independent and are intuitively
negatively related.
We make this mathematically precise by proving the following type of
correlation inequalities:
\begin{itemize}
\item For index sets $I,J \subseteq [n]$ such that $I \cap J =
\emptyset$ or $I \cup J = [n]$, and any non--negative
integers $t_I,t_J$,
\[ \prob[\sum_{i \in I} X_i \geq t_I \mid \sum_{j \in J} X_j \geq t_J]
\]
\\[-5mm]
\[\leq
\prob[\sum_{i \in I} X_i \geq t_I] .\]
\item For any disjoint index sets $I,J \subseteq [n]$, any $I' \subseteq I,
J' \subseteq J$ and any non--negative integers $t_i, i \in I$ and $t_j, j \in J$,
\[ \prob[\bigwedge_{i \in I}X_i \geq t_i \mid \bigwedge_{j \in J} X_j
\geq t_j]\]\\[-5mm]\[ \leq
\prob[\bigwedge_{i \in I'}X_i \geq t_i \mid \bigwedge_{j \in J'} X_j \geq t_j] . \]
\end{itemize}
Although these inequalities are intuitively appealing, establishing
them is non--trivial; in particular, direct counting arguments become
intractable very fast. We prove the inequalities of the first type by
an application of the celebrated FKG Correlation Inequality. The proof
for the second uses only elementary methods and hinges on some
{\em monotonicity} properties.
More importantly, we then introduce a
general methodology that may be applicable whenever the random variables
involved are negatively related. Precisely, we invoke a general notion
of {\em negative association\/} of random variables and show that:
\begin{itemize}
\item The variables $X_i$ are negatively associated. This yields most
of the previous results in a uniform way.
\item For a set of negatively associated variables, one can apply the
Chernoff-Hoeffding bounds to the sum of these variables. This provides
a tool that facilitates analysis of many randomized algorithms, for
example, the ones mentioned above.
\end{itemize}
%B Research Report / Max-Planck-Institut für Informatik
Near-optimal distributed edge coloring
D. P. Dubhashi and A. Panconesi
Technical Report, 1994
D. P. Dubhashi and A. Panconesi
Technical Report, 1994
Abstract
We give a distributed randomized algorithm to edge color a
network. Given a graph $G$ with $n$ nodes and maximum degree
$\Delta$, the algorithm,
\begin{itemize}
\item For any fixed $\lambda >0$, colours $G$ with $(1+ \lambda)
\Delta$ colours in time $O(\log n)$.
\item For any fixed positive integer $s$, colours $G$ with
$\Delta + \frac{\Delta}{(\log \Delta)^s}=(1 + o(1)) \Delta$
colours in time $O(\log n + \log^{2s} \Delta \log\log \Delta)$.
\end{itemize}
Both results hold with probability arbitrarily close to 1
as long as $\Delta (G) = \Omega (\log^{1+d}
n)$, for some $d>0$.\\
The algorithm is based on the Rödl Nibble, a probabilistic strategy
introduced by Vojtech Rödl. The analysis involves a certain
pseudo--random phenomenon involving sets at the
vertices.
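For contrast with the $(1 + o(1))\Delta$ bound above: the trivial sequential greedy baseline needs $2\Delta - 1$ colours, since each edge conflicts with at most $2\Delta - 2$ incident edges. A minimal sketch (this is not the Rödl-nibble algorithm of the report, and it is not distributed):

```python
def greedy_edge_coloring(edges, n):
    """Proper edge colouring with the trivially sufficient 2*Delta - 1
    colours: a free colour always exists for the next edge."""
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    palette = 2 * max(deg) - 1
    used = [set() for _ in range(n)]  # colours already present at each vertex
    colour = {}
    for u, v in edges:
        c = next(c for c in range(palette)
                 if c not in used[u] and c not in used[v])
        colour[(u, v)] = c
        used[u].add(c)
        used[v].add(c)
    return colour

triangle = [(0, 1), (1, 2), (0, 2)]
print(greedy_edge_coloring(triangle, 3))  # → {(0, 1): 0, (1, 2): 1, (0, 2): 2}
```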
Export
BibTeX
@techreport{dubhashi94136,
TITLE = {Near-optimal distributed edge coloring},
AUTHOR = {Dubhashi, Devdatt P. and Panconesi, Alessandro},
LANGUAGE = {eng},
NUMBER = {MPI-I-94-136},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1994},
DATE = {1994},
ABSTRACT = {We give a distributed randomized algorithm to edge color a network. Given a graph $G$ with $n$ nodes and maximum degree $\Delta$, the algorithm, \begin{itemize} \item For any fixed $\lambda >0$, colours $G$ with $(1+ \lambda) \Delta$ colours in time $O(\log n)$. \item For any fixed positive integer $s$, colours $G$ with $\Delta + \frac{\Delta}{(\log \Delta)^s}=(1 + o(1)) \Delta$ colours in time $O(\log n + \log^{2s} \Delta \log\log \Delta)$. \end{itemize} Both results hold with probability arbitrarily close to 1 as long as $\Delta (G) = \Omega (\log^{1+d} n)$, for some $d>0$.\\ The algorithm is based on the R{\"o}dl Nibble, a probabilistic strategy introduced by Vojtech R{\"o}dl. The analysis involves a certain pseudo--random phenomenon involving sets at the vertices.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Dubhashi, Devdatt P.
%A Panconesi, Alessandro
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Near-optimal distributed edge coloring :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B794-F
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1994
%P 12 p.
%X We give a distributed randomized algorithm to edge color a
network. Given a graph $G$ with $n$ nodes and maximum degree
$\Delta$, the algorithm,
\begin{itemize}
\item For any fixed $\lambda >0$, colours $G$ with $(1+ \lambda)
\Delta$ colours in time $O(\log n)$.
\item For any fixed positive integer $s$, colours $G$ with
$\Delta + \frac{\Delta}{(\log \Delta)^s}=(1 + o(1)) \Delta$
colours in time $O(\log n + \log^{2s} \Delta \log\log \Delta)$.
\end{itemize}
Both results hold with probability arbitrarily close to 1
as long as $\Delta (G) = \Omega (\log^{1+d}
n)$, for some $d>0$.\\
The algorithm is based on the Rödl Nibble, a probabilistic strategy
introduced by Vojtech Rödl. The analysis involves a certain
pseudo--random phenomenon involving sets at the
vertices.
%B Research Report / Max-Planck-Institut für Informatik
Stochastic majorisation: exploding some myths
D. P. Dubhashi and D. Ranjan
Technical Report, 1994b
D. P. Dubhashi and D. Ranjan
Technical Report, 1994b
Abstract
The analysis of many randomised algorithms involves random variables
that are not independent, and hence many of the standard tools from
classical probability theory that would be useful in the analysis,
such as the Chernoff--Hoeffding bounds are rendered
inapplicable. However, in many instances, the random variables
involved are, nevertheless {\em negatively related\/} in
the intuitive sense that when one of the variables is ``large'',
another is likely to be ``small''. (This notion is made precise and
analysed in [1].) In such situations, one is tempted to
conjecture that these variables are in some sense {\em stochastically
dominated\/} by a set of {\em independent\/} random variables with the
same marginals. Thereby, one hopes to salvage tools such as the
Chernoff--Hoeffding bound also for analysis involving the dependent
set of variables. The analysis in [6, 7, 8] seems to strongly
hint in this direction. In this note, we explode myths of this kind, and
argue that stochastic majorisation in conjunction with an independent
set of variables is actually much less useful a notion than it might
have appeared.
Export
BibTeX
@techreport{DubhashiRanjan94b,
TITLE = {Stochastic majorisation: exploding some myths},
AUTHOR = {Dubhashi, Devdatt P. and Ranjan, Desh},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/94-144},
NUMBER = {MPI-I-94-144},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1994},
DATE = {1994},
ABSTRACT = {The analysis of many randomised algorithms involves random variables that are not independent, and hence many of the standard tools from classical probability theory that would be useful in the analysis, such as the Chernoff--Hoeffding bounds are rendered inapplicable. However, in many instances, the random variables involved are, nevertheless {\em negatively related\/} in the intuitive sense that when one of the variables is ``large'', another is likely to be ``small''. (This notion is made precise and analysed in [1].) In such situations, one is tempted to conjecture that these variables are in some sense {\em stochastically dominated\/} by a set of {\em independent\/} random variables with the same marginals. Thereby, one hopes to salvage tools such as the Chernoff--Hoeffding bound also for analysis involving the dependent set of variables. The analysis in [6, 7, 8] seems to strongly hint in this direction. In this note, we explode myths of this kind, and argue that stochastic majorisation in conjunction with an independent set of variables is actually much less useful a notion than it might have appeared.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Dubhashi, Devdatt P.
%A Ranjan, Desh
%+ External Organizations
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Stochastic majorisation: exploding some myths :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B79D-E
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/94-144
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1994
%P 5 p.
%X The analysis of many randomised algorithms involves random variables
that are not independent, and hence many of the standard tools from
classical probability theory that would be useful in the analysis,
such as the Chernoff--Hoeffding bounds are rendered
inapplicable. However, in many instances, the random variables
involved are, nevertheless {\em negatively related\/} in
the intuitive sense that when one of the variables is ``large'',
another is likely to be ``small''. (This notion is made precise and
analysed in [1].) In such situations, one is tempted to
conjecture that these variables are in some sense {\em stochastically
dominated\/} by a set of {\em independent\/} random variables with the
same marginals. Thereby, one hopes to salvage tools such as the
Chernoff--Hoeffding bound also for analysis involving the dependent
set of variables. The analysis in [6, 7, 8] seems to strongly
hint in this direction. In this note, we explode myths of this kind, and
argue that stochastic majorisation in conjunction with an independent
set of variables is actually much less useful a notion than it might
have appeared.
%B Research Report / Max-Planck-Institut für Informatik
22. Workshop Komplexitätstheorie und effiziente Algorithmen
R. Fleischer
Technical Report, 1994
R. Fleischer
Technical Report, 1994
Abstract
This publication contains abstracts of the 22nd workshop on complexity
theory and efficient algorithms. The workshop was held on February 8, 1994,
at the Max-Planck-Institut für Informatik, Saarbrücken, Germany.
Export
BibTeX
@techreport{MPI-I-94-104,
TITLE = {22. Workshop Komplexit{\"a}tstheorie und effiziente Algorithmen},
AUTHOR = {Fleischer, Rudolf},
LANGUAGE = {eng},
LANGUAGE = {deu},
NUMBER = {MPI-I-94-104},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1994},
DATE = {1994},
ABSTRACT = {This publication contains abstracts of the 22nd workshop on complexity theory and efficient algorithms. The workshop was held on February 8, 1994, at the Max-Planck-Institut f{\"u}r Informatik, Saarbr{\"u}cken, Germany.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Fleischer, Rudolf
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T 22. Workshop Komplexitätstheorie und effiziente Algorithmen :
%G eng deu
%U http://hdl.handle.net/11858/00-001M-0000-0014-B786-F
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1994
%P 30 p.
%X This publication contains abstracts of the 22nd workshop on complexity
theory and efficient algorithms. The workshop was held on February 8, 1994,
at the Max-Planck-Institut für Informatik, Saarbrücken, Germany.
%B Research Report / Max-Planck-Institut für Informatik
The rectangle enclosure and point-dominance problems revisited
P. Gupta, R. Janardan, M. Smid and B. Dasgupta
Technical Report, 1994
P. Gupta, R. Janardan, M. Smid and B. Dasgupta
Technical Report, 1994
Abstract
We consider the problem of reporting the pairwise enclosures
among a set of $n$ axes-parallel rectangles in $\IR^2$,
which is equivalent to reporting dominance pairs in a set
of $n$ points in $\IR^4$. For more than ten years, it has been
an open problem whether these problems can be solved faster than
in $O(n \log^2 n +k)$ time, where $k$ denotes the number of
reported pairs. First, we give a divide-and-conquer algorithm
that matches the $O(n)$ space and $O(n \log^2 n +k)$ time
bounds of the algorithm of Lee and Preparata,
but is simpler.
Then we give another algorithm that uses $O(n)$ space and runs
in $O(n \log n \log\log n + k \log\log n)$ time. For the
special case where the rectangles have at most $\alpha$
different aspect ratios, we give an algorithm tha
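The enclosure relation treated above can be stated concretely. The following sketch is not the reported algorithm but a quadratic brute-force baseline for the same relation (function and variable names are illustrative):

```python
def enclosure_pairs(rects):
    """Report all pairs (i, j) where rectangle i contains rectangle j
    (boundaries may touch). Each rectangle is (x1, y1, x2, y2) with
    x1 < x2 and y1 < y2. This is equivalent to dominance reporting on
    points (x1, y1, -x2, -y2) in R^4. Runs in O(n^2) time -- a baseline,
    not the O(n log^2 n + k) algorithm of the report."""
    pairs = []
    for i, (ax1, ay1, ax2, ay2) in enumerate(rects):
        for j, (bx1, by1, bx2, by2) in enumerate(rects):
            if i != j and ax1 <= bx1 and ay1 <= by1 \
                      and bx2 <= ax2 and by2 <= ay2:
                pairs.append((i, j))
    return pairs
```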
Export
BibTeX
@techreport{GuptaJanardanSmidDasgupta94,
TITLE = {The rectangle enclosure and point-dominance problems revisited},
AUTHOR = {Gupta, Prosenjit and Janardan, Ravi and Smid, Michiel and Dasgupta, Bhaskar},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/94-142},
NUMBER = {MPI-I-94-142},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1994},
DATE = {1994},
ABSTRACT = {We consider the problem of reporting the pairwise enclosures among a set of $n$ axes-parallel rectangles in $\IR^2$, which is equivalent to reporting dominance pairs in a set of $n$ points in $\IR^4$. For more than ten years, it has been an open problem whether these problems can be solved faster than in $O(n \log^2 n +k)$ time, where $k$ denotes the number of reported pairs. First, we give a divide-and-conquer algorithm that matches the $O(n)$ space and $O(n \log^2 n +k)$ time bounds of the algorithm of Lee and Preparata, but is simpler. Then we give another algorithm that uses $O(n)$ space and runs in $O(n \log n \log\log n + k \log\log n)$ time. For the special case where the rectangles have at most $\alpha$ different aspect ratios, we give an algorithm tha},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Gupta, Prosenjit
%A Janardan, Ravi
%A Smid, Michiel
%A Dasgupta, Bhaskar
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
External Organizations
Algorithms and Complexity, MPI for Informatics, Max Planck Society
External Organizations
%T The rectangle enclosure and point-dominance problems revisited :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B525-8
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/94-142
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1994
%P 16 p.
%X We consider the problem of reporting the pairwise enclosures
among a set of $n$ axes-parallel rectangles in $\IR^2$,
which is equivalent to reporting dominance pairs in a set
of $n$ points in $\IR^4$. For more than ten years, it has been
an open problem whether these problems can be solved faster than
in $O(n \log^2 n +k)$ time, where $k$ denotes the number of
reported pairs. First, we give a divide-and-conquer algorithm
that matches the $O(n)$ space and $O(n \log^2 n +k)$ time
bounds of the algorithm of Lee and Preparata,
but is simpler.
Then we give another algorithm that uses $O(n)$ space and runs
in $O(n \log n \log\log n + k \log\log n)$ time. For the
special case where the rectangles have at most $\alpha$
different aspect ratios, we give an algorithm tha
%B Research Report / Max-Planck-Institut für Informatik
Fast algorithms for collision and proximity problems involving moving geometric objects
P. Gupta, R. Janardan and M. Smid
Technical Report, 1994
P. Gupta, R. Janardan and M. Smid
Technical Report, 1994
Abstract
Consider a set of geometric objects, such as points, line
segments, or axes-parallel hyperrectangles in $\IR^d$, that
move with constant but possibly different velocities along
linear trajectories. Efficient algorithms are presented for
several problems defined on such objects, such as determining
whether any two objects ever collide and computing the minimum
inter-point separation or minimum diameter that ever occurs.
The strategy used involves reducing the given
problem on moving objects to a different
problem on a set of static objects, and then
solving the latter problem using
techniques based on sweeping, orthogonal range
searching, simplex composition, and parametric search.
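As a minimal illustration of the reduce-moving-to-static strategy described above, the following sketch (not from the report; names are illustrative) finds the first collision of two points on linear trajectories by solving the quadratic for their squared distance:

```python
def first_point_collision(p1, v1, p2, v2, eps=1e-12):
    """Earliest t >= 0 at which two points moving with constant
    velocities coincide, or None. Position at time t is p + v*t,
    componentwise. The moving problem reduces to static algebra on
    the relative position and velocity."""
    d = [a - b for a, b in zip(p1, p2)]   # relative position at t = 0
    w = [a - b for a, b in zip(v1, v2)]   # relative velocity
    # Squared distance is the quadratic a*t^2 + b*t + c; roots = collisions.
    a = sum(x * x for x in w)
    b = 2 * sum(x * y for x, y in zip(d, w))
    c = sum(x * x for x in d)
    if a < eps:                            # same velocity: collide iff coincident
        return 0.0 if c < eps else None
    disc = b * b - 4 * a * c
    if disc < -eps:
        return None                        # distance never reaches zero
    disc = max(disc, 0.0)
    t = (-b - disc ** 0.5) / (2 * a)       # smaller root = first contact
    return t if t >= -eps else None
```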
Export
BibTeX
@techreport{GuptaJanardanSmid94,
TITLE = {Fast algorithms for collision and proximity problems involving moving geometric objects},
AUTHOR = {Gupta, Prosenjit and Janardan, Ravi and Smid, Michiel},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/94-113},
NUMBER = {MPI-I-94-113},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1994},
DATE = {1994},
ABSTRACT = {Consider a set of geometric objects, such as points, line segments, or axes-parallel hyperrectangles in $\IR^d$, that move with constant but possibly different velocities along linear trajectories. Efficient algorithms are presented for several problems defined on such objects, such as determining whether any two objects ever collide and computing the minimum inter-point separation or minimum diameter that ever occurs. The strategy used involves reducing the given problem on moving objects to a different problem on a set of static objects, and then solving the latter problem using techniques based on sweeping, orthogonal range searching, simplex composition, and parametric search.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Gupta, Prosenjit
%A Janardan, Ravi
%A Smid, Michiel
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
External Organizations
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Fast algorithms for collision and proximity problems involving moving geometric objects :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B50E-D
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/94-113
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1994
%P 22 p.
%X Consider a set of geometric objects, such as points, line
segments, or axes-parallel hyperrectangles in $\IR^d$, that
move with constant but possibly different velocities along
linear trajectories. Efficient algorithms are presented for
several problems defined on such objects, such as determining
whether any two objects ever collide and computing the minimum
inter-point separation or minimum diameter that ever occurs.
The strategy used involves reducing the given
problem on moving objects to a different
problem on a set of static objects, and then
solving the latter problem using
techniques based on sweeping, orthogonal range
searching, simplex composition, and parametric search.
%B Research Report / Max-Planck-Institut für Informatik
Quickest paths: faster algorithms and dynamization
D. Kargaris, G. E. Pantziou, S. Tragoudas and C. Zaroliagis
Technical Report, 1994
D. Kargaris, G. E. Pantziou, S. Tragoudas and C. Zaroliagis
Technical Report, 1994
Abstract
Given a network $N=(V,E,{c},{l})$, where
$G=(V,E)$, $|V|=n$ and $|E|=m$,
is a directed graph, ${c}(e) > 0$ is the capacity
and ${l}(e) \ge 0$ is the lead time (or delay) for each edge $e\in E$,
the quickest path problem is to find a path for a
given source--destination pair such that the total lead time plus the
inverse of the minimum edge capacity of the path
is minimal. The problem has applications to fast data
transmissions in communication networks. The best previous algorithm for the
single pair quickest path problem runs in time $O(r m+r n \log n)$,
where $r$ is the number of distinct capacities of $N$.
In this paper,
we present algorithms for general, sparse and planar
networks that have significantly lower running times.
For general networks, we show that the time complexity can be
reduced to $O(r^{\ast} m+r^{\ast} n \log n)$,
where $r^{\ast}$ is at most the number of capacities greater than the
capacity of the shortest (with respect to lead time) path in $N$.
For sparse networks, we present an algorithm with time complexity
$O(n \log n + r^{\ast} n + r^{\ast} \tilde{\gamma} \log \tilde{\gamma})$,
where $\tilde{\gamma}$ is a topological measure of $N$.
Since for sparse networks $\tilde{\gamma}$ ranges from $1$
up to $\Theta(n)$,
this constitutes an improvement over the previously known bound of
$O(r n \log n)$ in all cases that $\tilde{\gamma}=o(n)$.
For planar networks, the complexity becomes $O(n \log n +
n\log^3 \tilde{\gamma}+ r^{\ast} \tilde{\gamma})$.
Similar improvements are obtained
for the all--pairs quickest path problem.
We also give the first algorithm for solving the dynamic quickest path problem.
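For context, the $O(r m + r n \log n)$ baseline that the abstract improves on can be sketched as one shortest-path run per distinct capacity (illustrative code, not the report's algorithm; names are hypothetical):

```python
import heapq

def quickest_path(n, edges, s, t):
    """Quickest s-t path value: total lead time plus 1/(min edge capacity),
    minimized over paths. One Dijkstra run per distinct capacity, as in the
    classical O(r(m + n log n)) scheme the report improves on.
    edges: (u, v, capacity, lead_time) directed edges; nodes 0..n-1."""
    best = float('inf')
    for r in sorted({c for _, _, c, _ in edges}):
        # Restrict to edges of capacity >= r; the bottleneck term is then 1/r.
        adj = [[] for _ in range(n)]
        for u, v, c, l in edges:
            if c >= r:
                adj[u].append((v, l))
        dist = [float('inf')] * n
        dist[s] = 0.0
        pq = [(0.0, s)]
        while pq:                          # Dijkstra on lead times
            d, u = heapq.heappop(pq)
            if d > dist[u]:
                continue
            for v, l in adj[u]:
                if d + l < dist[v]:
                    dist[v] = d + l
                    heapq.heappush(pq, (dist[v], v))
        best = min(best, dist[t] + 1.0 / r)
    return best
```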
Export
BibTeX
@techreport{KargarisPantziouTragoudasZaroliagis94,
TITLE = {Quickest paths: faster algorithms and dynamization},
AUTHOR = {Kargaris, Dimitrios and Pantziou, Grammati E. and Tragoudas, Spyros and Zaroliagis, Christos},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/94-110},
NUMBER = {MPI-I-94-110},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1994},
DATE = {1994},
ABSTRACT = {Given a network $N=(V,E,{c},{l})$, where $G=(V,E)$, $|V|=n$ and $|E|=m$, is a directed graph, ${c}(e) > 0$ is the capacity and ${l}(e) \ge 0$ is the lead time (or delay) for each edge $e\in E$, the quickest path problem is to find a path for a given source--destination pair such that the total lead time plus the inverse of the minimum edge capacity of the path is minimal. The problem has applications to fast data transmissions in communication networks. The best previous algorithm for the single pair quickest path problem runs in time $O(r m+r n \log n)$, where $r$ is the number of distinct capacities of $N$. In this paper, we present algorithms for general, sparse and planar networks that have significantly lower running times. For general networks, we show that the time complexity can be reduced to $O(r^{\ast} m+r^{\ast} n \log n)$, where $r^{\ast}$ is at most the number of capacities greater than the capacity of the shortest (with respect to lead time) path in $N$. For sparse networks, we present an algorithm with time complexity $O(n \log n + r^{\ast} n + r^{\ast} \tilde{\gamma} \log \tilde{\gamma})$, where $\tilde{\gamma}$ is a topological measure of $N$. Since for sparse networks $\tilde{\gamma}$ ranges from $1$ up to $\Theta(n)$, this constitutes an improvement over the previously known bound of $O(r n \log n)$ in all cases that $\tilde{\gamma}=o(n)$. For planar networks, the complexity becomes $O(n \log n + n\log^3 \tilde{\gamma}+ r^{\ast} \tilde{\gamma})$. Similar improvements are obtained for the all--pairs quickest path problem. We also give the first algorithm for solving the dynamic quickest path problem.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Kargaris, Dimitrios
%A Pantziou, Grammati E.
%A Tragoudas, Spyros
%A Zaroliagis, Christos
%+ External Organizations
External Organizations
External Organizations
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Quickest paths: faster algorithms and dynamization :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B505-0
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/94-110
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1994
%P 15 p.
%X Given a network $N=(V,E,{c},{l})$, where
$G=(V,E)$, $|V|=n$ and $|E|=m$,
is a directed graph, ${c}(e) > 0$ is the capacity
and ${l}(e) \ge 0$ is the lead time (or delay) for each edge $e\in E$,
the quickest path problem is to find a path for a
given source--destination pair such that the total lead time plus the
inverse of the minimum edge capacity of the path
is minimal. The problem has applications to fast data
transmissions in communication networks. The best previous algorithm for the
single pair quickest path problem runs in time $O(r m+r n \log n)$,
where $r$ is the number of distinct capacities of $N$.
In this paper,
we present algorithms for general, sparse and planar
networks that have significantly lower running times.
For general networks, we show that the time complexity can be
reduced to $O(r^{\ast} m+r^{\ast} n \log n)$,
where $r^{\ast}$ is at most the number of capacities greater than the
capacity of the shortest (with respect to lead time) path in $N$.
For sparse networks, we present an algorithm with time complexity
$O(n \log n + r^{\ast} n + r^{\ast} \tilde{\gamma} \log \tilde{\gamma})$,
where $\tilde{\gamma}$ is a topological measure of $N$.
Since for sparse networks $\tilde{\gamma}$ ranges from $1$
up to $\Theta(n)$,
this constitutes an improvement over the previously known bound of
$O(r n \log n)$ in all cases that $\tilde{\gamma}=o(n)$.
For planar networks, the complexity becomes $O(n \log n +
n\log^3 \tilde{\gamma}+ r^{\ast} \tilde{\gamma})$.
Similar improvements are obtained
for the all--pairs quickest path problem.
We also give the first algorithm for solving the dynamic quickest path problem.
%B Research Report / Max-Planck-Institut für Informatik
Further improvements of Steiner tree approximations
M. Karpinski and A. Zelikovsky
Technical Report, 1994
M. Karpinski and A. Zelikovsky
Technical Report, 1994
Abstract
The Steiner tree problem requires finding a shortest tree
connecting a given set of terminal points in a metric space.
We suggest a better and faster heuristic for the Steiner problem
in graphs and in the rectilinear plane. This heuristic finds a Steiner
tree at most 1.757 and 1.267 times longer than the optimal solution
in graphs and the rectilinear plane, respectively.
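For context, the classical factor-2 terminal-MST heuristic that such approximation ratios improve on can be sketched as follows (illustrative code, not the 1.757 algorithm; names are hypothetical):

```python
def mst_steiner_upper_bound(n, edges, terminals):
    """Classic terminal-MST heuristic for the Steiner problem in graphs:
    build the metric closure of the terminals and take an MST there.
    The resulting tree is at most 2x optimal -- the baseline bound that
    improved heuristics such as the one above beat.
    edges: (u, v, weight) undirected; returns the heuristic tree weight."""
    INF = float('inf')
    d = [[INF] * n for _ in range(n)]
    for i in range(n):
        d[i][i] = 0
    for u, v, w in edges:
        d[u][v] = min(d[u][v], w)
        d[v][u] = min(d[v][u], w)
    for k in range(n):                      # Floyd-Warshall metric closure
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    # Prim's MST over the terminals under shortest-path distances.
    ts = list(terminals)
    in_tree = {ts[0]}
    total = 0.0
    while len(in_tree) < len(ts):
        w, v = min((d[a][b], b) for a in in_tree
                   for b in ts if b not in in_tree)
        total += w
        in_tree.add(v)
    return total
```

On a star with unit edges from a center to three terminals, the heuristic returns 4 while the optimal Steiner tree (using the center) costs 3, matching the factor-2 guarantee.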
Export
BibTeX
@techreport{KarpinskiZelikovsky94,
TITLE = {Further improvements of Steiner tree approximations},
AUTHOR = {Karpinski, Marek and Zelikovsky, Alexander},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/94-158},
NUMBER = {MPI-I-94-158},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1994},
DATE = {1994},
ABSTRACT = {The Steiner tree problem requires finding a shortest tree connecting a given set of terminal points in a metric space. We suggest a better and faster heuristic for the Steiner problem in graphs and in the rectilinear plane. This heuristic finds a Steiner tree at most 1.757 and 1.267 times longer than the optimal solution in graphs and the rectilinear plane, respectively.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Karpinski, Marek
%A Zelikovsky, Alexander
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Further improvements of Steiner tree approximations :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B7A4-B
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/94-158
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1994
%P 10 p.
%X The Steiner tree problem requires finding a shortest tree
connecting a given set of terminal points in a metric space.
We suggest a better and faster heuristic for the Steiner problem
in graphs and in the rectilinear plane. This heuristic finds a Steiner
tree at most 1.757 and 1.267 times longer than the optimal solution
in graphs and the rectilinear plane, respectively.
%B Research Report / Max-Planck-Institut für Informatik
Towards practical permutation routing on meshes
M. Kaufmann, U. Meyer and J. F. Sibeyn
Technical Report, 1994
M. Kaufmann, U. Meyer and J. F. Sibeyn
Technical Report, 1994
Abstract
We consider the permutation routing problem on two-dimensional $n
\times n$ meshes. To be practical, a routing algorithm is required
to ensure very small queue sizes $Q$, and very low running time $T$,
not only asymptotically but particularly also for the practically
important $n$ up to $1000$. With a technique inspired by a
scheme of Kaklamanis/Krizanc/Rao, we obtain a near-optimal result:
$T = 2 \cdot n + {\cal O}(1)$ with $Q = 2$. Although $Q$ is very
attractive now, the lower order terms in $T$ make this algorithm
highly impractical. Therefore we present simple schemes which are
asymptotically slower, but have $T$ around $3 \cdot n$ for {\em all}
$n$ and $Q$ between 2 and 8.
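A toy simulation of the $T$/$Q$ trade-off studied above (illustrative only, not the near-optimal scheme): greedy row-first routing with one packet per directed link per step and farthest-to-go tie-breaking:

```python
from collections import defaultdict

def route_xy(n, perm):
    """Greedy row-first (XY) permutation routing on an n x n mesh.
    perm maps source (r, c) -> destination (r', c'). Each directed link
    carries one packet per step; contention is resolved in favor of the
    packet with the largest remaining distance.
    Returns (steps, max_queue) -- the T and Q the abstract optimizes."""
    packets = list(perm.items())
    pos = {i: src for i, (src, _) in enumerate(packets)}
    dst = {i: d for i, (_, d) in enumerate(packets)}
    steps, max_queue = 0, 1
    while any(pos[i] != dst[i] for i in pos):
        steps += 1
        requests = defaultdict(list)
        for i in pos:                      # each packet requests its XY hop
            (r, c), (tr, tc) = pos[i], dst[i]
            if c != tc:
                nxt = (r, c + (1 if tc > c else -1))
            elif r != tr:
                nxt = (r + (1 if tr > r else -1), c)
            else:
                continue                   # already delivered
            requests[(pos[i], nxt)].append(i)
        moves = {}
        for link, contenders in requests.items():
            win = max(contenders, key=lambda i:
                      abs(dst[i][0] - pos[i][0]) + abs(dst[i][1] - pos[i][1]))
            moves[win] = link[1]           # farthest-to-go wins the link
        for i, p in moves.items():
            pos[i] = p
        q = defaultdict(int)
        for i in pos:
            q[pos[i]] += 1
        max_queue = max(max_queue, max(q.values()))
    return steps, max_queue
```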
Export
BibTeX
@techreport{KaufmannMeyerSibeyn94,
TITLE = {Towards practical permutation routing on meshes},
AUTHOR = {Kaufmann, Michael and Meyer, Ulrich and Sibeyn, Jop Frederic},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/94-153},
NUMBER = {MPI-I-94-153},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1994},
DATE = {1994},
ABSTRACT = {We consider the permutation routing problem on two-dimensional $n \times n$ meshes. To be practical, a routing algorithm is required to ensure very small queue sizes $Q$, and very low running time $T$, not only asymptotically but particularly also for the practically important $n$ up to $1000$. With a technique inspired by a scheme of Kaklamanis/Krizanc/Rao, we obtain a near-optimal result: $T = 2 \cdot n + {\cal O}(1)$ with $Q = 2$. Although $Q$ is very attractive now, the lower order terms in $T$ make this algorithm highly impractical. Therefore we present simple schemes which are asymptotically slower, but have $T$ around $3 \cdot n$ for {\em all} $n$ and $Q$ between 2 and 8.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Kaufmann, Michael
%A Meyer, Ulrich
%A Sibeyn, Jop Frederic
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Towards practical permutation routing on meshes :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B53F-0
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/94-153
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1994
%P 11 p.
%X We consider the permutation routing problem on two-dimensional $n
\times n$ meshes. To be practical, a routing algorithm is required
to ensure very small queue sizes $Q$, and very low running time $T$,
not only asymptotically but particularly also for the practically
important $n$ up to $1000$. With a technique inspired by a
scheme of Kaklamanis/Krizanc/Rao, we obtain a near-optimal result:
$T = 2 \cdot n + {\cal O}(1)$ with $Q = 2$. Although $Q$ is very
attractive now, the lower order terms in $T$ make this algorithm
highly impractical. Therefore we present simple schemes which are
asymptotically slower, but have $T$ around $3 \cdot n$ for {\em all}
$n$ and $Q$ between 2 and 8.
%B Research Report / Max-Planck-Institut für Informatik
Hammock-on-ears decomposition: a technique for the efficient parallel solution of shortest paths and other problems
G. Kavvadias, G. E. Pantziou, P. G. Spirakis and C. Zaroliagis
Technical Report, 1994
G. Kavvadias, G. E. Pantziou, P. G. Spirakis and C. Zaroliagis
Technical Report, 1994
Abstract
We show how to decompose efficiently in parallel
{\em any} graph into a number,
$\tilde{\gamma}$, of outerplanar subgraphs
(called {\em hammocks}) satisfying
certain separator properties. Our work combines and extends
the sequential hammock decomposition technique
introduced by G.~Frederickson and
the parallel ear decomposition
technique, thus we call it the {\em hammock-on-ears decomposition}.
We mention that hammock-on-ears
decomposition also draws from techniques in computational
geometry and that an embedding of the graph does not need to
be provided with the input. We achieve this decomposition
in $O(\log n\log\log n)$ time using $O(n+m)$ CREW PRAM
processors, for an $n$-vertex, $m$-edge graph or digraph.
The hammock-on-ears decomposition implies a general framework
for solving graph problems efficiently. Its value is
demonstrated by a variety of applications on a
significant class of (di)graphs, namely that of
{\em sparse (di)graphs}. This class consists of all (di)graphs
which have a $\tilde{\gamma}$ between $1$ and $\Theta(n)$,
and includes planar graphs and graphs with genus $o(n)$.
We improve previous bounds for certain instances of shortest paths
and related problems, in this class of graphs. These problems include
all pairs shortest paths, all pairs reachability,
Export
BibTeX
@techreport{KavvadiasPantziouSpirakisZaroliagis94,
TITLE = {Hammock-on-ears decomposition: a technique for the efficient parallel solution of shortest paths and other problems},
AUTHOR = {Kavvadias, G. and Pantziou, Grammati E. and Spirakis, P. G. and Zaroliagis, Christos},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/94-131},
NUMBER = {MPI-I-94-131},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1994},
DATE = {1994},
ABSTRACT = {We show how to decompose efficiently in parallel {\em any} graph into a number, $\tilde{\gamma}$, of outerplanar subgraphs (called {\em hammocks}) satisfying certain separator properties. Our work combines and extends the sequential hammock decomposition technique introduced by G.~Frederickson and the parallel ear decomposition technique, thus we call it the {\em hammock-on-ears decomposition}. We mention that hammock-on-ears decomposition also draws from techniques in computational geometry and that an embedding of the graph does not need to be provided with the input. We achieve this decomposition in $O(\log n\log\log n)$ time using $O(n+m)$ CREW PRAM processors, for an $n$-vertex, $m$-edge graph or digraph. The hammock-on-ears decomposition implies a general framework for solving graph problems efficiently. Its value is demonstrated by a variety of applications on a significant class of (di)graphs, namely that of {\em sparse (di)graphs}. This class consists of all (di)graphs which have a $\tilde{\gamma}$ between $1$ and $\Theta(n)$, and includes planar graphs and graphs with genus $o(n)$. We improve previous bounds for certain instances of shortest paths and related problems, in this class of graphs. These problems include all pairs shortest paths, all pairs reachability,},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Kavvadias, G.
%A Pantziou, Grammati E.
%A Spirakis, P. G.
%A Zaroliagis, Christos
%+ External Organizations
External Organizations
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Hammock-on-ears decomposition: a technique for the efficient parallel solution of shortest paths and other problems :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B521-0
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/94-131
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1994
%P 38 p.
%X We show how to decompose efficiently in parallel
{\em any} graph into a number,
$\tilde{\gamma}$, of outerplanar subgraphs
(called {\em hammocks}) satisfying
certain separator properties. Our work combines and extends
the sequential hammock decomposition technique
introduced by G.~Frederickson and
the parallel ear decomposition
technique, thus we call it the {\em hammock-on-ears decomposition}.
We mention that hammock-on-ears
decomposition also draws from techniques in computational
geometry and that an embedding of the graph does not need to
be provided with the input. We achieve this decomposition
in $O(\log n\log\log n)$ time using $O(n+m)$ CREW PRAM
processors, for an $n$-vertex, $m$-edge graph or digraph.
The hammock-on-ears decomposition implies a general framework
for solving graph problems efficiently. Its value is
demonstrated by a variety of applications on a
significant class of (di)graphs, namely that of
{\em sparse (di)graphs}. This class consists of all (di)graphs
which have a $\tilde{\gamma}$ between $1$ and $\Theta(n)$,
and includes planar graphs and graphs with genus $o(n)$.
We improve previous bounds for certain instances of shortest paths
and related problems, in this class of graphs. These problems include
all pairs shortest paths, all pairs reachability,
%B Research Report / Max-Planck-Institut für Informatik
Implementation of a sweep line algorithm for the Straight Line Segment Intersection Problem
K. Mehlhorn and S. Näher
Technical Report, 1994
K. Mehlhorn and S. Näher
Technical Report, 1994
Abstract
We describe a robust and efficient implementation of the Bentley-Ottmann
sweep line algorithm based on the LEDA library
of efficient data types and algorithms. The program
computes the planar graph $G$ induced by a set $S$ of straight line segments
in the plane. The nodes of $G$ are all endpoints and all proper
intersection points of segments in $S$. The edges of $G$ are the maximal
relatively open subsegments of segments in $S$ that contain no node of $G$.
All edges are directed from left to right or upwards.
The algorithm runs in time $O((n+s)\log n)$ where $n$ is the number of
segments and $s$ is the number of vertices of the graph $G$. The implementation
uses exact arithmetic for the reliable realization of the geometric
primitives and it uses floating point filters to reduce the overhead of
exact arithmetic.
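The role exact arithmetic plays here can be illustrated with a quadratic brute-force analogue of the intersection primitive, using rational instead of floating-point arithmetic (a sketch, not the LEDA implementation; names are illustrative):

```python
from fractions import Fraction
from itertools import combinations

def proper_intersections(segments):
    """All proper pairwise intersection points of line segments, computed
    exactly with rational arithmetic so that geometric predicates cannot
    fail from rounding. O(n^2) brute force, not the O((n+s) log n) sweep.
    segments: ((x1, y1), (x2, y2)) tuples with integer coordinates."""
    pts = set()
    for (p, q), (r, s) in combinations(segments, 2):
        # d = cross(q - p, s - r); zero means parallel segments.
        d = (q[0] - p[0]) * (s[1] - r[1]) - (q[1] - p[1]) * (s[0] - r[0])
        if d == 0:
            continue
        # Parameters of the intersection on each supporting line.
        t = Fraction((r[0] - p[0]) * (s[1] - r[1])
                     - (r[1] - p[1]) * (s[0] - r[0]), d)
        u = Fraction((r[0] - p[0]) * (q[1] - p[1])
                     - (r[1] - p[1]) * (q[0] - p[0]), d)
        if 0 < t < 1 and 0 < u < 1:        # strictly interior on both segments
            pts.add((p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1])))
    return pts
```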
Export
BibTeX
@techreport{MehlhornNaeher94,
TITLE = {Implementation of a sweep line algorithm for the Straight Line Segment Intersection Problem},
AUTHOR = {Mehlhorn, Kurt and N{\"a}her, Stefan},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/94-160},
NUMBER = {MPI-I-94-160},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1994},
DATE = {1994},
ABSTRACT = {We describe a robust and efficient implementation of the Bentley-Ottmann sweep line algorithm based on the LEDA library of efficient data types and algorithms. The program computes the planar graph $G$ induced by a set $S$ of straight line segments in the plane. The nodes of $G$ are all endpoints and all proper intersection points of segments in $S$. The edges of $G$ are the maximal relatively open subsegments of segments in $S$ that contain no node of $G$. All edges are directed from left to right or upwards. The algorithm runs in time $O((n+s)\log n)$ where $n$ is the number of segments and $s$ is the number of vertices of the graph $G$. The implementation uses exact arithmetic for the reliable realization of the geometric primitives and it uses floating point filters to reduce the overhead of exact arithmetic.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Mehlhorn, Kurt
%A Näher, Stefan
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Implementation of a sweep line algorithm for the Straight Line Segment Intersection Problem :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B7A7-5
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/94-160
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1994
%P 41 p.
%X We describe a robust and efficient implementation of the Bentley-Ottmann
sweep line algorithm based on the LEDA library
of efficient data types and algorithms. The program
computes the planar graph $G$ induced by a set $S$ of straight line segments
in the plane. The nodes of $G$ are all endpoints and all proper
intersection points of segments in $S$. The edges of $G$ are the maximal
relatively open subsegments of segments in $S$ that contain no node of $G$.
All edges are directed from left to right or upwards.
The algorithm runs in time $O((n+s)\log n)$ where $n$ is the number of
segments and $s$ is the number of vertices of the graph $G$. The implementation
uses exact arithmetic for the reliable realization of the geometric
primitives and it uses floating point filters to reduce the overhead of
exact arithmetic.
%B Research Report / Max-Planck-Institut für Informatik
On the embedding phase of the Hopcroft and Tarjan planarity testing algorithm
K. Mehlhorn and P. Mutzel
Technical Report, 1994
K. Mehlhorn and P. Mutzel
Technical Report, 1994
Abstract
We give a detailed description of the embedding phase of the Hopcroft
and Tarjan planarity testing algorithm. The embedding phase runs in
linear time. An implementation based on this paper can be found in
[Mehlhorn-Mutzel-Naeher-94].
Export
BibTeX
@techreport{MehlhornMutzel94,
TITLE = {On the embedding phase of the Hopcroft and Tarjan planarity testing algorithm},
AUTHOR = {Mehlhorn, Kurt and Mutzel, Petra},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/94-117},
NUMBER = {MPI-I-94-117},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1994},
DATE = {1994},
ABSTRACT = {We give a detailed description of the embedding phase of the Hopcroft and Tarjan planarity testing algorithm. The embedding phase runs in linear time. An implementation based on this paper can be found in [Mehlhorn-Mutzel-Naeher-94].},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Mehlhorn, Kurt
%A Mutzel, Petra
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T On the embedding phase of the Hopcroft and Tarjan planarity testing algorithm :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B51D-B
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/94-117
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1994
%P 8 p.
%X We give a detailed description of the embedding phase of the Hopcroft
and Tarjan planarity testing algorithm. The embedding phase runs in
linear time. An implementation based on this paper can be found in
[Mehlhorn-Mutzel-Naeher-94].
%B Research Report / Max-Planck-Institut für Informatik
An implementation of a Convex Hull Algorithm, Version 1.0
M. Müller and J. Ziegler
Technical Report, 1994
M. Müller and J. Ziegler
Technical Report, 1994
Abstract
We give an implementation of an incremental construction
algorithm for convex hulls in $\IR^d$ using {\em Literate Programming}
and {\em LEDA} in C++.
We treat convex hulls in arbitrary dimensions without any
non-degeneracy assumption.
The main goal of this paper is to demonstrate the benefits of
the literate programming approach.
We find that the time we spent for the documentation parts
is well invested.
It leads to a much better understanding of the program and to
much better code.
Besides being easier to understand and thus being much easier to modify,
it is first of all much more likely to be correct.
In particular, a literate program takes much less time to debug.
The difference between traditional straightforward programming
and literate programming is somewhat like the difference between
having the idea to a proof of some theorem in mind versus actually
writing it down accurately (and thereby often recognizing that the proof
is not as easy as one thought).
Export
BibTeX
@techreport{MuellerZiegler94,
TITLE = {An implementation of a Convex Hull Algorithm, Version 1.0},
AUTHOR = {M{\"u}ller, Michael and Ziegler, Joachim},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/94-105},
NUMBER = {MPI-I-94-105},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1994},
DATE = {1994},
ABSTRACT = {We give an implementation of an incremental construction algorithm for convex hulls in $\IR^d$ using {\em Literate Programming} and {\em LEDA} in C++. We treat convex hulls in arbitrary dimensions without any non-degeneracy assumption. The main goal of this paper is to demonstrate the benefits of the literate programming approach. We find that the time we spent for the documentation parts is well invested. It leads to a much better understanding of the program and to much better code. Besides being easier to understand and thus being much easier to modify, it is first of all much more likely to be correct. In particular, a literate program takes much less time to debug. The difference between traditional straightforward programming and literate programming is somewhat like the difference between having the idea to a proof of some theorem in mind versus actually writing it down accurately (and thereby often recognizing that the proof is not as easy as one thought).},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Müller, Michael
%A Ziegler, Joachim
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T An implementation of a Convex Hull Algorithm, Version 1.0 :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B4FF-E
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/94-105
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1994
%P 63 p.
%X We give an implementation of an incremental construction
algorithm for convex hulls in $\IR^d$ using {\em Literate Programming}
and {\em LEDA} in C++.
We treat convex hulls in arbitrary dimensions without any
non-degeneracy assumption.
The main goal of this paper is to demonstrate the benefits of
the literate programming approach.
We find that the time we spent for the documentation parts
is well invested.
It leads to a much better understanding of the program and to
much better code.
Besides being easier to understand and thus being much easier to modify,
it is, first of all, much more likely to be correct.
In particular, a literate program takes much less time to debug.
The difference between traditional straightforward programming
and literate programming is somewhat like the difference between
having the idea of a proof of some theorem in mind versus actually
writing it down accurately (and thereby often recognizing that the proof
is not as easy as one thought).
%B Research Report / Max-Planck-Institut für Informatik
Efficient collision detection for moving polyhedra
E. Schömer and C. Thiel
Technical Report, 1994
E. Schömer and C. Thiel
Technical Report, 1994
Abstract
In this paper we consider the following problem: given two general polyhedra
of complexity $n$, one of which is moving translationally or rotating about a fixed axis, determine the first collision (if any) between them. We present an
algorithm with running time $O(n^{8/5 + \epsilon})$ for the case of
translational movements and running time $O(n^{5/3 + \epsilon})$ for
rotational movements, where $\epsilon$ is an arbitrary positive constant.
This is the first known algorithm with sub-quadratic running time.
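As an illustrative aside (not part of the report, which gives a sub-quadratic algorithm for general 3D polyhedra): the flavour of the first-collision problem can be seen in a minimal Python sketch for a single translating point against the edges of a 2D polygon, solving a 2x2 linear system per edge. The function name and setup are hypothetical.

```python
def first_collision(p, v, polygon, eps=1e-12):
    """Earliest time t >= 0 at which the moving point p + t*v hits an
    edge of `polygon` (list of vertices in order), or None if it never does."""
    best = None
    n = len(polygon)
    for i in range(n):
        ax, ay = polygon[i]
        bx, by = polygon[(i + 1) % n]
        ex, ey = bx - ax, by - ay            # edge direction a -> b
        # Solve p + t*v = a + s*e, i.e. [[vx,-ex],[vy,-ey]] [t,s]^T = a - p.
        det = v[0] * (-ey) - v[1] * (-ex)
        if abs(det) < eps:
            continue                          # motion parallel to this edge
        rx, ry = ax - p[0], ay - p[1]
        t = (rx * (-ey) - ry * (-ex)) / det   # Cramer's rule, first column
        s = (v[0] * ry - v[1] * rx) / det     # Cramer's rule, second column
        if t >= 0 and -eps <= s <= 1 + eps:   # hit within the edge segment
            if best is None or t < best:
                best = t
    return best
```

This brute force costs O(n) per moving feature pair, i.e. O(n^2) for two polyhedra of complexity n; the point of the report is precisely to beat that quadratic bound.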
Export
BibTeX
@techreport{SchoemerThiel94,
TITLE = {Efficient collision detection for moving polyhedra},
AUTHOR = {Sch{\"o}mer, Elmar and Thiel, Christian},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/94-147},
NUMBER = {MPI-I-94-147},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1994},
DATE = {1994},
ABSTRACT = {In this paper we consider the following problem: given two general polyhedra of complexity $n$, one of which is moving translationally or rotating about a fixed axis, determine the first collision (if any) between them. We present an algorithm with running time $O(n^{8/5 + \epsilon})$ for the case of translational movements and running time $O(n^{5/3 + \epsilon})$ for rotational movements, where $\epsilon$ is an arbitrary positive constant. This is the first known algorithm with sub-quadratic running time.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Schömer, Elmar
%A Thiel, Christian
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Efficient collision detection for moving polyhedra :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B52A-D
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/94-147
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1994
%P 24 p.
%X In this paper we consider the following problem: given two general polyhedra
of complexity $n$, one of which is moving translationally or rotating about a fixed axis, determine the first collision (if any) between them. We present an
algorithm with running time $O(n^{8/5 + \epsilon})$ for the case of
translational movements and running time $O(n^{5/3 + \epsilon})$ for
rotational movements, where $\epsilon$ is an arbitrary positive constant.
This is the first known algorithm with sub-quadratic running time.
%B Research Report / Max-Planck-Institut für Informatik
Desnakification of mesh sorting algorithms
J. F. Sibeyn
Technical Report, 1994
J. F. Sibeyn
Technical Report, 1994
Abstract
In all recent near-optimal sorting algorithms for meshes, the
packets are sorted with respect to some snake-like indexing. Such
algorithms are useless in many practical applications. In this
paper we present deterministic algorithms for sorting with respect
to the more natural row-major indexing.
For 1-1 sorting on an $n \times n$ mesh, we give an algorithm that
runs in $2 \cdot n + o(n)$ steps, with maximal queue size five. It
is considerably simpler than earlier algorithms. Another algorithm
performs $k$-$k$ sorting in $k \cdot n / 2 + o(k \cdot n)$ steps.
Furthermore, we present {\em uni-axial} algorithms for row-major
sorting. Uni-axial algorithms have clear practical and theoretical
advantages over bi-axial algorithms. We show that 1-1 sorting can
be performed in $2\frac{1}{2} \cdot n + o(n)$ steps.
Alternatively, this problem is solved in $4\frac{1}{3} \cdot n$
steps for {\em all $n$}. For the practically important values of
$n$, this algorithm is much faster than any algorithm with good
{\em asymptotical} performance.
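The distinction the abstract turns on can be sketched in a few lines of Python (illustrative only, not from the report): snake-like indexing reverses every other row of the mesh, whereas the more natural row-major indexing does not.

```python
def row_major(i, j, n):
    # Natural indexing: row by row, each row left to right.
    return i * n + j

def snake(i, j, n):
    # Snake-like (boustrophedon) indexing: even rows left-to-right,
    # odd rows right-to-left, so consecutive indices stay adjacent.
    return i * n + (j if i % 2 == 0 else n - 1 - j)
```

Sorting into snake order keeps successive packets on neighbouring processors, which is why near-optimal mesh sorting algorithms use it; "desnakification" is about achieving comparable step counts for the row-major target order.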
Export
BibTeX
@techreport{Sibeyn94,
TITLE = {Desnakification of mesh sorting algorithms},
AUTHOR = {Sibeyn, Jop Frederic},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/94-102},
NUMBER = {MPI-I-94-102},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1994},
DATE = {1994},
ABSTRACT = {In all recent near-optimal sorting algorithms for meshes, the packets are sorted with respect to some snake-like indexing. Such algorithms are useless in many practical applications. In this paper we present deterministic algorithms for sorting with respect to the more natural row-major indexing. For 1-1 sorting on an $n \times n$ mesh, we give an algorithm that runs in $2 \cdot n + o(n)$ steps, with maximal queue size five. It is considerably simpler than earlier algorithms. Another algorithm performs $k$-$k$ sorting in $k \cdot n / 2 + o(k \cdot n)$ steps. Furthermore, we present {\em uni-axial} algorithms for row-major sorting. Uni-axial algorithms have clear practical and theoretical advantages over bi-axial algorithms. We show that 1-1 sorting can be performed in $2\frac{1}{2} \cdot n + o(n)$ steps. Alternatively, this problem is solved in $4\frac{1}{3} \cdot n$ steps for {\em all $n$}. For the practically important values of $n$, this algorithm is much faster than any algorithm with good {\em asymptotical} performance.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Sibeyn, Jop Frederic
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Desnakification of mesh sorting algorithms :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B4FC-3
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/94-102
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1994
%P 21 p.
%X In all recent near-optimal sorting algorithms for meshes, the
packets are sorted with respect to some snake-like indexing. Such
algorithms are useless in many practical applications. In this
paper we present deterministic algorithms for sorting with respect
to the more natural row-major indexing.
For 1-1 sorting on an $n \times n$ mesh, we give an algorithm that
runs in $2 \cdot n + o(n)$ steps, with maximal queue size five. It
is considerably simpler than earlier algorithms. Another algorithm
performs $k$-$k$ sorting in $k \cdot n / 2 + o(k \cdot n)$ steps.
Furthermore, we present {\em uni-axial} algorithms for row-major
sorting. Uni-axial algorithms have clear practical and theoretical
advantages over bi-axial algorithms. We show that 1-1 sorting can
be performed in $2\frac{1}{2} \cdot n + o(n)$ steps.
Alternatively, this problem is solved in $4\frac{1}{3} \cdot n$
steps for {\em all $n$}. For the practically important values of
$n$, this algorithm is much faster than any algorithm with good
{\em asymptotical} performance.
%B Research Report / Max-Planck-Institut für Informatik
Lecture notes selected topics in data structures
M. Smid
Technical Report, 1994
M. Smid
Technical Report, 1994
Abstract
This text contains the lecture notes for the course
{\em Ausgew\"{a}hlte Kapitel aus Datenstrukturen},
which was given by the author at the Universit\"{a}t des
Saarlandes during the winter semester 1993/94.
The course was intended for 3rd/4th year students having some
basic knowledge in the field of algorithm design.
The following topics are covered: Skip Lists, the Union-Find
Problem, Range Trees and the Post-Office Problem, and
Maintaining Order in a List.
Export
BibTeX
@techreport{Smid94,
TITLE = {Lecture notes selected topics in data structures},
AUTHOR = {Smid, Michiel},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/94-155},
NUMBER = {MPI-I-94-155},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1994},
DATE = {1994},
ABSTRACT = {This text contains the lecture notes for the course {\em Ausgew\"{a}hlte Kapitel aus Datenstrukturen}, which was given by the author at the Universit\"{a}t des Saarlandes during the winter semester 1993/94. The course was intended for 3rd/4th year students having some basic knowledge in the field of algorithm design. The following topics are covered: Skip Lists, the Union-Find Problem, Range Trees and the Post-Office Problem, and Maintaining Order in a List.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Smid, Michiel
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Lecture notes selected topics in data structures :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B543-4
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/94-155
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1994
%P 76 p.
%X This text contains the lecture notes for the course
{\em Ausgew\"{a}hlte Kapitel aus Datenstrukturen},
which was given by the author at the Universit\"{a}t des
Saarlandes during the winter semester 1993/94.
The course was intended for 3rd/4th year students having some
basic knowledge in the field of algorithm design.
The following topics are covered: Skip Lists, the Union-Find
Problem, Range Trees and the Post-Office Problem, and
Maintaining Order in a List.
%B Research Report / Max-Planck-Institut für Informatik
On the Width and Roundness of a Set of Points in the Plane
M. Smid and R. Janardan
Technical Report, 1994
M. Smid and R. Janardan
Technical Report, 1994
Abstract
Let $S$ be a set of points in the plane. The width (resp.\
roundness) of $S$ is defined as the minimum width of any
slab (resp.\ annulus) that contains all points of $S$.
We give a new characterization of the width of a point set.
Also, we give a {\em rigorous} proof of the fact that either the
roundness of $S$ is equal to the width of $S$, or the center
of the minimum-width annulus is a vertex of the closest-point
Voronoi diagram of $S$, the furthest-point Voronoi diagram
of $S$, or an intersection point of these two diagrams.
This proof corrects the characterization of roundness used
extensively in the literature.
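As an illustrative aside (not from the report): because the minimum-width slab is flush with an edge of the convex hull, the width of a point set can be computed by brute force over the directions spanned by all point pairs. A minimal Python sketch, with a hypothetical function name:

```python
import math
from itertools import combinations

def width(points):
    """Minimum width over all slabs containing `points` (brute force).
    The optimal slab has a convex-hull edge on one side, so trying the
    direction of every pair of points is sufficient (O(n^3) overall)."""
    best = math.inf
    for (ax, ay), (bx, by) in combinations(points, 2):
        dx, dy = bx - ax, by - ay
        norm = math.hypot(dx, dy)
        if norm == 0:
            continue                        # coincident points, no direction
        nx, ny = -dy / norm, dx / norm      # unit normal of the slab
        proj = [nx * x + ny * y for x, y in points]
        best = min(best, max(proj) - min(proj))
    return best
```

Roundness (the minimum-width annulus) has no such simple brute force; locating its centre among Voronoi vertices and diagram intersections is exactly what the report's characterization addresses.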
Export
BibTeX
@techreport{MPI-I-94-111,
TITLE = {On the Width and Roundness of a Set of Points in the Plane},
AUTHOR = {Smid, Michiel and Janardan, Ravi},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/94-111},
NUMBER = {MPI-I-94-111},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1994},
DATE = {1994},
ABSTRACT = {Let $S$ be a set of points in the plane. The width (resp.\ roundness) of $S$ is defined as the minimum width of any slab (resp.\ annulus) that contains all points of $S$. We give a new characterization of the width of a point set. Also, we give a {\em rigorous} proof of the fact that either the roundness of $S$ is equal to the width of $S$, or the center of the minimum-width annulus is a vertex of the closest-point Voronoi diagram of $S$, the furthest-point Voronoi diagram of $S$, or an intersection point of these two diagrams. This proof corrects the characterization of roundness used extensively in the literature.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Smid, Michiel
%A Janardan, Ravi
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T On the Width and Roundness of a Set of Points in the Plane :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B78A-7
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/94-111
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1994
%P 14 p.
%X Let $S$ be a set of points in the plane. The width (resp.\
roundness) of $S$ is defined as the minimum width of any
slab (resp.\ annulus) that contains all points of $S$.
We give a new characterization of the width of a point set.
Also, we give a {\em rigorous} proof of the fact that either the
roundness of $S$ is equal to the width of $S$, or the center
of the minimum-width annulus is a vertex of the closest-point
Voronoi diagram of $S$, the furthest-point Voronoi diagram
of $S$, or an intersection point of these two diagrams.
This proof corrects the characterization of roundness used
extensively in the literature.
%B Research Report
1993
Basic paramodulation
L. Bachmair, H. Ganzinger, C. Lynch and W. Snyder
Technical Report, 1993
L. Bachmair, H. Ganzinger, C. Lynch and W. Snyder
Technical Report, 1993
Abstract
We introduce a class of restrictions for the ordered paramodulation and superposition calculi (inspired by the {\em basic\/} strategy for narrowing), in which paramodulation inferences are forbidden at terms introduced by substitutions from previous inference steps. In addition we introduce restrictions based on term selection rules and redex orderings, which are general criteria for delimiting the terms which are available for inferences. These refinements are compatible with standard ordering restrictions and are complete without paramodulation into variables or using functional reflexivity axioms. We prove refutational completeness in the context of deletion rules, such as simplification by rewriting (demodulation) and subsumption, and of techniques for eliminating redundant inferences.
Export
BibTeX
@techreport{Bachmair-et-el-93-mpii236,
TITLE = {Basic paramodulation},
AUTHOR = {Bachmair, Leo and Ganzinger, Harald and Lynch, Christopher and Snyder, Wayne},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/93-236},
NUMBER = {MPI-I-93-236},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1993},
DATE = {1993},
ABSTRACT = {We introduce a class of restrictions for the ordered paramodulation and superposition calculi (inspired by the {\em basic\/} strategy for narrowing), in which paramodulation inferences are forbidden at terms introduced by substitutions from previous inference steps. In addition we introduce restrictions based on term selection rules and redex orderings, which are general criteria for delimiting the terms which are available for inferences. These refinements are compatible with standard ordering restrictions and are complete without paramodulation into variables or using functional reflexivity axioms. We prove refutational completeness in the context of deletion rules, such as simplification by rewriting (demodulation) and subsumption, and of techniques for eliminating redundant inferences.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Bachmair, Leo
%A Ganzinger, Harald
%A Lynch, Christopher
%A Snyder, Wayne
%+ Programming Logics, MPI for Informatics, Max Planck Society
Programming Logics, MPI for Informatics, Max Planck Society
Automation of Logic, MPI for Informatics, Max Planck Society
External Organizations
%T Basic paramodulation :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-AAD5-4
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/93-236
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1993
%P 36 p.
%X We introduce a class of restrictions for the ordered paramodulation and superposition calculi (inspired by the {\em basic\/} strategy for narrowing), in which paramodulation inferences are forbidden at terms introduced by substitutions from previous inference steps. In addition we introduce restrictions based on term selection rules and redex orderings, which are general criteria for delimiting the terms which are available for inferences. These refinements are compatible with standard ordering restrictions and are complete without paramodulation into variables or using functional reflexivity axioms. We prove refutational completeness in the context of deletion rules, such as simplification by rewriting (demodulation) and subsumption, and of techniques for eliminating redundant inferences.
%B Research Report / Max-Planck-Institut für Informatik
Fast parallel space allocation, estimation and integer sorting (revised)
H. Bast and T. Hagerup
Technical Report, 1993
H. Bast and T. Hagerup
Technical Report, 1993
Abstract
The following problems are shown to be solvable
in $O(\log^{\ast }\! n)$ time
with optimal speedup with high probability on a
randomized CRCW PRAM using
$O(n)$ space:
\begin{itemize}
\item
Space allocation: Given $n$ nonnegative integers
$x_1,\ldots,x_n$, allocate $n$ nonoverlapping
blocks of consecutive
memory cells of sizes $x_1,\ldots,x_n$ from a base
segment of $O(\sum_{j=1}^n x_j)$ consecutive
memory cells;
\item
Estimation: Given $n$ integers $x_1,\ldots,x_n$
in the range $ 1.. n $, compute ``good'' estimates
of the number of occurrences of each value
in the range $1.. n$;
\item
Semisorting: Given $n$ integers $x_1,\ldots,x_n$
in the range $1.. n$,
store the integers $1,\ldots,n$ in an array of $O(n)$
cells such that for all $i\in\{1,\ldots,n\}$,
all elements of $\{j:1\le j\le n$ and $x_j=i\}$
occur together, separated only by empty cells;
\item
Integer chain-sorting: Given $n$ integers $x_1,\ldots,x_n$
in the range $1.. n$, construct a linked list
containing the integers $1,\ldots,n$ such that for all
$i,j\in\{1,\ldots,n\}$, if $i$ precedes $j$ in the
list, then $x_i\le x_j$.
\end{itemize}
\noindent
Moreover, given slightly superlinear processor
and space bounds, these problems or variations
of them can be solved in
constant time with high probability.
As a corollary of the integer chain-sorting result,
it follows that $n$ integers in the range $1.. n$
can be sorted in $O({{\log n}/{\log\log n}})$ time
with optimal speedup with high probability.
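A sequential Python reference for the semisorting problem defined above (illustrative only; the report's contribution is the $O(\log^{\ast} n)$ randomized parallel bound). The counting pass assigns each key a contiguous block, here without empty separator cells; the function name is hypothetical.

```python
def semisort(xs):
    """Semisorting reference: xs holds n integers in the range 1..n.
    Return an array of n cells in which, for every key i, the (1-based)
    positions j with xs[j] == i appear together."""
    n = len(xs)
    count = [0] * (n + 1)
    for x in xs:
        count[x] += 1
    # Prefix sums give the start of each key's block.
    start, offs = 0, [0] * (n + 1)
    for v in range(1, n + 1):
        offs[v] = start
        start += count[v]
    out = [None] * n
    nxt = offs[:]
    for j, x in enumerate(xs):
        out[nxt[x]] = j + 1      # store the element index, as in the abstract
        nxt[x] += 1
    return out
```

Threading each block into a linked list ordered by key yields the integer chain-sorting output described in the last bullet.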
Export
BibTeX
@techreport{MPI-I-93-123,
TITLE = {Fast parallel space allocation, estimation and integer sorting (revised)},
AUTHOR = {Bast, Holger and Hagerup, Torben},
LANGUAGE = {eng},
NUMBER = {MPI-I-93-123},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1993},
DATE = {1993},
ABSTRACT = {The following problems are shown to be solvable in $O(\log^{\ast }\! n)$ time with optimal speedup with high probability on a randomized CRCW PRAM using $O(n)$ space: \begin{itemize} \item Space allocation: Given $n$ nonnegative integers $x_1,\ldots,x_n$, allocate $n$ nonoverlapping blocks of consecutive memory cells of sizes $x_1,\ldots,x_n$ from a base segment of $O(\sum_{j=1}^n x_j)$ consecutive memory cells; \item Estimation: Given $n$ integers $x_1,\ldots,x_n$ in the range $ 1.. n $, compute ``good'' estimates of the number of occurrences of each value in the range $1.. n$; \item Semisorting: Given $n$ integers $x_1,\ldots,x_n$ in the range $1.. n$, store the integers $1,\ldots,n$ in an array of $O(n)$ cells such that for all $i\in\{1,\ldots,n\}$, all elements of $\{j:1\le j\le n$ and $x_j=i\}$ occur together, separated only by empty cells; \item Integer chain-sorting: Given $n$ integers $x_1,\ldots,x_n$ in the range $1.. n$, construct a linked list containing the integers $1,\ldots,n$ such that for all $i,j\in\{1,\ldots,n\}$, if $i$ precedes $j$ in the list, then $x_i\le x_j$. \end{itemize} \noindent Moreover, given slightly superlinear processor and space bounds, these problems or variations of them can be solved in constant time with high probability. As a corollary of the integer chain-sorting result, it follows that $n$ integers in the range $1.. n$ can be sorted in $O({{\log n}/{\log\log n}})$ time with optimal speedup with high probability.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Bast, Holger
%A Hagerup, Torben
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Fast parallel space allocation, estimation and integer sorting (revised) :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B74E-F
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1993
%P 85 p.
%X The following problems are shown to be solvable
in $O(\log^{\ast }\! n)$ time
with optimal speedup with high probability on a
randomized CRCW PRAM using
$O(n)$ space:
\begin{itemize}
\item
Space allocation: Given $n$ nonnegative integers
$x_1,\ldots,x_n$, allocate $n$ nonoverlapping
blocks of consecutive
memory cells of sizes $x_1,\ldots,x_n$ from a base
segment of $O(\sum_{j=1}^n x_j)$ consecutive
memory cells;
\item
Estimation: Given $n$ integers $x_1,\ldots,x_n$
in the range $ 1.. n $, compute ``good'' estimates
of the number of occurrences of each value
in the range $1.. n$;
\item
Semisorting: Given $n$ integers $x_1,\ldots,x_n$
in the range $1.. n$,
store the integers $1,\ldots,n$ in an array of $O(n)$
cells such that for all $i\in\{1,\ldots,n\}$,
all elements of $\{j:1\le j\le n$ and $x_j=i\}$
occur together, separated only by empty cells;
\item
Integer chain-sorting: Given $n$ integers $x_1,\ldots,x_n$
in the range $1.. n$, construct a linked list
containing the integers $1,\ldots,n$ such that for all
$i,j\in\{1,\ldots,n\}$, if $i$ precedes $j$ in the
list, then $x_i\le x_j$.
\end{itemize}
\noindent
Moreover, given slightly superlinear processor
and space bounds, these problems or variations
of them can be solved in
constant time with high probability.
As a corollary of the integer chain-sorting result,
it follows that $n$ integers in the range $1.. n$
can be sorted in $O({{\log n}/{\log\log n}})$ time
with optimal speedup with high probability.
%B Research Report / Max-Planck-Institut für Informatik
A lower bound for area-universal graphs
G. Bilardi, S. Chaudhuri, D. P. Dubhashi and K. Mehlhorn
Technical Report, 1993
G. Bilardi, S. Chaudhuri, D. P. Dubhashi and K. Mehlhorn
Technical Report, 1993
Abstract
We establish a lower bound on the efficiency of area--universal circuits. The area $A_u$ of every graph $H$ that can host
any graph $G$ of area (at most) $A$ with dilation $d$,
and congestion $c \leq \sqrt{A}/\log\log A$ satisfies the tradeoff
$$
A_u = \Omega ( A \log A / (c^2 \log (2d)) ).
$$
In particular, if $A_u = O(A)$ then $\max(c,d) = \Omega(\sqrt{\log A} / \log\log A)$.
Export
BibTeX
@techreport{MPI-I-93-144,
TITLE = {A lower bound for area-universal graphs},
AUTHOR = {Bilardi, G. and Chaudhuri, Shiva and Dubhashi, Devdatt P. and Mehlhorn, Kurt},
LANGUAGE = {eng},
NUMBER = {MPI-I-93-144},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1993},
DATE = {1993},
ABSTRACT = {We establish a lower bound on the efficiency of area--universal circuits. The area $A_u$ of every graph $H$ that can host any graph $G$ of area (at most) $A$ with dilation $d$, and congestion $c \leq \sqrt{A}/\log\log A$ satisfies the tradeoff $$ A_u = \Omega ( A \log A / (c^2 \log (2d)) ). $$ In particular, if $A_u = O(A)$ then $\max(c,d) = \Omega(\sqrt{\log A} / \log\log A)$.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Bilardi, G.
%A Chaudhuri, Shiva
%A Dubhashi, Devdatt P.
%A Mehlhorn, Kurt
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T A lower bound for area-universal graphs :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B75A-4
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1993
%P 7 p.
%X We establish a lower bound on the efficiency of area--universal circuits. The area $A_u$ of every graph $H$ that can host
any graph $G$ of area (at most) $A$ with dilation $d$,
and congestion $c \leq \sqrt{A}/\log\log A$ satisfies the tradeoff
$$
A_u = \Omega ( A \log A / (c^2 \log (2d)) ).
$$
In particular, if $A_u = O(A)$ then $\max(c,d) = \Omega(\sqrt{\log A} / \log\log A)$.
%B Research Report
The circuit subfunction relations are $\Sigma^p_2$-complete
B. Borchert and D. Ranjan
Technical Report, 1993
B. Borchert and D. Ranjan
Technical Report, 1993
Abstract
We show that given two Boolean
circuits $f$ and $g$ the following three problems are $\Sigma^p_2$-complete:
(1) Is $f$ a c-subfunction of $g$, i.e.\ can one set some of the variables
of $g$ to 0 or 1 so that the remaining circuit computes the same function
as $f$?
(2) Is $f$ a v-subfunction of $g$, i.e. can one change the names of the
variables of $g$ so that the resulting circuit computes the same function
as $f$?
(3) Is $f$ a cv-subfunction of $g$, i.e.\ can one
set some variables of $g$ to 0 or 1 and simultaneously
change some names of the other variables of $g$ so that the new circuit
computes the same function as $f$?
Additionally we give some bounds for the complexity of the following
problem: Is $f$ isomorphic to $g$, i.e. can one change the names of the
variables bijectively so that the circuit resulting from $g$ computes the
same function as $f$?
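The c-subfunction relation in (1) can be checked by brute force for tiny circuits, which also makes the exists-forall quantifier structure behind the $\Sigma^p_2$ bound visible: guess a partial assignment, then verify agreement on all inputs. A Python sketch (illustrative only; the function name and the representation of circuits as Python callables are assumptions):

```python
from itertools import product

def is_c_subfunction(f, nf, g, ng):
    """Is f (on nf variables) a c-subfunction of g (on ng variables)?
    I.e. can some variables of g be fixed to 0/1 so that the remaining
    circuit computes f?  Exponential brute force, for tiny nf, ng only."""
    if nf > ng:
        return False
    # Existential guess: each variable of g is left free (None) or fixed.
    for fixed in product((None, 0, 1), repeat=ng):
        free = [i for i, v in enumerate(fixed) if v is None]
        if len(free) != nf:
            continue
        # Universal check: agreement on every assignment of the free vars.
        ok = True
        for xs in product((0, 1), repeat=nf):
            args = list(fixed)
            for i, x in zip(free, xs):
                args[i] = x
            if g(*args) != f(*xs):
                ok = False
                break
        if ok:
            return True
    return False
```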
Export
BibTeX
@techreport{MPI-I-93-121,
TITLE = {The circuit subfunction relations are $\Sigma^p_2$-complete},
AUTHOR = {Borchert, Bernd and Ranjan, Desh},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/93-121},
NUMBER = {MPI-I-93-121},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1993},
DATE = {1993},
ABSTRACT = {We show that given two Boolean circuits $f$ and $g$ the following three problems are $\Sigma^p_2$-complete: (1) Is $f$ a c-subfunction of $g$, i.e.\ can one set some of the variables of $g$ to 0 or 1 so that the remaining circuit computes the same function as $f$? (2) Is $f$ a v-subfunction of $g$, i.e. can one change the names of the variables of $g$ so that the resulting circuit computes the same function as $f$? (3) Is $f$ a cv-subfunction of $g$, i.e.\ can one set some variables of $g$ to 0 or 1 and simultaneously change some names of the other variables of $g$ so that the new circuit computes the same function as $f$? Additionally we give some bounds for the complexity of the following problem: Is $f$ isomorphic to $g$, i.e. can one change the names of the variables bijectively so that the circuit resulting from $g$ computes the same function as $f$?},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Borchert, Bernd
%A Ranjan, Desh
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T The circuit subfunction relations are $\Sigma^p_2$-complete :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B74C-4
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/93-121
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1993
%P 14 p.
%X We show that given two Boolean
circuits $f$ and $g$ the following three problems are $\Sigma^p_2$-complete:
(1) Is $f$ a c-subfunction of $g$, i.e.\ can one set some of the variables
of $g$ to 0 or 1 so that the remaining circuit computes the same function
as $f$?
(2) Is $f$ a v-subfunction of $g$, i.e. can one change the names of the
variables of $g$ so that the resulting circuit computes the same function
as $f$?
(3) Is $f$ a cv-subfunction of $g$, i.e.\ can one
set some variables of $g$ to 0 or 1 and simultaneously
change some names of the other variables of $g$ so that the new circuit
computes the same function as $f$?
Additionally we give some bounds for the complexity of the following
problem: Is $f$ isomorphic to $g$, i.e. can one change the names of the
variables bijectively so that the circuit resulting from $g$ computes the
same function as $f$?
%B Research Report / Max-Planck-Institut für Informatik
A lower bound for linear approximate compaction
S. Chaudhuri
Technical Report, 1993a
S. Chaudhuri
Technical Report, 1993a
Abstract
The {\em $\lambda$-approximate compaction} problem is: given an input
array of $n$ values, each
either 0 or 1, place each value in an output array so that all the 1's
are in the first $(1+\lambda)k$ array locations, where $k$ is the number of 1's
in the input. $\lambda$ is an accuracy parameter. This problem is
of fundamental importance in parallel
computation because of its applications to processor
allocation and approximate counting.
When $\lambda$ is a constant, the problem is called
{\em Linear Approximate Compaction} (LAC). On the CRCW PRAM model,
there is an algorithm that solves approximate compaction in $\order{(\log\log n)^3}$
time for $\lambda = \frac{1}{\log\log n}$, using $\frac{n}{(\log\log
n)^3}$ processors. Our main result shows that this is close to the
best possible. Specifically, we prove that LAC requires
$\Omega(\log\log n)$ time using $\order{n}$ processors.
We also give a tradeoff between $\lambda$
and the processing time. For $\epsilon < 1$, and $\lambda =
n^{\epsilon}$, the time required is $\Omega(\log \frac{1}{\epsilon})$.
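A sequential Python reference for the output specification above (illustrative only; sequentially the problem is a trivial scan, and the report's lower bound concerns the parallel CRCW PRAM setting). The function name is an assumption.

```python
import math

def approximate_compaction(bits, lam):
    """lambda-approximate compaction reference: place the (indices of the)
    1's of `bits` into the first (1+lam)*k cells of an output array of
    len(bits) cells, where k is the number of 1's; other cells stay None."""
    k = sum(bits)
    limit = math.ceil((1 + lam) * k)
    out = [None] * len(bits)
    pos = 0
    for i, b in enumerate(bits):
        if b:
            out[pos] = i
            pos += 1
    assert pos <= limit   # all 1's lie within the first (1+lam)*k cells
    return out
```

The sequential scan achieves exact compaction (lam = 0); the slack parameter lam only matters when the placement must be computed in sublogarithmic parallel time.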
Export
BibTeX
@techreport{MPI-I-93-146,
TITLE = {A lower bound for linear approximate compaction},
AUTHOR = {Chaudhuri, Shiva},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/93-146},
NUMBER = {MPI-I-93-146},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1993},
DATE = {1993},
ABSTRACT = {The {\em $\lambda$-approximate compaction} problem is: given an input array of $n$ values, each either 0 or 1, place each value in an output array so that all the 1's are in the first $(1+\lambda)k$ array locations, where $k$ is the number of 1's in the input. $\lambda$ is an accuracy parameter. This problem is of fundamental importance in parallel computation because of its applications to processor allocation and approximate counting. When $\lambda$ is a constant, the problem is called {\em Linear Approximate Compaction} (LAC). On the CRCW PRAM model, there is an algorithm that solves approximate compaction in $\order{(\log\log n)^3}$ time for $\lambda = \frac{1}{\log\log n}$, using $\frac{n}{(\log\log n)^3}$ processors. Our main result shows that this is close to the best possible. Specifically, we prove that LAC requires $\Omega(\log\log n)$ time using $\order{n}$ processors. We also give a tradeoff between $\lambda$ and the processing time. For $\epsilon < 1$, and $\lambda = n^{\epsilon}$, the time required is $\Omega(\log \frac{1}{\epsilon})$.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Chaudhuri, Shiva
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T A lower bound for linear approximate compaction :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B761-1
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/93-146
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1993
%P 12 p.
%X The {\em $\lambda$-approximate compaction} problem is: given an input
array of $n$ values, each
either 0 or 1, place each value in an output array so that all the 1's
are in the first $(1+\lambda)k$ array locations, where $k$ is the number of 1's
in the input. $\lambda$ is an accuracy parameter. This problem is
of fundamental importance in parallel
computation because of its applications to processor
allocation and approximate counting.
When $\lambda$ is a constant, the problem is called
{\em Linear Approximate Compaction} (LAC). On the CRCW PRAM model,
there is an algorithm that solves approximate compaction in $\order{(\log\log n)^3}$
time for $\lambda = \frac{1}{\log\log n}$, using $\frac{n}{(\log\log
n)^3}$ processors. Our main result shows that this is close to the
best possible. Specifically, we prove that LAC requires
$\Omega(\log\log n)$ time using $\order{n}$ processors.
We also give a tradeoff between $\lambda$
and the processing time. For $\epsilon < 1$, and $\lambda =
n^{\epsilon}$, the time required is $\Omega(\log \frac{1}{\epsilon})$.
%B Research Report / Max-Planck-Institut für Informatik
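As a concrete illustration of the output condition stated in the abstract above, the following sketch (hypothetical helper name, Python chosen for brevity) checks whether a 0/1 output array is a valid $\lambda$-approximate compaction of a 0/1 input array — all $k$ ones must land within the first $(1+\lambda)k$ locations:

```python
def is_lambda_approx_compaction(inp, out, lam):
    """Check the condition from the abstract: all k ones of the input
    must appear within the first (1 + lam) * k locations of the output."""
    k = sum(inp)
    bound = int((1 + lam) * k)
    return sum(out) == k and sum(out[:bound]) == k

# Exact (0-approximate) compaction packs every 1 to the front.
inp = [0, 1, 0, 0, 1, 1, 0, 1]
out = sorted(inp, reverse=True)
assert is_lambda_approx_compaction(inp, out, lam=0.0)
```

The report's point is that achieving this condition fast in parallel is hard: the sequential check above is trivial, but a CRCW PRAM with $\order{n}$ processors still needs $\Omega(\log\log n)$ time for constant $\lambda$.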
Sensitive functions and approximate problems
S. Chaudhuri
Technical Report, 1993b
S. Chaudhuri
Technical Report, 1993b
Abstract
We investigate properties of functions that are good measures of the
CRCW PRAM complexity of computing them. While the {\em block
sensitivity} is known to be a good measure of the CREW PRAM
complexity, no such measure is known for CRCW PRAMs. We show that the
complexity of computing a function is related to its {\em everywhere
sensitivity}, introduced by Vishkin and Wigderson. Specifically we
show that the time required to compute a function $f:D^n \rightarrow R$ of everywhere
sensitivity $\es(f)$ with $P \geq n$ processors and unbounded memory is
$\Omega(\log [\log \es(f)/(\log 4P|D| - \log \es(f))])$.
This
improves previous results of Azar, and Vishkin and Wigderson. We use
this lower bound to derive new lower bounds for some {\em approximate
problems}. These problems can often be solved faster than their exact
counterparts and for many applications, it is sufficient to solve the
approximate problem. We show
that {\em approximate selection} requires time
$\Omega(\log [\log n/\log k])$ with $kn$ processors and {\em
approximate counting} with accuracy $\lambda \geq 2$ requires time
$\Omega(\log [\log n/(\log k + \log \lambda)])$ with $kn$ processors.
In particular, for constant accuracy, no lower bounds were known for
these problems.
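The bound above is stated in terms of everywhere sensitivity, a measure due to Vishkin and Wigderson. As a hedged illustration of the simpler classical notion it is related to, this sketch brute-forces the (point) sensitivity of a Boolean function — the definition coded here is the standard one, not taken from the report:

```python
from itertools import product

def sensitivity(f, n):
    """Max over inputs x of the number of single-bit flips that change f(x).
    (Plain sensitivity; the everywhere sensitivity es(f) of the abstract
    is a related, refined quantity.)"""
    best = 0
    for x in product((0, 1), repeat=n):
        flips = sum(
            f(x) != f(x[:i] + (1 - x[i],) + x[i + 1:])
            for i in range(n)
        )
        best = max(best, flips)
    return best

OR = lambda x: int(any(x))
assert sensitivity(OR, 4) == 4  # the all-zero input is sensitive to every bit
```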
Export
BibTeX
@techreport{MPI-I-93-145,
TITLE = {Sensitive functions and approximate problems},
AUTHOR = {Chaudhuri, Shiva},
LANGUAGE = {eng},
NUMBER = {MPI-I-93-145},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1993},
DATE = {1993},
ABSTRACT = {We investigate properties of functions that are good measures of the CRCW PRAM complexity of computing them. While the {\em block sensitivity} is known to be a good measure of the CREW PRAM complexity, no such measure is known for CRCW PRAMs. We show that the complexity of computing a function is related to its {\em everywhere sensitivity}, introduced by Vishkin and Wigderson. Specifically we show that the time required to compute a function $f:D^n \rightarrow R$ of everywhere sensitivity $\es(f)$ with $P \geq n$ processors and unbounded memory is $\Omega(\log [\log \es(f)/(\log 4P|D| - \log \es(f))])$. This improves previous results of Azar, and Vishkin and Wigderson. We use this lower bound to derive new lower bounds for some {\em approximate problems}. These problems can often be solved faster than their exact counterparts and for many applications, it is sufficient to solve the approximate problem. We show that {\em approximate selection} requires time $\Omega(\log [\log n/\log k])$ with $kn$ processors and {\em approximate counting} with accuracy $\lambda \geq 2$ requires time $\Omega(\log [\log n/(\log k + \log \lambda)])$ with $kn$ processors. In particular, for constant accuracy, no lower bounds were known for these problems.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Chaudhuri, Shiva
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Sensitive functions and approximate problems :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B75D-D
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1993
%P 8 p.
%X We investigate properties of functions that are good measures of the
CRCW PRAM complexity of computing them. While the {\em block
sensitivity} is known to be a good measure of the CREW PRAM
complexity, no such measure is known for CRCW PRAMs. We show that the
complexity of computing a function is related to its {\em everywhere
sensitivity}, introduced by Vishkin and Wigderson. Specifically we
show that the time required to compute a function $f:D^n \rightarrow R$ of everywhere
sensitivity $\es(f)$ with $P \geq n$ processors and unbounded memory is
$\Omega(\log [\log \es(f)/(\log 4P|D| - \log \es(f))])$.
This
improves previous results of Azar, and Vishkin and Wigderson. We use
this lower bound to derive new lower bounds for some {\em approximate
problems}. These problems can often be solved faster than their exact
counterparts and for many applications, it is sufficient to solve the
approximate problem. We show
that {\em approximate selection} requires time
$\Omega(\log [\log n/\log k])$ with $kn$ processors and {\em
approximate counting} with accuracy $\lambda \geq 2$ requires time
$\Omega(\log [\log n/(\log k + \log \lambda)])$ with $kn$ processors.
In particular, for constant accuracy, no lower bounds were known for
these problems.
%B Research Report / Max-Planck-Institut für Informatik
Approximate and exact deterministic parallel selection
S. Chaudhuri, T. Hagerup and R. Raman
Technical Report, 1993
S. Chaudhuri, T. Hagerup and R. Raman
Technical Report, 1993
Abstract
The selection problem of size $n$ is,
given a set of $n$ elements drawn from an ordered
universe and an integer $r$ with $1\le r\le n$, to
identify the $r$th smallest element in the set.
We study approximate and exact selection on
deterministic concurrent-read concurrent-write
parallel RAMs, where approximate selection with
relative accuracy $\lambda>0$ asks for any element
whose true rank differs from $r$ by at most $\lambda n$.
Our main results are:
(1) For all $t\ge(\log\log n)^4$, approximate
selection problems of size $n$ can be solved in
$O(t)$ time with optimal speedup with relative accuracy
$2^{-{t/{(\log\log n)^4}}}$;
no deterministic PRAM algorithm for approximate
selection with a running time below
$\Theta({{\log n}/{\log\log n}})$
was previously known.
(2) Exact selection problems of size $n$ can be solved
in $O({{\log n}/{\log\log n}})$ time with
$O({{n\log\log n}/{\log n}})$ processors.
This running time is the best possible
(using only a polynomial number of processors),
and the number of processors is optimal for the
given running time (optimal speedup);
the best previous algorithm achieves optimal speedup
with a running time of $O({{\log n\log^*\! n}/{\log\log n}})$.
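The acceptance criterion for approximate selection in the abstract above is easy to state in code. This sketch (hypothetical helper name; assumes distinct elements so that ranks are unambiguous) checks whether a candidate answer is valid for relative accuracy $\lambda$:

```python
def is_valid_approx_selection(elems, candidate, r, lam):
    """From the abstract: a valid answer is any element whose true rank
    differs from r by at most lam * n. Ranks are 1-based; elements are
    assumed distinct."""
    n = len(elems)
    rank = sorted(elems).index(candidate) + 1
    return abs(rank - r) <= lam * n

elems = [9, 3, 7, 1, 5, 8, 2, 6, 4, 10]
# exact 5th smallest is 5; with lam = 0.2 any rank in 3..7 is acceptable
assert is_valid_approx_selection(elems, 5, r=5, lam=0.0)
assert is_valid_approx_selection(elems, 7, r=5, lam=0.2)
assert not is_valid_approx_selection(elems, 10, r=5, lam=0.2)
```

Relaxing exactness this way is what lets the report's parallel algorithm run in $O(t)$ time with optimal speedup, with the accuracy degrading as $t$ shrinks.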
Export
BibTeX
@techreport{MPI-I-93-118,
TITLE = {Approximate and exact deterministic parallel selection},
AUTHOR = {Chaudhuri, Shiva and Hagerup, Torben and Raman, Rajeev},
LANGUAGE = {eng},
NUMBER = {MPI-I-93-118},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1993},
DATE = {1993},
ABSTRACT = {The selection problem of size $n$ is, given a set of $n$ elements drawn from an ordered universe and an integer $r$ with $1\le r\le n$, to identify the $r$th smallest element in the set. We study approximate and exact selection on deterministic concurrent-read concurrent-write parallel RAMs, where approximate selection with relative accuracy $\lambda>0$ asks for any element whose true rank differs from $r$ by at most $\lambda n$. Our main results are: (1) For all $t\ge(\log\log n)^4$, approximate selection problems of size $n$ can be solved in $O(t)$ time with optimal speedup with relative accuracy $2^{-{t/{(\log\log n)^4}}}$; no deterministic PRAM algorithm for approximate selection with a running time below $\Theta({{\log n}/{\log\log n}})$ was previously known. (2) Exact selection problems of size $n$ can be solved in $O({{\log n}/{\log\log n}})$ time with $O({{n\log\log n}/{\log n}})$ processors. This running time is the best possible (using only a polynomial number of processors), and the number of processors is optimal for the given running time (optimal speedup); the best previous algorithm achieves optimal speedup with a running time of $O({{\log n\log^*\! n}/{\log\log n}})$.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Chaudhuri, Shiva
%A Hagerup, Torben
%A Raman, Rajeev
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Approximate and exact deterministic parallel selection :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B748-C
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1993
%P 10 p.
%X The selection problem of size $n$ is,
given a set of $n$ elements drawn from an ordered
universe and an integer $r$ with $1\le r\le n$, to
identify the $r$th smallest element in the set.
We study approximate and exact selection on
deterministic concurrent-read concurrent-write
parallel RAMs, where approximate selection with
relative accuracy $\lambda>0$ asks for any element
whose true rank differs from $r$ by at most $\lambda n$.
Our main results are:
(1) For all $t\ge(\log\log n)^4$, approximate
selection problems of size $n$ can be solved in
$O(t)$ time with optimal speedup with relative accuracy
$2^{-{t/{(\log\log n)^4}}}$;
no deterministic PRAM algorithm for approximate
selection with a running time below
$\Theta({{\log n}/{\log\log n}})$
was previously known.
(2) Exact selection problems of size $n$ can be solved
in $O({{\log n}/{\log\log n}})$ time with
$O({{n\log\log n}/{\log n}})$ processors.
This running time is the best possible
(using only a polynomial number of processors),
and the number of processors is optimal for the
given running time (optimal speedup);
the best previous algorithm achieves optimal speedup
with a running time of $O({{\log n\log^*\! n}/{\log\log n}})$.
%B Research Report / Max-Planck-Institut für Informatik
The complexity of parallel prefix problems on small domains
S. Chaudhuri and J. Radhakrishnan
Technical Report, 1993
S. Chaudhuri and J. Radhakrishnan
Technical Report, 1993
Abstract
We show non-trivial lower bounds for several prefix problems in the
CRCW PRAM model. Our main result is an $\Omega(\alpha(n))$ lower bound
for the chaining problem, matching the previously known upper bound.
We give a reduction to show that the same lower bound applies to a
parenthesis matching problem, again matching the previously known
upper bound. We also give reductions to show that similar lower
bounds hold for the prefix maxima and the range maxima problems.
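For reference, one common formulation of the chaining problem from the abstract — each position holding a 1 must learn the index of the nearest 1 strictly to its left — has a trivial sequential solution; the report's $\Omega(\alpha(n))$ bound concerns the CRCW PRAM version. A minimal sketch:

```python
def chaining(bits):
    """Sequential reference for the chaining problem: for each position
    holding a 1, report the index of the nearest 1 strictly to its left
    (None if there is none, or if the position holds a 0)."""
    out, last = [], None
    for i, b in enumerate(bits):
        out.append(last if b else None)
        if b:
            last = i
    return out

assert chaining([1, 0, 1, 1, 0, 1]) == [None, None, 0, 2, None, 3]
```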
Export
BibTeX
@techreport{MPI-I-93-147,
TITLE = {The complexity of parallel prefix problems on small domains},
AUTHOR = {Chaudhuri, Shiva and Radhakrishnan, Jaikumar},
LANGUAGE = {eng},
NUMBER = {MPI-I-93-147},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1993},
DATE = {1993},
ABSTRACT = {We show non-trivial lower bounds for several prefix problems in the CRCW PRAM model. Our main result is an $\Omega(\alpha(n))$ lower bound for the chaining problem, matching the previously known upper bound. We give a reduction to show that the same lower bound applies to a parenthesis matching problem, again matching the previously known upper bound. We also give reductions to show that similar lower bounds hold for the prefix maxima and the range maxima problems.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Chaudhuri, Shiva
%A Radhakrishnan, Jaikumar
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T The complexity of parallel prefix problems on small domains :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B766-8
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1993
%P 17 p.
%X We show non-trivial lower bounds for several prefix problems in the
CRCW PRAM model. Our main result is an $\Omega(\alpha(n))$ lower bound
for the chaining problem, matching the previously known upper bound.
We give a reduction to show that the same lower bound applies to a
parenthesis matching problem, again matching the previously known
upper bound. We also give reductions to show that similar lower
bounds hold for the prefix maxima and the range maxima problems.
%B Research Report / Max-Planck-Institut für Informatik
A Complete and Efficient Algorithm for the Intersection of a General and a Convex Polyhedron
K. Dobrindt, K. Mehlhorn and M. Yvinec
Technical Report, 1993
K. Dobrindt, K. Mehlhorn and M. Yvinec
Technical Report, 1993
Abstract
A polyhedron is any set that can be obtained from the open half-spaces by a finite number of set complement and set intersection operations. We give an efficient and complete algorithm for intersecting two three--dimensional polyhedra, one of which is convex. The algorithm is efficient in the sense that its running time is bounded by the size of the inputs plus the size of the output times a logarithmic factor. The algorithm is complete in the sense that it can handle all inputs and requires no general position assumption. We also describe a novel data structure that can represent all three--dimensional polyhedra (the set of polyhedra representable by all previous data structures is not closed under the basic boolean operations).
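The convex input of the intersection algorithm admits the classic half-space representation. As a minimal sketch of that representation (closed half-spaces $a \cdot x \le b$ here for simplicity; the abstract's construction starts from open ones), membership testing is just checking every inequality:

```python
def in_convex_polyhedron(point, halfspaces):
    """A convex polyhedron given as an intersection of closed half-spaces
    a . x <= b; membership is one inequality check per half-space."""
    return all(
        sum(a_i * x_i for a_i, x_i in zip(a, point)) <= b
        for a, b in halfspaces
    )

# unit cube in R^3 as six half-spaces
cube = [((1, 0, 0), 1), ((-1, 0, 0), 0),
        ((0, 1, 0), 1), ((0, -1, 0), 0),
        ((0, 0, 1), 1), ((0, 0, -1), 0)]
assert in_convex_polyhedron((0.5, 0.5, 0.5), cube)
assert not in_convex_polyhedron((1.5, 0.5, 0.5), cube)
```

A general polyhedron built with complements is not representable this way, which is why the report needs its novel data structure.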
Export
BibTeX
@techreport{MPI-I-93-140,
TITLE = {A Complete and Efficient Algorithm for the Intersection of a General and a Convex Polyhedron},
AUTHOR = {Dobrindt, K. and Mehlhorn, Kurt and Yvinec, M.},
LANGUAGE = {eng},
NUMBER = {MPI-I-93-140},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1993},
DATE = {1993},
ABSTRACT = {A polyhedron is any set that can be obtained from the open half-spaces by a finite number of set complement and set intersection operations. We give an efficient and complete algorithm for intersecting two three--dimensional polyhedra, one of which is convex. The algorithm is efficient in the sense that its running time is bounded by the size of the inputs plus the size of the output times a logarithmic factor. The algorithm is complete in the sense that it can handle all inputs and requires no general position assumption. We also describe a novel data structure that can represent all three--dimensional polyhedra (the set of polyhedra representable by all previous data structures is not closed under the basic boolean operations).},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Dobrindt, K.
%A Mehlhorn, Kurt
%A Yvinec, M.
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T A Complete and Efficient Algorithm for the Intersection of a General and a Convex Polyhedron :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B755-E
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1993
%P 14 p.
%X A polyhedron is any set that can be obtained from the open half-spaces by a finite number of set complement and set intersection operations. We give an efficient and complete algorithm for intersecting two three--dimensional polyhedra, one of which is convex. The algorithm is efficient in the sense that its running time is bounded by the size of the inputs plus the size of the output times a logarithmic factor. The algorithm is complete in the sense that it can handle all inputs and requires no general position assumption. We also describe a novel data structure that can represent all three--dimensional polyhedra (the set of polyhedra representable by all previous data structures is not closed under the basic boolean operations).
%B Research Report / Max-Planck-Institut für Informatik
Searching, sorting and randomised algorithms for central elements and ideal counting in posets
D. P. Dubhashi, K. Mehlhorn, D. Ranjan and C. Thiel
Technical Report, 1993
D. P. Dubhashi, K. Mehlhorn, D. Ranjan and C. Thiel
Technical Report, 1993
Abstract
By the Central Element Theorem of Linial and Saks, it follows that for
the problem of (generalised) searching in posets, the
information--theoretic lower bound of $\log N$ comparisons (where $N$ is the
number of order--ideals in the poset) is tight asymptotically. We
observe that this implies that the problem of (generalised) sorting
in posets has complexity $\Theta(n \cdot \log N)$ (where $n$ is the
number of elements in the poset). We present schemes for
(efficiently) transforming a randomised generation procedure for
central elements (which often exists for some classes of posets) into
randomised procedures for approximately counting ideals in the poset
and for testing if an arbitrary element is central.
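The quantity $N$ in the abstract's $\log N$ bound is the number of order ideals (downward-closed subsets) of the poset. For small posets it can be counted by brute force; this sketch (hypothetical helper name, exponential in the poset size, for illustration only) makes the definition concrete:

```python
from itertools import combinations

def count_ideals(elems, leq):
    """Brute-force count of order ideals (downward-closed subsets) --
    the quantity N in the abstract's log N searching bound.
    `leq(y, x)` is the poset's order relation."""
    count = 0
    for r in range(len(elems) + 1):
        for sub in combinations(elems, r):
            s = set(sub)
            # downward closed: x in s and y <= x implies y in s
            if all(y in s for x in s for y in elems if leq(y, x)):
                count += 1
    return count

# 2-element antichain: ideals are {}, {a}, {b}, {a,b} -> N = 4
assert count_ideals(['a', 'b'], lambda y, x: y == x) == 4
# 2-element chain a <= b: ideals are {}, {a}, {a,b} -> N = 3
assert count_ideals(['a', 'b'],
                    lambda y, x: y == x or (y, x) == ('a', 'b')) == 3
```

Exact counting like this is expensive in general, which is why the report targets randomised *approximate* counting via central-element generation.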
Export
BibTeX
@techreport{MPI-I-93-154,
TITLE = {Searching, sorting and randomised algorithms for central elements and ideal counting in posets},
AUTHOR = {Dubhashi, Devdatt P. and Mehlhorn, Kurt and Ranjan, Desh and Thiel, Christian},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/93-154},
NUMBER = {MPI-I-93-154},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1993},
DATE = {1993},
ABSTRACT = {By the Central Element Theorem of Linial and Saks, it follows that for the problem of (generalised) searching in posets, the information--theoretic lower bound of $\log N$ comparisons (where $N$ is the number of order--ideals in the poset) is tight asymptotically. We observe that this implies that the problem of (generalised) sorting in posets has complexity $\Theta(n \cdot \log N)$ (where $n$ is the number of elements in the poset). We present schemes for (efficiently) transforming a randomised generation procedure for central elements (which often exists for some classes of posets) into randomised procedures for approximately counting ideals in the poset and for testing if an arbitrary element is central.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Dubhashi, Devdatt P.
%A Mehlhorn, Kurt
%A Ranjan, Desh
%A Thiel, Christian
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Searching, sorting and randomised algorithms for central elements and ideal counting in posets :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B76C-B
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/93-154
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1993
%P 8 p.
%X By the Central Element Theorem of Linial and Saks, it follows that for
the problem of (generalised) searching in posets, the
information--theoretic lower bound of $\log N$ comparisons (where $N$ is the
number of order--ideals in the poset) is tight asymptotically. We
observe that this implies that the problem of (generalised) sorting
in posets has complexity $\Theta(n \cdot \log N)$ (where $n$ is the
number of elements in the poset). We present schemes for
(efficiently) transforming a randomised generation procedure for
central elements (which often exists for some classes of posets) into
randomised procedures for approximately counting ideals in the poset
and for testing if an arbitrary element is central.
%B Research Report / Max-Planck-Institut für Informatik
Quantifier elimination in p-adic fields
D. P. Dubhashi
Technical Report, 1993
D. P. Dubhashi
Technical Report, 1993
Abstract
We present a tutorial survey of quantifier-elimination and decision procedures
in p-adic fields. The p-adic fields are studied in the (so-called)
$P_n$--formalism of Angus Macintyre, for which motivation is provided
through a rich body of analogies with real-closed fields.
Quantifier-elimination and decision procedures are described
proceeding via a Cylindrical Algebraic Decomposition of affine p-adic
space. Effective complexity analyses are also provided.
Export
BibTeX
@techreport{MPI-I-93-155,
TITLE = {Quantifier elimination in p-adic fields},
AUTHOR = {Dubhashi, Devdatt P.},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/93-155},
NUMBER = {MPI-I-93-155},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1993},
DATE = {1993},
ABSTRACT = {We present a tutorial survey of quantifier-elimination and decision procedures in p-adic fields. The p-adic fields are studied in the (so-called) $P_n$--formalism of Angus Macintyre, for which motivation is provided through a rich body of analogies with real-closed fields. Quantifier-elimination and decision procedures are described proceeding via a Cylindrical Algebraic Decomposition of affine p-adic space. Effective complexity analyses are also provided.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Dubhashi, Devdatt P.
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Quantifier elimination in p-adic fields :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B76E-7
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/93-155
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1993
%P 21 p.
%X We present a tutorial survey of quantifier-elimination and decision procedures
in p-adic fields. The p-adic fields are studied in the (so-called)
$P_n$--formalism of Angus Macintyre, for which motivation is provided
through a rich body of analogies with real-closed fields.
Quantifier-elimination and decision procedures are described
proceeding via a Cylindrical Algebraic Decomposition of affine p-adic
space. Effective complexity analyses are also provided.
%B Research Report / Max-Planck-Institut für Informatik
Randomized Data Structures for the Dynamic Closest-Pair Problem
M. J. Golin, R. Raman, C. Schwarz and M. Smid
Technical Report, 1993
M. J. Golin, R. Raman, C. Schwarz and M. Smid
Technical Report, 1993
Abstract
We describe a new randomized data structure,
the {\em sparse partition}, for solving the
dynamic closest-pair problem. Using this
data structure the closest pair of a set of $n$ points in
$k$-dimensional space, for any fixed $k$, can be found
in constant time. If the points are chosen from a finite universe,
and if the floor function is available at unit-cost, then the
data structure supports insertions into and deletions from the set
in expected $O(\log n)$ time and requires expected $O(n)$ space.
Here, it is assumed that the updates are chosen by an adversary who
does not know the random choices made by the data structure.
The data structure can be modified to run in $O(\log^2 n)$ expected
time per update in the algebraic decision tree model of
computation. Even this version is more efficient than the currently
best known deterministic algorithms for solving the problem for $k>1$.
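To make the interface of the abstract's data structure concrete, here is a naive baseline with the same operations (my own illustrative class, not the sparse partition): constant-time closest-pair queries, but linear-time insertions, which the report's randomised structure improves to expected $O(\log n)$ under the stated model assumptions:

```python
import math

class NaiveClosestPair:
    """Baseline dynamic closest-pair structure: O(1) query as in the
    report, but O(n) per insertion (and deletion would force a full
    recomputation) -- the costs the sparse partition improves on."""
    def __init__(self):
        self.pts = []
        self.best = (math.inf, None, None)

    def insert(self, p):
        for q in self.pts:
            d = math.dist(p, q)
            if d < self.best[0]:
                self.best = (d, p, q)
        self.pts.append(p)

    def closest(self):
        return self.best  # (distance, point, point)

cp = NaiveClosestPair()
for p in [(0, 0), (5, 5), (1, 1), (5, 6)]:
    cp.insert(p)
assert cp.closest()[0] == 1.0  # the pair (5,5)-(5,6)
```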
Export
BibTeX
@techreport{GolinRamanSchwarzSmid93,
TITLE = {Randomized Data Structures for the Dynamic Closest-Pair Problem},
AUTHOR = {Golin, Mordecai J. and Raman, Rajeev and Schwarz, Christian and Smid, Michiel},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/93-102},
NUMBER = {MPI-I-93-102},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1993},
DATE = {1993},
ABSTRACT = {We describe a new randomized data structure, the {\em sparse partition}, for solving the dynamic closest-pair problem. Using this data structure the closest pair of a set of $n$ points in $k$-dimensional space, for any fixed $k$, can be found in constant time. If the points are chosen from a finite universe, and if the floor function is available at unit-cost, then the data structure supports insertions into and deletions from the set in expected $O(\log n)$ time and requires expected $O(n)$ space. Here, it is assumed that the updates are chosen by an adversary who does not know the random choices made by the data structure. The data structure can be modified to run in $O(\log^2 n)$ expected time per update in the algebraic decision tree model of computation. Even this version is more efficient than the currently best known deterministic algorithms for solving the problem for $k>1$.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Golin, Mordecai J.
%A Raman, Rajeev
%A Schwarz, Christian
%A Smid, Michiel
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Randomized Data Structures for the Dynamic Closest-Pair Problem :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B3F0-3
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/93-102
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1993
%P 32 p.
%X We describe a new randomized data structure,
the {\em sparse partition}, for solving the
dynamic closest-pair problem. Using this
data structure the closest pair of a set of $n$ points in
$k$-dimensional space, for any fixed $k$, can be found
in constant time. If the points are chosen from a finite universe,
and if the floor function is available at unit-cost, then the
data structure supports insertions into and deletions from the set
in expected $O(\log n)$ time and requires expected $O(n)$ space.
Here, it is assumed that the updates are chosen by an adversary who
does not know the random choices made by the data structure.
The data structure can be modified to run in $O(\log^2 n)$ expected
time per update in the algebraic decision tree model of
computation. Even this version is more efficient than the currently
best known deterministic algorithms for solving the problem for $k>1$.
%B Research Report / Max-Planck-Institut für Informatik
On multi-party communication complexity of random functions
V. Grolmusz
Technical Report, 1993a
V. Grolmusz
Technical Report, 1993a
Abstract
We prove that almost all Boolean functions have a high $k$--party communication complexity. The 2--party case was settled by {\it Papadimitriou} and {\it Sipser}.
Proving the $k$--party case needs a deeper investigation of the underlying structure
of the $k$--cylinder--intersections (the 2--cylinder--intersections are the rectangles).
\noindent First we examine the basic properties of $k$--cylinder--intersections; then an upper estimate is given for their number, which facilitates proving the lower--bound theorem for the $k$--party communication complexity of randomly chosen Boolean functions. In the last section we extend our results to the $\varepsilon$--distributional communication complexity of random functions.
Export
BibTeX
@techreport{MPI-I-93-162,
TITLE = {On multi-party communication complexity of random functions},
AUTHOR = {Grolmusz, Vince},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/93-162},
NUMBER = {MPI-I-93-162},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1993},
DATE = {1993},
ABSTRACT = {We prove that almost all Boolean functions have a high $k$--party communication complexity. The 2--party case was settled by {\it Papadimitriou} and {\it Sipser}. Proving the $k$--party case needs a deeper investigation of the underlying structure of the $k$--cylinder--intersections (the 2--cylinder--intersections are the rectangles). \noindent First we examine the basic properties of $k$--cylinder--intersections; then an upper estimate is given for their number, which facilitates proving the lower--bound theorem for the $k$--party communication complexity of randomly chosen Boolean functions. In the last section we extend our results to the $\varepsilon$--distributional communication complexity of random functions.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Grolmusz, Vince
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T On multi-party communication complexity of random functions :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B775-6
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/93-162
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1993
%P 10 p.
%X We prove that almost all Boolean functions have a high $k$--party communication complexity. The 2--party case was settled by {\it Papadimitriou} and {\it Sipser}.
Proving the $k$--party case needs a deeper investigation of the underlying structure
of the $k$--cylinder--intersections (the 2--cylinder--intersections are the rectangles).
\noindent First we examine the basic properties of $k$--cylinder--intersections; then an upper estimate is given for their number, which facilitates proving the lower--bound theorem for the $k$--party communication complexity of randomly chosen Boolean functions. In the last section we extend our results to the $\varepsilon$--distributional communication complexity of random functions.
%B Research Report / Max-Planck-Institut für Informatik
Multi-party protocols and spectral norms
V. Grolmusz
Technical Report, 1993b
V. Grolmusz
Technical Report, 1993b
Abstract
Let $f$ be a Boolean function of $n$ variables with $L_1$ spectral norm
$L_1(f)>n^\epsilon$ for some positive $\epsilon$. Then $f$ can be computed by an
$O(\log L_1(f))$-player multi--party protocol with $O(\log^2 L_1(f))$
communication.
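The $L_1$ spectral norm in the abstract is the sum of the absolute Fourier coefficients of $f$ over $\{-1,1\}^n$. For small $n$ it can be computed directly from the definition; a brute-force sketch (standard textbook definition, not code from the report):

```python
import math
from itertools import product

def l1_spectral_norm(f, n):
    """Sum of absolute Fourier coefficients of f over {-1,1}^n,
    where f is +/-1-valued: the quantity L_1(f) of the abstract."""
    points = list(product((-1, 1), repeat=n))
    total = 0.0
    for mask in product((0, 1), repeat=n):  # each mask encodes a set S
        # coefficient \hat f(S) = E_x [ f(x) * prod_{i in S} x_i ]
        coeff = sum(
            f(x) * math.prod(x[i] for i in range(n) if mask[i])
            for x in points
        ) / len(points)
        total += abs(coeff)
    return total

# parity has a single nonzero coefficient, so L_1(parity) = 1
assert abs(l1_spectral_norm(lambda x: math.prod(x), 3) - 1.0) < 1e-9
```

Functions with small $L_1$ norm are exactly those the abstract's multi--party protocol handles cheaply.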
Export
BibTeX
@techreport{MPI-I-93-132,
TITLE = {Multi-party protocols and spectral norms},
AUTHOR = {Grolmusz, Vince},
LANGUAGE = {eng},
NUMBER = {MPI-I-93-132},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1993},
DATE = {1993},
ABSTRACT = {Let $f$ be a Boolean function of $n$ variables with $L_1$ spectral norm $L_1(f)>n^\epsilon$ for some positive $\epsilon$. Then $f$ can be computed by an $O(\log L_1(f))$-player multi--party protocol with $O(\log^2 L_1(f))$ communication.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Grolmusz, Vince
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Multi-party protocols and spectral norms :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B753-1
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1993
%P 11 p.
%X Let $f$ be a Boolean function of $n$ variables with $L_1$ spectral norm
$L_1(f)>n^\epsilon$ for some positive $\epsilon$. Then $f$ can be computed by an
$O(\log L_1(f))$-player multi--party protocol with $O(\log^2 L_1(f))$
communication.
%B Research Report / Max-Planck-Institut für Informatik
Harmonic analysis, real approximation, and the communication complexity of Boolean functions
V. Grolmusz
Technical Report, 1993c
V. Grolmusz
Technical Report, 1993c
Abstract
In this paper we prove several fundamental theorems concerning the multi--party communication complexity of Boolean functions.
Let $g$ be a real function which approximates Boolean function $f$ of $n$ variables with error less than $1/5$. Then --- from our Theorem 1 --- there exists a $k=O(\log (n\L_1(g)))$--party protocol which computes $f$ with a communication of $O(\log^3(n\L_1(g)))$ bits, where $\L_1(g)$ denotes the $\L_1$ spectral norm of $g$.
We show an upper bound to the symmetric $k$--party communication complexity of Boolean functions in terms of their $\L_1$ norms in our Theorem 3. For $k=2$ it was known that the communication complexity of Boolean functions are closely related with the {\it rank} of their communication matrix [Ya1]. No analogous upper bound was known for the k--party communication complexity of {\it arbitrary} Boolean functions, where $k>2$.
Export
BibTeX
@techreport{MPI-I-93-161,
TITLE = {Harmonic analysis, real approximation, and the communication complexity of Boolean functions},
AUTHOR = {Grolmusz, Vince},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/93-161},
NUMBER = {MPI-I-93-161},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1993},
DATE = {1993},
ABSTRACT = {In this paper we prove several fundamental theorems concerning the multi--party communication complexity of Boolean functions. Let $g$ be a real function which approximates a Boolean function $f$ of $n$ variables with error less than $1/5$. Then --- from our Theorem 1 --- there exists a $k=O(\log (nL_1(g)))$--party protocol which computes $f$ with a communication of $O(\log^3(nL_1(g)))$ bits, where $L_1(g)$ denotes the $L_1$ spectral norm of $g$. We show an upper bound on the symmetric $k$--party communication complexity of Boolean functions in terms of their $L_1$ norms in our Theorem 3. For $k=2$ it was known that the communication complexity of a Boolean function is closely related to the {\it rank} of its communication matrix [Ya1]. No analogous upper bound was known for the $k$--party communication complexity of {\it arbitrary} Boolean functions, where $k>2$.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Grolmusz, Vince
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Harmonic analysis, real approximation, and the communication complexity of Boolean functions :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B770-0
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/93-161
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1993
%P 15 p.
%X In this paper we prove several fundamental theorems concerning the multi--party communication complexity of Boolean functions.
Let $g$ be a real function which approximates a Boolean function $f$ of $n$ variables with error less than $1/5$. Then --- from our Theorem 1 --- there exists a $k=O(\log (nL_1(g)))$--party protocol which computes $f$ with a communication of $O(\log^3(nL_1(g)))$ bits, where $L_1(g)$ denotes the $L_1$ spectral norm of $g$.
We show an upper bound on the symmetric $k$--party communication complexity of Boolean functions in terms of their $L_1$ norms in our Theorem 3. For $k=2$ it was known that the communication complexity of a Boolean function is closely related to the {\it rank} of its communication matrix [Ya1]. No analogous upper bound was known for the $k$--party communication complexity of {\it arbitrary} Boolean functions, where $k>2$.
%B Research Report / Max-Planck-Institut für Informatik
MOD m gates do not help on the ground floor
V. Grolmusz
Technical Report, 1993d
V. Grolmusz
Technical Report, 1993d
Abstract
We prove that any depth--3 circuit with MOD m gates of unbounded fan-in on the lowest level, AND gates on the second, and a weighted threshold gate on the top
needs either exponential size or exponential weights to compute the {\it inner product} of two vectors of length $n$ over GF(2). More precisely, we prove that $\Omega(n\log n)\leq \log w\log M$, where $w$ is the sum of the absolute values of the weights, and $M$ is the maximum fan--in of the AND gates on level 2. Setting all weights to 1, we obtain a trade--off between the logarithms of the top--fan--in and the maximum fan--in on level 2.
In contrast, with $n$ AND gates at the bottom and {\it a single} MOD 2 gate at the top one can compute the {\it inner product} function.
The lower--bound proof does not use any monotonicity or uniformity assumptions, and all of our gates have unbounded fan--in. The key step in the proof is a {\it random} evaluation protocol of a circuit with MOD $m$ gates.
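The upper-bound contrast stated in the abstract (that $n$ bottom-level AND gates feeding a single MOD 2 gate compute the inner product) is easy to make concrete. This sketch is ours, for illustration only; it checks the depth-2 circuit view against the arithmetic definition.

```python
from functools import reduce
from itertools import product

def inner_product_gf2(x, y):
    """Inner product over GF(2) as a depth-2 circuit:
    n AND gates on the bottom level, one MOD 2 (XOR) gate on top."""
    ands = [xi & yi for xi, yi in zip(x, y)]      # bottom level: n AND gates
    return reduce(lambda a, b: a ^ b, ands, 0)    # top level: single MOD 2 gate

# Exhaustive check against the arithmetic definition for n = 4.
n = 4
for x in product([0, 1], repeat=n):
    for y in product([0, 1], repeat=n):
        assert inner_product_gf2(x, y) == sum(a * b for a, b in zip(x, y)) % 2
```

The lower bound in the report concerns the reversed arrangement (MOD $m$ gates at the bottom, threshold on top), which this snippet does not attempt to model.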
Export
BibTeX
@techreport{MPI-I-93-142,
TITLE = {{MOD} m gates do not help on the ground floor},
AUTHOR = {Grolmusz, Vince},
LANGUAGE = {eng},
NUMBER = {MPI-I-93-142},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1993},
DATE = {1993},
ABSTRACT = {We prove that any depth--3 circuit with MOD m gates of unbounded fan-in on the lowest level, AND gates on the second, and a weighted threshold gate on the top needs either exponential size or exponential weights to compute the {\it inner product} of two vectors of length $n$ over GF(2). More precisely, we prove that $\Omega(n\log n)\leq \log w\log M$, where $w$ is the sum of the absolute values of the weights, and $M$ is the maximum fan--in of the AND gates on level 2. Setting all weights to 1, we obtain a trade--off between the logarithms of the top--fan--in and the maximum fan--in on level 2. In contrast, with $n$ AND gates at the bottom and {\it a single} MOD 2 gate at the top one can compute the {\it inner product} function. The lower--bound proof does not use any monotonicity or uniformity assumptions, and all of our gates have unbounded fan--in. The key step in the proof is a {\it random} evaluation protocol of a circuit with MOD $m$ gates.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Grolmusz, Vince
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T MOD m gates do not help on the ground floor :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B758-8
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1993
%P 13 p.
%X We prove that any depth--3 circuit with MOD m gates of unbounded fan-in on the lowest level, AND gates on the second, and a weighted threshold gate on the top
needs either exponential size or exponential weights to compute the {\it inner product} of two vectors of length $n$ over GF(2). More precisely, we prove that $\Omega(n\log n)\leq \log w\log M$, where $w$ is the sum of the absolute values of the weights, and $M$ is the maximum fan--in of the AND gates on level 2. Setting all weights to 1, we obtain a trade--off between the logarithms of the top--fan--in and the maximum fan--in on level 2.
In contrast, with $n$ AND gates at the bottom and {\it a single} MOD 2 gate at the top one can compute the {\it inner product} function.
The lower--bound proof does not use any monotonicity or uniformity assumptions, and all of our gates have unbounded fan--in. The key step in the proof is a {\it random} evaluation protocol of a circuit with MOD $m$ gates.
%B Research Report / Max-Planck-Institut für Informatik
On Intersection Searching Problems Involving Curved Objects
P. Gupta, R. Janardan and M. Smid
Technical Report, 1993a
P. Gupta, R. Janardan and M. Smid
Technical Report, 1993a
Export
BibTeX
@techreport{Smid93b,
TITLE = {On Intersection Searching Problems Involving Curved Objects},
AUTHOR = {Gupta, Prosenjit and Janardan, Ravi and Smid, Michiel},
LANGUAGE = {eng},
NUMBER = {MPI-I-93-124},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1993},
DATE = {1993},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Gupta, Prosenjit
%A Janardan, Ravi
%A Smid, Michiel
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
External Organizations
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T On Intersection Searching Problems Involving Curved Objects :
%G eng
%U http://hdl.handle.net/21.11116/0000-000D-5EFE-D
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1993
%P 43 p.
%B Research Report / Max-Planck-Institut für Informatik
Efficient algorithms for generalized intersection searching on non-iso-oriented objects
P. Gupta, R. Janardan and M. Smid
Technical Report, 1993b
P. Gupta, R. Janardan and M. Smid
Technical Report, 1993b
Abstract
In a generalized intersection searching problem, a set $S$ of colored geometric objects is to be preprocessed so that, given a query object $q$, the distinct colors of the objects of $S$ that are intersected by $q$ can be reported or counted efficiently. These problems generalize the well-studied standard intersection searching problems and are rich in applications. Unfortunately, the solutions known for the standard problems do not yield efficient solutions to the generalized problems. Recently, efficient solutions have been given for generalized problems where the input and query objects are iso-oriented, i.e., axes-parallel, or where the color classes satisfy additional properties, e.g., connectedness. In this paper, efficient algorithms are given for several generalized problems involving non-iso-oriented objects. These problems include: generalized halfspace range searching in ${\cal R}^d$, for any fixed $d \geq 2$, segment intersection searching, triangle stabbing, and triangle range searching in ${\cal R}^2$. The techniques used include: computing suitable sparse representations of the input, persistent data structures, and filtering search.
Export
BibTeX
@techreport{GuptaJanardanSmid93b,
TITLE = {Efficient algorithms for generalized intersection searching on non-iso-oriented objects},
AUTHOR = {Gupta, Prosenjit and Janardan, Ravi and Smid, Michiel},
LANGUAGE = {eng},
NUMBER = {MPI-I-93-166},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1993},
DATE = {1993},
ABSTRACT = {In a generalized intersection searching problem, a set $S$ of colored geometric objects is to be preprocessed so that, given a query object $q$, the distinct colors of the objects of $S$ that are intersected by $q$ can be reported or counted efficiently. These problems generalize the well-studied standard intersection searching problems and are rich in applications. Unfortunately, the solutions known for the standard problems do not yield efficient solutions to the generalized problems. Recently, efficient solutions have been given for generalized problems where the input and query objects are iso-oriented, i.e., axes-parallel, or where the color classes satisfy additional properties, e.g., connectedness. In this paper, efficient algorithms are given for several generalized problems involving non-iso-oriented objects. These problems include: generalized halfspace range searching in ${\cal R}^d$, for any fixed $d \geq 2$, segment intersection searching, triangle stabbing, and triangle range searching in ${\cal R}^2$. The techniques used include: computing suitable sparse representations of the input, persistent data structures, and filtering search.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Gupta, Prosenjit
%A Janardan, Ravi
%A Smid, Michiel
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
External Organizations
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Efficient algorithms for generalized intersection searching on non-iso-oriented objects :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B434-1
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1993
%P 32 p.
%X In a generalized intersection searching problem, a set $S$ of colored geometric objects is to be preprocessed so that, given a query object $q$, the distinct colors of the objects of $S$ that are intersected by $q$ can be reported or counted efficiently. These problems generalize the well-studied standard intersection searching problems and are rich in applications. Unfortunately, the solutions known for the standard problems do not yield efficient solutions to the generalized problems. Recently, efficient solutions have been given for generalized problems where the input and query objects are iso-oriented, i.e., axes-parallel, or where the color classes satisfy additional properties, e.g., connectedness. In this paper, efficient algorithms are given for several generalized problems involving non-iso-oriented objects. These problems include: generalized halfspace range searching in ${\cal R}^d$, for any fixed $d \geq 2$, segment intersection searching, triangle stabbing, and triangle range searching in ${\cal R}^2$. The techniques used include: computing suitable sparse representations of the input, persistent data structures, and filtering search.
%B Research Report / Max-Planck-Institut für Informatik
Optimal parallel string algorithms: sorting, merging and computing the minimum
T. Hagerup
Technical Report, 1993
T. Hagerup
Technical Report, 1993
Abstract
We study fundamental comparison problems on
strings of characters, equipped with the usual
lexicographical ordering.
For each problem studied, we give a parallel algorithm
that is optimal with respect to at least one
criterion for which no optimal
algorithm was previously known.
Specifically, our main results are:
%
\begin{itemize}
\item Two sorted sequences of strings, containing
altogether $n$~characters, can be merged in
$O(\log n)$ time using $O(n)$ operations
on an EREW PRAM.
This is optimal as regards both the running time
and the number of operations.
\item A sequence of strings, containing altogether
$n$~characters represented by integers of size
polynomial in~$n$, can be sorted
in $O({{\log n}/{\log\log n}})$ time
using $O(n\log\log n)$ operations on
a CRCW PRAM.
The running time is optimal for any
polynomial number of processors.
\item The minimum string in a sequence of strings
containing altogether $n$ characters can be
found using (expected) $O(n)$ operations in
constant expected time on a randomized
CRCW PRAM, in $O(\log\log n)$ time on a
deterministic CRCW PRAM with a program depending on~$n$,
in $O((\log\log n)^3)$ time on a
deterministic CRCW PRAM with a program
not depending on~$n$,
in $O(\log n)$ expected time on a randomized
EREW PRAM, and in $O(\log n\log\log n)$ time
on a deterministic EREW PRAM.
The number of operations is optimal, and
the running time is optimal for the randomized algorithms
and, if the number of processors is limited to~$n$,
for the nonuniform deterministic CRCW
PRAM algorithm as well.
\end{itemize}
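For the first result, the sequential baseline is the standard two-pointer merge of two sorted string sequences; the report's contribution is achieving $O(\log n)$ time with $O(n)$ operations on an EREW PRAM. A sketch of the sequential baseline (ours, not the parallel algorithm):

```python
def merge_sorted_strings(a, b):
    """Merge two lexicographically sorted string sequences into one
    sorted sequence; cost is linear in the total number of characters
    compared (the parallel EREW PRAM algorithm is not reproduced here)."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:          # lexicographic comparison of strings
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    out.extend(a[i:])             # append whichever sequence remains
    out.extend(b[j:])
    return out

print(merge_sorted_strings(["ant", "bee"], ["ape", "cat"]))
# ['ant', 'ape', 'bee', 'cat']
```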
Export
BibTeX
@techreport{MPI-I-93-152,
TITLE = {Optimal parallel string algorithms: sorting, merging and computing the minimum},
AUTHOR = {Hagerup, Torben},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/93-152},
NUMBER = {MPI-I-93-152},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1993},
DATE = {1993},
ABSTRACT = {We study fundamental comparison problems on strings of characters, equipped with the usual lexicographical ordering. For each problem studied, we give a parallel algorithm that is optimal with respect to at least one criterion for which no optimal algorithm was previously known. Specifically, our main results are: % \begin{itemize} \item Two sorted sequences of strings, containing altogether $n$~characters, can be merged in $O(\log n)$ time using $O(n)$ operations on an EREW PRAM. This is optimal as regards both the running time and the number of operations. \item A sequence of strings, containing altogether $n$~characters represented by integers of size polynomial in~$n$, can be sorted in $O({{\log n}/{\log\log n}})$ time using $O(n\log\log n)$ operations on a CRCW PRAM. The running time is optimal for any polynomial number of processors. \item The minimum string in a sequence of strings containing altogether $n$ characters can be found using (expected) $O(n)$ operations in constant expected time on a randomized CRCW PRAM, in $O(\log\log n)$ time on a deterministic CRCW PRAM with a program depending on~$n$, in $O((\log\log n)^3)$ time on a deterministic CRCW PRAM with a program not depending on~$n$, in $O(\log n)$ expected time on a randomized EREW PRAM, and in $O(\log n\log\log n)$ time on a deterministic EREW PRAM. The number of operations is optimal, and the running time is optimal for the randomized algorithms and, if the number of processors is limited to~$n$, for the nonuniform deterministic CRCW PRAM algorithm as well. \end{itemize}},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Hagerup, Torben
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Optimal parallel string algorithms: sorting, merging and computing the minimum :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B76A-F
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/93-152
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1993
%P 25 p.
%X We study fundamental comparison problems on
strings of characters, equipped with the usual
lexicographical ordering.
For each problem studied, we give a parallel algorithm
that is optimal with respect to at least one
criterion for which no optimal
algorithm was previously known.
Specifically, our main results are:
%
\begin{itemize}
\item Two sorted sequences of strings, containing
altogether $n$~characters, can be merged in
$O(\log n)$ time using $O(n)$ operations
on an EREW PRAM.
This is optimal as regards both the running time
and the number of operations.
\item A sequence of strings, containing altogether
$n$~characters represented by integers of size
polynomial in~$n$, can be sorted
in $O({{\log n}/{\log\log n}})$ time
using $O(n\log\log n)$ operations on
a CRCW PRAM.
The running time is optimal for any
polynomial number of processors.
\item The minimum string in a sequence of strings
containing altogether $n$ characters can be
found using (expected) $O(n)$ operations in
constant expected time on a randomized
CRCW PRAM, in $O(\log\log n)$ time on a
deterministic CRCW PRAM with a program depending on~$n$,
in $O((\log\log n)^3)$ time on a
deterministic CRCW PRAM with a program
not depending on~$n$,
in $O(\log n)$ expected time on a randomized
EREW PRAM, and in $O(\log n\log\log n)$ time
on a deterministic EREW PRAM.
The number of operations is optimal, and
the running time is optimal for the randomized algorithms
and, if the number of processors is limited to~$n$,
for the nonuniform deterministic CRCW
PRAM algorithm as well.
\end{itemize}
%B Research Report / Max-Planck-Institut für Informatik
Generalized topological sorting in linear time
T. Hagerup and M. Maas
Technical Report, 1993
T. Hagerup and M. Maas
Technical Report, 1993
Abstract
The generalized topological sorting problem
takes as input a positive integer $k$
and a directed, acyclic graph with
some vertices labeled by positive integers, and
the goal is to label the remaining vertices
by positive integers in such a way that each edge
leads from a lower-labeled vertex
to a higher-labeled vertex,
and such that the set of labels used
is exactly $\{1,\ldots,k\}$.
Given a generalized topological sorting problem, we want
to compute a solution, if one exists, and also
to test the uniqueness of a given solution.
%
The best previous algorithm for the generalized
topological sorting problem computes a solution,
if one exists, and tests its uniqueness in
$O(n\log\log n+m)$ time on input graphs with $n$
vertices and $m$ edges.
We describe improved algorithms
that solve both problems
in linear time $O(n+m)$.
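The feasibility condition being solved is simple to check, even though computing a solution in linear time is the report's contribution. This validator is ours (names are illustrative); it tests a candidate labeling against the two requirements stated above.

```python
def is_valid_labeling(n_vertices, edges, k, label):
    """Check a candidate solution of the generalized topological sorting
    problem: every vertex must be labeled, the set of labels used must be
    exactly {1,...,k}, and every edge must lead from a lower-labeled
    vertex to a higher-labeled one."""
    if len(label) != n_vertices:
        return False
    if set(label.values()) != set(range(1, k + 1)):  # duplicates allowed, gaps not
        return False
    return all(label[u] < label[v] for u, v in edges)

# Path 0 -> 1 -> 2: labeling {0:1, 1:2, 2:3} is valid for k = 3.
print(is_valid_labeling(3, [(0, 1), (1, 2)], 3, {0: 1, 1: 2, 2: 3}))  # True
```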
Export
BibTeX
@techreport{MPI-I-93-119,
TITLE = {Generalized topological sorting in linear time},
AUTHOR = {Hagerup, Torben and Maas, Martin},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/93-119},
NUMBER = {MPI-I-93-119},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1993},
DATE = {1993},
ABSTRACT = {The generalized topological sorting problem takes as input a positive integer $k$ and a directed, acyclic graph with some vertices labeled by positive integers, and the goal is to label the remaining vertices by positive integers in such a way that each edge leads from a lower-labeled vertex to a higher-labeled vertex, and such that the set of labels used is exactly $\{1,\ldots,k\}$. Given a generalized topological sorting problem, we want to compute a solution, if one exists, and also to test the uniqueness of a given solution. % The best previous algorithm for the generalized topological sorting problem computes a solution, if one exists, and tests its uniqueness in $O(n\log\log n+m)$ time on input graphs with $n$ vertices and $m$ edges. We describe improved algorithms that solve both problems in linear time $O(n+m)$.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Hagerup, Torben
%A Maas, Martin
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Generalized topological sorting in linear time :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B74A-8
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/93-119
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1993
%P 10 p.
%X The generalized topological sorting problem
takes as input a positive integer $k$
and a directed, acyclic graph with
some vertices labeled by positive integers, and
the goal is to label the remaining vertices
by positive integers in such a way that each edge
leads from a lower-labeled vertex
to a higher-labeled vertex,
and such that the set of labels used
is exactly $\{1,\ldots,k\}$.
Given a generalized topological sorting problem, we want
to compute a solution, if one exists, and also
to test the uniqueness of a given solution.
%
The best previous algorithm for the generalized
topological sorting problem computes a solution,
if one exists, and tests its uniqueness in
$O(n\log\log n+m)$ time on input graphs with $n$
vertices and $m$ edges.
We describe improved algorithms
that solve both problems
in linear time $O(n+m)$.
%B Research Report / Max-Planck-Institut für Informatik
New techniques for exact and approximate dynamic closest-point problems
S. Kapoor and M. Smid
Technical Report, 1993
S. Kapoor and M. Smid
Technical Report, 1993
Abstract
Let $S$ be a set of $n$ points in $\IR^{D}$. It is shown that
a range tree can be used to find an $L_{\infty}$-nearest
neighbor in $S$ of any query point, in
$O((\log n)^{D-1} \log\log n)$ time. This data structure has
size $O(n (\log n)^{D-1})$ and an amortized update time of
$O((\log n)^{D-1} \log\log n)$. This result is used to
solve the $(1+\epsilon)$-approximate $L_{2}$-nearest
neighbor problem within the same bounds. In this problem,
for any query point $p$, a point $q \in S$ is computed such
that the euclidean distance between $p$ and $q$ is at most
$(1+\epsilon)$ times the euclidean distance between $p$ and
its true nearest neighbor.
This is the first dynamic data structure for this problem
having close to linear size and polylogarithmic query and
update times.
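The reduction from approximate $L_2$ search to exact $L_\infty$ search rests on the norm inequality $L_\infty \le L_2 \le \sqrt{D}\,L_\infty$. This brute-force sketch is ours (the report's range-tree structure is not reproduced); it only illustrates why an $L_\infty$-nearest neighbor is a $\sqrt{D}$-approximate $L_2$-nearest neighbor.

```python
import math

def linf(p, q):
    """L_infinity (Chebyshev) distance."""
    return max(abs(a - b) for a, b in zip(p, q))

def linf_nearest(points, query):
    """Brute-force L_infinity nearest neighbor; the range tree in the
    report answers the same query in polylogarithmic time."""
    return min(points, key=lambda p: linf(p, query))

# Since linf(p,q) <= l2(p,q) <= sqrt(D) * linf(p,q) in D dimensions,
# the L_inf-nearest point is within factor sqrt(D) of the true L2 optimum.
pts = [(0.0, 0.0), (3.0, 1.0), (1.0, 2.0)]
q = (1.0, 1.0)
cand = linf_nearest(pts, q)                      # may differ from the L2 optimum
best = min(pts, key=lambda p: math.dist(p, q))   # true L2 nearest neighbor
assert math.dist(cand, q) <= math.sqrt(2) * math.dist(best, q)
```

In the example, the $L_\infty$-nearest point is $(0,0)$ while the true $L_2$-nearest is $(1,2)$, and the approximation bound holds with equality.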
Export
BibTeX
@techreport{KapoorSmid93,
TITLE = {New techniques for exact and approximate dynamic closest-point problems},
AUTHOR = {Kapoor, Sanjiv and Smid, Michiel},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/93-159},
NUMBER = {MPI-I-93-159},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1993},
DATE = {1993},
ABSTRACT = {Let $S$ be a set of $n$ points in $\IR^{D}$. It is shown that a range tree can be used to find an $L_{\infty}$-nearest neighbor in $S$ of any query point, in $O((\log n)^{D-1} \log\log n)$ time. This data structure has size $O(n (\log n)^{D-1})$ and an amortized update time of $O((\log n)^{D-1} \log\log n)$. This result is used to solve the $(1+\epsilon)$-approximate $L_{2}$-nearest neighbor problem within the same bounds. In this problem, for any query point $p$, a point $q \in S$ is computed such that the euclidean distance between $p$ and $q$ is at most $(1+\epsilon)$ times the euclidean distance between $p$ and its true nearest neighbor. This is the first dynamic data structure for this problem having close to linear size and polylogarithmic query and update times.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Kapoor, Sanjiv
%A Smid, Michiel
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T New techniques for exact and approximate dynamic closest-point problems :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B42E-2
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/93-159
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1993
%P 29 p.
%X Let $S$ be a set of $n$ points in $\IR^{D}$. It is shown that
a range tree can be used to find an $L_{\infty}$-nearest
neighbor in $S$ of any query point, in
$O((\log n)^{D-1} \log\log n)$ time. This data structure has
size $O(n (\log n)^{D-1})$ and an amortized update time of
$O((\log n)^{D-1} \log\log n)$. This result is used to
solve the $(1+\epsilon)$-approximate $L_{2}$-nearest
neighbor problem within the same bounds. In this problem,
for any query point $p$, a point $q \in S$ is computed such
that the euclidean distance between $p$ and $q$ is at most
$(1+\epsilon)$ times the euclidean distance between $p$ and
its true nearest neighbor.
This is the first dynamic data structure for this problem
having close to linear size and polylogarithmic query and
update times.
%B Research Report / Max-Planck-Institut für Informatik
Expected complexity of graph partitioning problems
L. Kučera
Technical Report, 1993a
L. Kučera
Technical Report, 1993a
Abstract
We study one bit broadcast in a one-dimensional network with nodes
${\cal N}_0,\ldots,{\cal N}_n$, in which each ${\cal N}_{i-1}$ sends information to ${\cal N}_i$. We suppose that the broadcasting is synchronous, and at each step each atomic transmission ${\cal N}_{i-1}\rightarrow{\cal N}_i$ could be temporarily incorrect with probability equal to a constant $0<p<1/2$. The probabilities of failure for different steps and different nodes are supposed to be independent.
For each constant $c$ there is a ``classical'' algorithm with $O(n\log n)$ broadcast time and error probability $O(n^{-c})$.
The paper studies the possibility of a reliable broadcasting in $o(n\log n)$ time. We first show that one natural generalization of the classical algorithm, which was believed to behave well, has very bad properties (the probability of an error close to 1/2).
The second part of the paper presents the ultimate solution to the problem of the broadcast time in a one-dimensional network with faults. Our algorithms have linear broadcast time, good (though not optimal) delay time, and they are extremely reliable. For example we can transmit a bit through a network of $N=1000000$ nodes with $p=0.1$ in $8999774<9N$ steps with probability of error less than $10^{-436}$.
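The ``classical'' scheme referred to above repeats each hop's transmission $O(\log n)$ times and takes a majority vote at the receiver. This Monte Carlo sketch is ours, with illustrative parameters; it is not the linear-time algorithm of the report.

```python
import random

def broadcast(n, p, reps, rng):
    """Relay one bit along a chain of n links; each atomic transmission
    flips independently with probability p.  Each hop sends the bit
    `reps` times and the receiver takes a majority vote (the classical
    scheme needs reps = O(log n) for polynomially small error)."""
    bit = 1
    for _ in range(n):
        received_ones = sum(
            (bit if rng.random() > p else 1 - bit)  # each copy may flip
            for _ in range(reps)
        )
        bit = 1 if 2 * received_ones > reps else 0  # majority vote (reps odd)
    return bit

trials = 200
ok = sum(broadcast(100, 0.1, 9, random.Random(i)) == 1 for i in range(trials))
print(ok / trials)  # a large fraction of trials deliver the bit correctly
```

With 9 repetitions per hop and $p=0.1$, the per-hop error after the vote is below $10^{-3}$, so most of the 200 trials succeed over a 100-link chain.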
Export
BibTeX
@techreport{Kucera93c,
TITLE = {Expected complexity of graph partitioning problems},
AUTHOR = {Ku{\v c}era, Lud{\v e}k},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/93-107},
NUMBER = {MPI-I-93-107},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1993},
DATE = {1993},
ABSTRACT = {We study one bit broadcast in a one-dimensional network with nodes ${\cal N}_0,\ldots,{\cal N}_n$, in which each ${\cal N}_{i-1}$ sends information to ${\cal N}_i$. We suppose that the broadcasting is synchronous, and at each step each atomic transmission ${\cal N}_{i-1}\rightarrow{\cal N}_i$ could be temporarily incorrect with probability equal to a constant $0<p<1/2$. The probabilities of failure for different steps and different nodes are supposed to be independent. For each constant $c$ there is a ``classical'' algorithm with $O(n\log n)$ broadcast time and error probability $O(n^{-c})$. The paper studies the possibility of a reliable broadcasting in $o(n\log n)$ time. We first show that one natural generalization of the classical algorithm, which was believed to behave well, has very bad properties (the probability of an error close to 1/2). The second part of the paper presents the ultimate solution to the problem of the broadcast time in a one-dimensional network with faults. Our algorithms have linear broadcast time, good (though not optimal) delay time, and they are extremely reliable. For example we can transmit a bit through a network of $N=1000000$ nodes with $p=0.1$ in $8999774<9N$ steps with probability of error less than $10^{-436}$.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Kučera, Luděk
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Expected complexity of graph partitioning problems :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B73F-2
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/93-107
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1993
%P 16 p.
%X We study one bit broadcast in a one-dimensional network with nodes
${\cal N}_0,\ldots,{\cal N}_n$, in which each ${\cal N}_{i-1}$ sends information to ${\cal N}_i$. We suppose that the broadcasting is synchronous, and at each step each atomic transmission ${\cal N}_{i-1}\rightarrow{\cal N}_i$ could be temporarily incorrect with probability equal to a constant $0<p<1/2$. The probabilities of failure for different steps and different nodes are supposed to be independent.
For each constant $c$ there is a ``classical'' algorithm with $O(n\log n)$ broadcast time and error probability $O(n^{-c})$.
The paper studies the possibility of a reliable broadcasting in $o(n\log n)$ time. We first show that one natural generalization of the classical algorithm, which was believed to behave well, has very bad properties (the probability of an error close to 1/2).
The second part of the paper presents the ultimate solution to the problem of the broadcast time in a one-dimensional network with faults. Our algorithms have linear broadcast time, good (though not optimal) delay time, and they are extremely reliable. For example we can transmit a bit through a network of $N=1000000$ nodes with $p=0.1$ in $8999774<9N$ steps with probability of error less than $10^{-436}$.
%B Research Report / Max-Planck-Institut für Informatik
Coloring k-colorable graphs in constant expected parallel time
L. Kučera
Technical Report, 1993b
L. Kučera
Technical Report, 1993b
Abstract
A parallel (CRCW PRAM) algorithm is given to find a $k$-coloring of
a graph randomly drawn from the family of $k$-colorable graphs with
$n$ vertices, where $k = \log^{O(1)}n$. The average running time of
the algorithm is {\em constant}, and the number of processors is equal
to $|V|+|E|$, where $|V|$ and $|E|$ are the numbers of vertices and
edges of the input graph, respectively.
Export
BibTeX
@techreport{MPI-I-93-110,
TITLE = {Coloring k-colorable graphs in constant expected parallel time},
AUTHOR = {Ku{\v c}era, Lud{\v e}k},
LANGUAGE = {eng},
NUMBER = {MPI-I-93-110},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1993},
DATE = {1993},
ABSTRACT = {A parallel (CRCW PRAM) algorithm is given to find a $k$-coloring of a graph randomly drawn from the family of $k$-colorable graphs with $n$ vertices, where $k = \log^{O(1)}n$. The average running time of the algorithm is {\em constant}, and the number of processors is equal to $|V|+|E|$, where $|V|$ and $|E|$ are the numbers of vertices and edges of the input graph, respectively.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Kučera, Luděk
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Coloring k-colorable graphs in constant expected parallel time :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B745-1
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1993
%P 14 p.
%X A parallel (CRCW PRAM) algorithm is given to find a $k$-coloring of
a graph randomly drawn from the family of $k$-colorable graphs with
$n$ vertices, where $k = \log^{O(1)}n$. The average running time of
the algorithm is {\em constant}, and the number of processors is equal
to $|V|+|E|$, where $|V|$ and $|E|$ are the numbers of vertices and
edges of the input graph, respectively.
%B Research Report / Max-Planck-Institut für Informatik
Randomized incremental construction of abstract Voronoi diagrams
L. Kučera
Technical Report, 1993c
L. Kučera
Technical Report, 1993c
Abstract
Abstract Voronoi diagrams were introduced by R.~Klein
as an axiomatic basis of
Voronoi diagrams.
We show how to construct abstract Voronoi diagrams in time
$O(n\log n)$ by a randomized algorithm,
which is based on Clarkson and Shor's randomized
incremental construction technique.
The new algorithm has the following advantages over
previous algorithms:
\begin{itemize}
\item
It can handle a much wider class of abstract Voronoi
diagrams than the algorithms presented in [Kle89b, MMO91].
\item
It can be adapted to a concrete kind of Voronoi diagram by
providing a single basic operation, namely the
construction of a Voronoi diagram of five sites.
Moreover, all geometric decisions are confined to the
basic operation, and using this operation, abstract
Voronoi diagrams can be constructed in a purely
combinatorial manner.
\end{itemize}
Export
BibTeX
@techreport{MPI-I-93-105,
TITLE = {Randomized incremental construction of abstract Voronoi diagrams},
AUTHOR = {Ku{\v c}era, Lud{\v e}k},
LANGUAGE = {eng},
NUMBER = {MPI-I-93-105},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1993},
DATE = {1993},
ABSTRACT = {Abstract Voronoi diagrams were introduced by R.~Klein as an axiomatic basis of Voronoi diagrams. We show how to construct abstract Voronoi diagrams in time $O(n\log n)$ by a randomized algorithm, which is based on Clarkson and Shor's randomized incremental construction technique. The new algorithm has the following advantages over previous algorithms: \begin{itemize} \item It can handle a much wider class of abstract Voronoi diagrams than the algorithms presented in [Kle89b, MMO91]. \item It can be adapted to a concrete kind of Voronoi diagram by providing a single basic operation, namely the construction of a Voronoi diagram of five sites. Moreover, all geometric decisions are confined to the basic operation, and using this operation, abstract Voronoi diagrams can be constructed in a purely combinatorial manner. \end{itemize}},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Kučera, Luděk
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Randomized incremental construction of abstract Voronoi diagrams :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B738-0
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1993
%P 29 p.
%X Abstract Voronoi diagrams were introduced by R.~Klein
as an axiomatic basis of
Voronoi diagrams.
We show how to construct abstract Voronoi diagrams in time
$O(n\log n)$ by a randomized algorithm,
which is based on Clarkson and Shor's randomized
incremental construction technique.
The new algorithm has the following advantages over
previous algorithms:
\begin{itemize}
\item
It can handle a much wider class of abstract Voronoi
diagrams than the algorithms presented in [Kle89b, MMO91].
\item
It can be adapted to a concrete kind of Voronoi diagram by
providing a single basic operation, namely the
construction of a Voronoi diagram of five sites.
Moreover, all geometric decisions are confined to the
basic operation, and using this operation, abstract
Voronoi diagrams can be constructed in a purely
combinatorial manner.
\end{itemize}
%B Research Report / Max-Planck-Institut für Informatik
Broadcasting through a noisy one-dimensional network
L. Kučera
Technical Report, 1993d
L. Kučera
Technical Report, 1993d
Abstract
We study the expected time complexity of two graph partitioning
problems: the graph coloring and the cut into equal parts.
If $k=o(\sqrt{n/\log n})$, we can test whether two vertices of a $k$-colorable
graph can be $k$-colored by the same color in time $O(k\log n)$ per pair of
vertices with
$O(k^4\log^3n)$-time preprocessing in such a way that for almost all $k$-colorable
graphs the answer is correct for all pairs of vertices.
As a consequence, we obtain a sublinear (with respect to the number of edges)
expected time algorithm for $k$-coloring
of $k$-colorable graphs (assuming the uniform input distribution).
Similarly, if $ c\le (1/8-\epsilon)n^2 $, $ \epsilon>0 $ a constant,
and $G$ is a graph having a cut of the vertex
set into two equal parts with at most $c$ cross-edges, we can test whether two
vertices belong to the same class of some $c$-cut in time $O(\log n)$ per vertex
with $O(\log^3n)$-time preprocessing in such a way that for almost all graphs
having a $c$-cut the answer is correct for all pairs of vertices.
The methods presented in the paper can also be applied to other graph partitioning
problems, e.g. the largest clique or independent subset.
Export
BibTeX
@techreport{MPI-I-93-106,
TITLE = {Broadcasting through a noisy one-dimensional network},
AUTHOR = {Ku{\v c}era, Lud{\v e}k},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/93-106},
NUMBER = {MPI-I-93-106},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1993},
DATE = {1993},
ABSTRACT = {We study the expected time complexity of two graph partitioning problems: the graph coloring and the cut into equal parts. If $k=o(\sqrt{n/\log n})$, we can test whether two vertices of a $k$-colorable graph can be $k$-colored by the same color in time $O(k\log n)$ per pair of vertices with $O(k^4\log^3n)$-time preprocessing in such a way that for almost all $k$-colorable graphs the answer is correct for all pairs of vertices. As a consequence, we obtain a sublinear (with respect to the number of edges) expected time algorithm for $k$-coloring of $k$-colorable graphs (assuming the uniform input distribution). Similarly, if $ c\le (1/8-\epsilon)n^2 $, $ \epsilon>0 $ a constant, and $G$ is a graph having a cut of the vertex set into two equal parts with at most $c$ cross-edges, we can test whether two vertices belong to the same class of some $c$-cut in time $O(\log n)$ per vertex with $O(\log^3n)$-time preprocessing in such a way that for almost all graphs having a $c$-cut the answer is correct for all pairs of vertices. The methods presented in the paper can also be applied to other graph partitioning problems, e.g. the largest clique or independent subset.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Kučera, Luděk
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Broadcasting through a noisy one-dimensional network :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B73D-6
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/93-106
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1993
%P 14 p.
%X We study the expected time complexity of two graph partitioning
problems: the graph coloring and the cut into equal parts.
If $k=o(\sqrt{n/\log n})$, we can test whether two vertices of a $k$-colorable
graph can be $k$-colored by the same color in time $O(k\log n)$ per pair of
vertices with
$O(k^4\log^3n)$-time preprocessing in such a way that for almost all $k$-colorable
graphs the answer is correct for all pairs of vertices.
As a consequence, we obtain a sublinear (with respect to the number of edges)
expected time algorithm for $k$-coloring
of $k$-colorable graphs (assuming the uniform input distribution).
Similarly, if $ c\le (1/8-\epsilon)n^2 $, $ \epsilon>0 $ a constant,
and $G$ is a graph having a cut of the vertex
set into two equal parts with at most $c$ cross-edges, we can test whether two
vertices belong to the same class of some $c$-cut in time $O(\log n)$ per vertex
with $O(\log^3n)$-time preprocessing in such a way that for almost all graphs
having a $c$-cut the answer is correct for all pairs of vertices.
The methods presented in the paper can also be applied to other graph partitioning
problems, e.g. the largest clique or independent subset.
%B Research Report / Max-Planck-Institut für Informatik
Tail estimates for the efficiency of randomized incremental algorithms for line segment intersection
K. Mehlhorn, M. Sharir and E. Welzl
Technical Report, 1993
K. Mehlhorn, M. Sharir and E. Welzl
Technical Report, 1993
Abstract
We give tail estimates for the efficiency of some randomized
incremental algorithms for line segment intersection in the
plane.
In particular, we show that there is a constant $C$ such that the
probability that the running times of algorithms due to Mulmuley
and Clarkson and Shor
exceed $C$ times their expected time is bounded by $e^{-\Omega (m/(n\ln n))}$
where $n$ is the number of segments, $m$ is the number of
intersections, and $m \geq n \ln n \ln^{(3)}n$.
Export
BibTeX
@techreport{MehlhornSharirWelzl,
TITLE = {Tail estimates for the efficiency of randomized incremental algorithms for line segment intersection},
AUTHOR = {Mehlhorn, Kurt and Sharir, Micha and Welzl, Emo},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/93-103},
NUMBER = {MPI-I-93-103},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1993},
DATE = {1993},
ABSTRACT = {We give tail estimates for the efficiency of some randomized incremental algorithms for line segment intersection in the plane. In particular, we show that there is a constant $C$ such that the probability that the running times of algorithms due to Mulmuley and Clarkson and Shor exceed $C$ times their expected time is bounded by $e^{-\Omega (m/(n\ln n))}$ where $n$ is the number of segments, $m$ is the number of intersections, and $m \geq n \ln n \ln^{(3)}n$.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Mehlhorn, Kurt
%A Sharir, Micha
%A Welzl, Emo
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Tail estimates for the efficiency of randomized incremental algorithms for line segment intersection :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B736-3
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/93-103
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1993
%P 12 p.
%X We give tail estimates for the efficiency of some randomized
incremental algorithms for line segment intersection in the
plane.
In particular, we show that there is a constant $C$ such that the
probability that the running times of algorithms due to Mulmuley
and Clarkson and Shor
exceed $C$ times their expected time is bounded by $e^{-\Omega (m/(n\ln n))}$
where $n$ is the number of segments, $m$ is the number of
intersections, and $m \geq n \ln n \ln^{(3)}n$.
%B Research Report / Max-Planck-Institut für Informatik
A Complete and Efficient Algorithm for the Intersection of a General and a Convex Polyhedron
K. Mehlhorn, K. Dobrindt and M. Yvinec
Technical Report, 1993
K. Mehlhorn, K. Dobrindt and M. Yvinec
Technical Report, 1993
Export
BibTeX
@techreport{Mehlhorn-et-al_93Rep.INRIA,
TITLE = {A Complete and Efficient Algorithm for the Intersection of a General and a Convex Polyhedron},
AUTHOR = {Mehlhorn, Kurt and Dobrindt, Katrin and Yvinec, Mariette},
LANGUAGE = {eng},
NUMBER = {RR-2023},
INSTITUTION = {Institut National de Recherche en Informatique et en Automatique},
ADDRESS = {Sophia Antipolis, France},
YEAR = {1993},
DATE = {1993},
TYPE = {Rapport de Recherche},
}
Endnote
%0 Report
%A Mehlhorn, Kurt
%A Dobrindt, Katrin
%A Yvinec, Mariette
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
External Organizations
External Organizations
%T A Complete and Efficient Algorithm for the Intersection of a General and a Convex Polyhedron :
%G eng
%U http://hdl.handle.net/21.11116/0000-000E-473C-0
%Y Institut National de Recherche en Informatique et en Automatique
%C Sophia Antipolis, France
%D 1993
%B Rapport de Recherche
Maintaining dynamic sequences under equality-tests in polylogarithmic time
K. Mehlhorn and C. Uhrig
Technical Report, 1993
K. Mehlhorn and C. Uhrig
Technical Report, 1993
Abstract
We present a randomized and a deterministic
data structure for maintaining a dynamic family of
sequences under equality--tests of pairs of sequences and creations
of new sequences by joining or splitting existing sequences.
Both data structures support equality--tests in $O(1)$ time. The
randomized version supports new sequence creations in $O(\log^2 n)$
expected time
where $n$ is the length of the sequence created. The
deterministic solution supports sequence creations in
$O(\log n(\log m \log^* m + \log n))$ time for the $m$--th operation.
Export
BibTeX
@techreport{MehlhornUhrig93,
TITLE = {Maintaining dynamic sequences under equality-tests in polylogarithmic time},
AUTHOR = {Mehlhorn, Kurt and Uhrig, Christian},
LANGUAGE = {eng},
NUMBER = {MPI-I-93-128},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1993},
DATE = {1993},
ABSTRACT = {We present a randomized and a deterministic data structure for maintaining a dynamic family of sequences under equality--tests of pairs of sequences and creations of new sequences by joining or splitting existing sequences. Both data structures support equality--tests in $O(1)$ time. The randomized version supports new sequence creations in $O(\log^2 n)$ expected time where $n$ is the length of the sequence created. The deterministic solution supports sequence creations in $O(\log n(\log m \log^* m + \log n))$ time for the $m$--th operation.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Mehlhorn, Kurt
%A Uhrig, Christian
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Maintaining dynamic sequences under equality-tests in polylogarithmic time :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B425-3
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1993
%P 17 p.
%X We present a randomized and a deterministic
data structure for maintaining a dynamic family of
sequences under equality--tests of pairs of sequences and creations
of new sequences by joining or splitting existing sequences.
Both data structures support equality--tests in $O(1)$ time. The
randomized version supports new sequence creations in $O(\log^2 n)$
expected time
where $n$ is the length of the sequence created. The
deterministic solution supports sequence creations in
$O(\log n(\log m \log^* m + \log n))$ time for the $m$--th operation.
%B Research Report / Max-Planck-Institut für Informatik
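As a toy illustration of the $O(1)$ equality tests described in the abstract above (our own sketch, not the paper's data structure, which uses balanced trees to make creations polylogarithmic): intern every sequence in a table so that equal sequences share one canonical object, turning equality into a pointer comparison.

```python
# Toy sketch (ours, not the paper's structure): hash-consing gives O(1)
# equality tests because equal sequences share a single canonical object.
# Joins here copy the whole sequence (O(n) per creation); the paper's
# balanced-tree representation brings creations down to polylogarithmic time.

_interned = {}

def make_seq(items):
    """Return the unique canonical representative of this sequence."""
    key = tuple(items)
    return _interned.setdefault(key, key)

def join(a, b):
    return make_seq(a + b)

def split(s, i):
    return make_seq(s[:i]), make_seq(s[i:])

def equal(a, b):
    return a is b  # O(1): identity test on canonical representatives

s1 = join(make_seq("abc"), make_seq("def"))
left, right = split(make_seq("abcdef"), 3)
assert equal(s1, join(left, right))
assert not equal(s1, make_seq("abcdeg"))
```

The point of the sketch is only the equality test; the paper's contribution is making join and split cheap while preserving canonical representatives.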
An implementation of the Hopcroft and Tarjan planarity test and embedding algorithm
K. Mehlhorn, P. Mutzel and S. Näher
Technical Report, 1993
K. Mehlhorn, P. Mutzel and S. Näher
Technical Report, 1993
Abstract
We design new inference systems for total orderings
by applying rewrite techniques to chaining calculi.
Equality relations may either be specified axiomatically
or built into the deductive calculus via paramodulation or superposition.
We demonstrate that our inference systems are compatible
with a concept of (global) redundancy for clauses and inferences
that covers such widely used simplification techniques
as tautology deletion, subsumption, and demodulation.
A key to the practicality of chaining techniques is
the extent to which so-called variable chainings can be restricted.
Syntactic ordering restrictions on terms
and the rewrite techniques which account for their completeness
considerably restrict variable chaining.
We show that variable elimination
is an admissible simplification technique
within our redundancy framework,
and that consequently for dense total orderings without endpoints
no variable chaining is needed at all.
Export
BibTeX
@techreport{Mehlhorn93,
TITLE = {An implementation of the Hopcroft and Tarjan planarity test and embedding algorithm},
AUTHOR = {Mehlhorn, Kurt and Mutzel, Petra and N{\"a}her, Stefan},
LANGUAGE = {eng},
NUMBER = {MPI-I-93-151},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1993},
DATE = {1993},
ABSTRACT = {We design new inference systems for total orderings by applying rewrite techniques to chaining calculi. Equality relations may either be specified axiomatically or built into the deductive calculus via paramodulation or superposition. We demonstrate that our inference systems are compatible with a concept of (global) redundancy for clauses and inferences that covers such widely used simplification techniques as tautology deletion, subsumption, and demodulation. A key to the practicality of chaining techniques is the extent to which so-called variable chainings can be restricted. Syntactic ordering restrictions on terms and the rewrite techniques which account for their completeness considerably restrict variable chaining. We show that variable elimination is an admissible simplification technique within our redundancy framework, and that consequently for dense total orderings without endpoints no variable chaining is needed at all.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Mehlhorn, Kurt
%A Mutzel, Petra
%A Näher, Stefan
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T An implementation of the Hopcroft and Tarjan planarity test and embedding algorithm :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B42B-8
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1993
%P 20 p.
%X We design new inference systems for total orderings
by applying rewrite techniques to chaining calculi.
Equality relations may either be specified axiomatically
or built into the deductive calculus via paramodulation or superposition.
We demonstrate that our inference systems are compatible
with a concept of (global) redundancy for clauses and inferences
that covers such widely used simplification techniques
as tautology deletion, subsumption, and demodulation.
A key to the practicality of chaining techniques is
the extent to which so-called variable chainings can be restricted.
Syntactic ordering restrictions on terms
and the rewrite techniques which account for their completeness
considerably restrict variable chaining.
We show that variable elimination
is an admissible simplification technique
within our redundancy framework,
and that consequently for dense total orderings without endpoints
no variable chaining is needed at all.
%B Research Report / Max-Planck-Institut für Informatik
LEDA-Manual Version 3.0
S. Näher
Technical Report, 1993
S. Näher
Technical Report, 1993
Abstract
No abstract available.
Export
BibTeX
@techreport{MPI-I-93-109,
TITLE = {{LEDA}-Manual Version 3.0},
AUTHOR = {N{\"a}her, Stefan},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/93-109},
NUMBER = {MPI-I-93-109},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1993},
DATE = {1993},
ABSTRACT = {No abstract available.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Näher, Stefan
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T LEDA-Manual Version 3.0 :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B743-5
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/93-109
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1993
%P 140 p.
%X No abstract available.
%B Research Report / Max-Planck-Institut für Informatik
Constructive Deterministic PRAM Simulation on a Mesh-connected Computer
A. Pietracaprina, G. Pucci and J. Sibeyn
Technical Report, 1993
A. Pietracaprina, G. Pucci and J. Sibeyn
Technical Report, 1993
Export
BibTeX
@techreport{Pietracaprina-et-al_93Rep.ICSI,
TITLE = {Constructive Deterministic {PRAM} Simulation on a Mesh-connected Computer},
AUTHOR = {Pietracaprina, Andrea and Pucci, Geppino and Sibeyn, Jop},
LANGUAGE = {eng},
NUMBER = {TR-93-059},
INSTITUTION = {International Computer Science Institute},
ADDRESS = {Berkeley},
YEAR = {1993},
DATE = {1993},
TYPE = {ICSI Technical Report},
}
Endnote
%0 Report
%A Pietracaprina, Andrea
%A Pucci, Geppino
%A Sibeyn, Jop
%+ External Organizations
External Organizations
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Constructive Deterministic PRAM Simulation on a Mesh-connected Computer :
%G eng
%U http://hdl.handle.net/21.11116/0000-000E-50DD-F
%Y International Computer Science Institute
%C Berkeley
%D 1993
%P 18 p.
%B ICSI Technical Report
Lower bounds for merging on the hypercube
C. Rüb
Technical Report, 1993
C. Rüb
Technical Report, 1993
Abstract
We show non-trivial lower bounds for several prefix problems in the
CRCW PRAM model. Our main result is an $\Omega(\alpha(n))$ lower bound
for the chaining problem, matching the previously known upper bound.
We give a reduction to show that the same lower bound applies to a
parenthesis matching problem, again matching the previously known
upper bound. We also give reductions to show that similar lower
bounds hold for the prefix maxima and the range maxima problems.
Export
BibTeX
@techreport{MPI-I-93-148,
TITLE = {Lower bounds for merging on the hypercube},
AUTHOR = {R{\"u}b, Christine},
LANGUAGE = {eng},
NUMBER = {MPI-I-93-148},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1993},
DATE = {1993},
ABSTRACT = {We show non-trivial lower bounds for several prefix problems in the CRCW PRAM model. Our main result is an $\Omega(\alpha(n))$ lower bound for the chaining problem, matching the previously known upper bound. We give a reduction to show that the same lower bound applies to a parenthesis matching problem, again matching the previously known upper bound. We also give reductions to show that similar lower bounds hold for the prefix maxima and the range maxima problems.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Rüb, Christine
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Lower bounds for merging on the hypercube :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B768-4
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1993
%P 10 p.
%X We show non-trivial lower bounds for several prefix problems in the
CRCW PRAM model. Our main result is an $\Omega(\alpha(n))$ lower bound
for the chaining problem, matching the previously known upper bound.
We give a reduction to show that the same lower bound applies to a
parenthesis matching problem, again matching the previously known
upper bound. We also give reductions to show that similar lower
bounds hold for the prefix maxima and the range maxima problems.
%B Research Report / Max-Planck-Institut für Informatik
Tight bounds for some problems in computational geometry: the complete sub-logarithmic parallel time range
S. Sen
Technical Report, 1993
S. Sen
Technical Report, 1993
Abstract
There are a number of fundamental problems in computational geometry
for which work-optimal algorithms exist which have a parallel
running time of $O(\log n)$ in the PRAM model. These include
problems like two and three dimensional
convex-hulls, trapezoidal decomposition, arrangement construction, dominance
among others. Further improvements in running time to sub-logarithmic
range were not considered likely
because of their close relationship to sorting for which
an $\Omega (\log n/\log\log n )$ is known to
hold even with a polynomial number of processors.
However, with recent progress in padded-sort algorithms, which circumvents
the conventional lower-bounds, there arises a natural question about
speeding up algorithms for the above-mentioned geometric
problems (with appropriate modifications in the output specification).
We present randomized parallel algorithms for some fundamental
problems like convex-hulls and trapezoidal decomposition which execute in time
$O( \log n/\log k)$ in an $nk$ ($k > 1$) processor CRCW PRAM. Our algorithms do
not make any assumptions about the input distribution.
Our work relies heavily on results on padded-sorting and some earlier
results of Reif and Sen [28, 27]. We further prove a matching
lower-bound for these problems in the bounded degree decision tree.
Export
BibTeX
@techreport{MPI-I-93-129,
TITLE = {Tight bounds for some problems in computational geometry: the complete sub-logarithmic parallel time range},
AUTHOR = {Sen, Sandeep},
LANGUAGE = {eng},
NUMBER = {MPI-I-93-129},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1993},
DATE = {1993},
ABSTRACT = {There are a number of fundamental problems in computational geometry for which work-optimal algorithms exist which have a parallel running time of $O(\log n)$ in the PRAM model. These include problems like two and three dimensional convex-hulls, trapezoidal decomposition, arrangement construction, dominance among others. Further improvements in running time to sub-logarithmic range were not considered likely because of their close relationship to sorting for which an $\Omega (\log n/\log\log n )$ is known to hold even with a polynomial number of processors. However, with recent progress in padded-sort algorithms, which circumvents the conventional lower-bounds, there arises a natural question about speeding up algorithms for the above-mentioned geometric problems (with appropriate modifications in the output specification). We present randomized parallel algorithms for some fundamental problems like convex-hulls and trapezoidal decomposition which execute in time $O( \log n/\log k)$ in an $nk$ ($k > 1$) processor CRCW PRAM. Our algorithms do not make any assumptions about the input distribution. Our work relies heavily on results on padded-sorting and some earlier results of Reif and Sen [28, 27]. We further prove a matching lower-bound for these problems in the bounded degree decision tree.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Sen, Sandeep
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Tight bounds for some problems in computational geometry: the complete sub-logarithmic parallel time range :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B750-7
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1993
%P 12 p.
%X There are a number of fundamental problems in computational geometry
for which work-optimal algorithms exist which have a parallel
running time of $O(\log n)$ in the PRAM model. These include
problems like two and three dimensional
convex-hulls, trapezoidal decomposition, arrangement construction, dominance
among others. Further improvements in running time to sub-logarithmic
range were not considered likely
because of their close relationship to sorting for which
an $\Omega (\log n/\log\log n )$ is known to
hold even with a polynomial number of processors.
However, with recent progress in padded-sort algorithms, which circumvents
the conventional lower-bounds, there arises a natural question about
speeding up algorithms for the above-mentioned geometric
problems (with appropriate modifications in the output specification).
We present randomized parallel algorithms for some fundamental
problems like convex-hulls and trapezoidal decomposition which execute in time
$O( \log n/\log k)$ in an $nk$ ($k > 1$) processor CRCW PRAM. Our algorithms do
not make any assumptions about the input distribution.
Our work relies heavily on results on padded-sorting and some earlier
results of Reif and Sen [28, 27]. We further prove a matching
lower-bound for these problems in the bounded degree decision tree.
%B Research Report / Max-Planck-Institut für Informatik
Deterministic 1-k routing on meshes with applications to worm-hole routing
J. Sibeyn and M. Kaufmann
Technical Report, 1993
J. Sibeyn and M. Kaufmann
Technical Report, 1993
Abstract
In $1$-$k$ routing each of the $n^2$ processing units of an $n
\times n$ mesh connected computer initially holds $1$ packet which
must be routed such that any processor is the destination of at most
$k$ packets. This problem reflects practical desire for routing
better than the popular routing of permutations. $1$-$k$ routing
also has implications for hot-potato worm-hole routing, which is of
great importance for real world systems.
We present a near-optimal deterministic algorithm running in
$\sqrt{k} \cdot n / 2 + \go{n}$ steps. We give a second
algorithm with slightly worse routing time but working queue size
three. Applying this algorithm considerably reduces the routing
time of hot-potato worm-hole routing.
Non-trivial extensions are given to the general $l$-$k$ routing
problem and for routing on higher dimensional meshes. Finally we
show that $k$-$k$ routing can be performed in $\go{k \cdot n}$ steps
with working queue size four. Hereby the hot-potato worm-hole routing
problem can be solved in $\go{k^{3/2} \cdot n}$ steps.
Export
BibTeX
@techreport{SibeynKaufmann93,
TITLE = {Deterministic 1-k routing on meshes with applications to worm-hole routing},
AUTHOR = {Sibeyn, Jop and Kaufmann, Michael},
LANGUAGE = {eng},
URL = {http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/93-163},
NUMBER = {MPI-I-93-163},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1993},
DATE = {1993},
ABSTRACT = {In $1$-$k$ routing each of the $n^2$ processing units of an $n \times n$ mesh connected computer initially holds $1$ packet which must be routed such that any processor is the destination of at most $k$ packets. This problem reflects practical desire for routing better than the popular routing of permutations. $1$-$k$ routing also has implications for hot-potato worm-hole routing, which is of great importance for real world systems. We present a near-optimal deterministic algorithm running in $\sqrt{k} \cdot n / 2 + \go{n}$ steps. We give a second algorithm with slightly worse routing time but working queue size three. Applying this algorithm considerably reduces the routing time of hot-potato worm-hole routing. Non-trivial extensions are given to the general $l$-$k$ routing problem and for routing on higher dimensional meshes. Finally we show that $k$-$k$ routing can be performed in $\go{k \cdot n}$ steps with working queue size four. Hereby the hot-potato worm-hole routing problem can be solved in $\go{k^{3/2} \cdot n}$ steps.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Sibeyn, Jop
%A Kaufmann, Michael
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Deterministic 1-k routing on meshes with applications to worm-hole routing :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B431-7
%U http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/93-163
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1993
%P 13 p.
%X In $1$-$k$ routing each of the $n^2$ processing units of an $n
\times n$ mesh connected computer initially holds $1$ packet which
must be routed such that any processor is the destination of at most
$k$ packets. This problem reflects practical desire for routing
better than the popular routing of permutations. $1$-$k$ routing
also has implications for hot-potato worm-hole routing, which is of
great importance for real world systems.
We present a near-optimal deterministic algorithm running in
$\sqrt{k} \cdot n / 2 + \go{n}$ steps. We give a second
algorithm with slightly worse routing time but working queue size
three. Applying this algorithm considerably reduces the routing
time of hot-potato worm-hole routing.
Non-trivial extensions are given to the general $l$-$k$ routing
problem and for routing on higher dimensional meshes. Finally we
show that $k$-$k$ routing can be performed in $\go{k \cdot n}$ steps
with working queue size four. Hereby the hot-potato worm-hole routing
problem can be solved in $\go{k^{3/2} \cdot n}$ steps.
%B Research Report / Max-Planck-Institut für Informatik
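For intuition about bounds of this form (our own illustration, not code from the report): on a bidirectional ring of $n$ PUs, a packet travelling from $u$ to $v$ needs at least $\min(d, n-d)$ steps with $d = (v-u) \bmod n$, so any routing needs at least the maximum of this distance over all packets, which is at most $\lfloor n/2 \rfloor$.

```python
# Our illustration of the distance lower bound on a bidirectional ring
# (not code from the report): packet i starts at PU i and must reach
# dests[i]; it needs at least min(d, n - d) steps, d = (dests[i] - i) mod n.

def ring_distance_bound(n, dests):
    """Maximum shortest-arc distance any packet must travel."""
    def dist(u, v):
        d = (v - u) % n
        return min(d, n - d)
    return max(dist(u, dests[u]) for u in range(n))

n = 8
# Worst case: every packet crosses to the diametrically opposite PU.
assert ring_distance_bound(n, [(i + n // 2) % n for i in range(n)]) == n // 2
# Identity permutation: nothing moves.
assert ring_distance_bound(n, list(range(n))) == 0
```

The algorithms in the report are compared against distance and bisection arguments of this kind; the code only computes the trivial distance term.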
Deterministic Permutation Routing on Meshes
J. Sibeyn, M. Kaufmann and B. S. Chlebus
Technical Report, 1993
J. Sibeyn, M. Kaufmann and B. S. Chlebus
Technical Report, 1993
Export
BibTeX
@techreport{Sibeyn-et-al_93Rep.Passau,
TITLE = {Deterministic Permutation Routing on Meshes},
AUTHOR = {Sibeyn, Jop and Kaufmann, Michael and Chlebus, Bogdan S.},
LANGUAGE = {eng},
NUMBER = {MIP-9301},
INSTITUTION = {Universit{\"a}t Passau},
ADDRESS = {Passau, Germany},
YEAR = {1993},
DATE = {1993},
TYPE = {Universität Passau, Technische Berichte},
VOLUME = {MIP-9301},
}
Endnote
%0 Report
%A Sibeyn, Jop
%A Kaufmann, Michael
%A Chlebus, Bogdan S.
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
External Organizations
External Organizations
%T Deterministic Permutation Routing on Meshes :
%G eng
%U http://hdl.handle.net/21.11116/0000-000E-2040-5
%Y Universität Passau
%C Passau, Germany
%D 1993
%P 13
%B Universität Passau, Technische Berichte
%N MIP-9301
Routing and sorting on circular arrays
J. F. Sibeyn
Technical Report, 1993
J. F. Sibeyn
Technical Report, 1993
Abstract
We analyze routing and sorting problems on circular processor arrays
with bidirectional connections. We assume that initially and finally
each PU holds $k \geq 1$ packets. On linear processor arrays the
routing and sorting problem can easily be solved for any $k$, but
for the circular array it is not obvious how to exploit the
wrap-around connection.
We show that on an array with $n$ PUs $k$-$k$ routing, $k \geq 4$,
can be performed optimally in $k \cdot n / 4 + \sqrt{n}$ steps by a
deterministic algorithm. For $k = 1$, the routing problem is
trivial. For $k = 2$ and $k = 3$, we prove lower bounds and show
that these can (almost) be matched. A very simple algorithm has
good performance for dynamic routing problems.
For the $k$-$k$ sorting problem we use a powerful algorithm which
can also be used for sorting on higher-dimensional tori and meshes.
For the ring the routing time is $\max\{n, k \cdot n / 4\} + {\cal
O}((k \cdot n)^{2/3})$ steps. For large $k$ we take the computation
time into account and show that for $n = o(\log k)$ optimal speed-up
can be achieved. For $k < 4$, we give specific results, which
come close to the routing times.
Export
BibTeX
@techreport{Sibeyn93,
TITLE = {Routing and sorting on circular arrays},
AUTHOR = {Sibeyn, Jop Frederic},
LANGUAGE = {eng},
NUMBER = {MPI-I-93-138},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1993},
DATE = {1993},
ABSTRACT = {We analyze routing and sorting problems on circular processor arrays with bidirectional connections. We assume that initially and finally each PU holds $k \geq 1$ packets. On linear processor arrays the routing and sorting problem can easily be solved for any $k$, but for the circular array it is not obvious how to exploit the wrap-around connection. We show that on an array with $n$ PUs $k$-$k$ routing, $k \geq 4$, can be performed optimally in $k \cdot n / 4 + \sqrt{n}$ steps by a deterministic algorithm. For $k = 1$, the routing problem is trivial. For $k = 2$ and $k = 3$, we prove lower bounds and show that these can (almost) be matched. A very simple algorithm has good performance for dynamic routing problems. For the $k$-$k$ sorting problem we use a powerful algorithm which can also be used for sorting on higher-dimensional tori and meshes. For the ring the routing time is $\max\{n, k \cdot n / 4\} + {\cal O}((k \cdot n)^{2/3})$ steps. For large $k$ we take the computation time into account and show that for $n = o(\log k)$ optimal speed-up can be achieved. For $k < 4$, we give specific results, which come close to the routing times.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Sibeyn, Jop Frederic
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Routing and sorting on circular arrays :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B428-E
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1993
%P 20 p.
%X We analyze routing and sorting problems on circular processor arrays
with bidirectional connections. We assume that initially and finally
each PU holds $k \geq 1$ packets. On linear processor arrays the
routing and sorting problem can easily be solved for any $k$, but
for the circular array it is not obvious how to exploit the
wrap-around connection.
We show that on an array with $n$ PUs $k$-$k$ routing, $k \geq 4$,
can be performed optimally in $k \cdot n / 4 + \sqrt{n}$ steps by a
deterministic algorithm. For $k = 1$, the routing problem is
trivial. For $k = 2$ and $k = 3$, we prove lower bounds and show
that these can (almost) be matched. A very simple algorithm has
good performance for dynamic routing problems.
For the $k$-$k$ sorting problem we use a powerful algorithm which
can also be used for sorting on higher-dimensional tori and meshes.
For the ring the routing time is $\max\{n, k \cdot n / 4\} + {\cal
O}((k \cdot n)^{2/3})$ steps. For large $k$ we take the computation
time into account and show that for $n = o(\log k)$ optimal speed-up
can be achieved. For $k < 4$, we give specific results, which
come close to the routing times.
%B Research Report / Max-Planck-Institut für Informatik
An O(n log n) algorithm for finding a k-point subset with minimal L∞-diameter
M. Smid
Technical Report, 1993
M. Smid
Technical Report, 1993
Export
BibTeX
@techreport{Smid93,
TITLE = {An $O(n\log n)$ algorithm for finding a $k$-point subset with minimal $L_{\infty}$-diameter},
AUTHOR = {Smid, Michiel},
LANGUAGE = {eng},
NUMBER = {MPI-I-93-116},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1993},
DATE = {1993},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Smid, Michiel
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T An O(n log n) algorithm for finding a k-point subset with minimal L∞-diameter :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B3F8-4
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1993
%P 17 p.
%B Research Report / Max-Planck-Institut für Informatik
Static and dynamic algorithms for k-point clustering problems
M. Smid, A. Datta, H.-P. Lenhof and C. Schwarz
Technical Report, 1993
M. Smid, A. Datta, H.-P. Lenhof and C. Schwarz
Technical Report, 1993
Abstract
Let $S$ be a set of $n$ points in $d$-space and let
$1 \leq k \leq n$ be an integer. A unified approach is given
for solving the problem of finding a subset of $S$ of size $k$
that minimizes some closeness measure, such as the diameter,
perimeter or the circumradius. Moreover, data structures are
given that maintain such a subset under insertions and
deletions of points.
Export
BibTeX
@techreport{MPI-I-93-108,
TITLE = {Static and dynamic algorithms for k-point clustering problems},
AUTHOR = {Smid, Michiel and Datta, Amitava and Lenhof, Hans-Peter and Schwarz, Christian},
LANGUAGE = {eng},
NUMBER = {MPI-I-93-108},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1993},
DATE = {1993},
ABSTRACT = {Let $S$ be a set of $n$ points in $d$-space and let $1 \leq k \leq n$ be an integer. A unified approach is given for solving the problem of finding a subset of $S$ of size $k$ that minimizes some closeness measure, such as the diameter, perimeter or the circumradius. Moreover, data structures are given that maintain such a subset under insertions and deletions of points.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Smid, Michiel
%A Datta, Amitava
%A Lenhof, Hans-Peter
%A Schwarz, Christian
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Static and dynamic algorithms for k-point clustering problems :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B741-9
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1993
%P 20 p.
%X Let $S$ be a set of $n$ points in $d$-space and let
$1 \leq k \leq n$ be an integer. A unified approach is given
for solving the problem of finding a subset of $S$ of size $k$
that minimizes some closeness measure, such as the diameter,
perimeter or the circumradius. Moreover, data structures are
given that maintain such a subset under insertions and
deletions of points.
%B Research Report / Max-Planck-Institut für Informatik
1992
The influence of lookahead in competitive on-line algorithms
S. Albers
Technical Report, 1992
S. Albers
Technical Report, 1992
Abstract
In the competitive analysis of on-line problems, an on-line algorithm is
presented with a sequence of requests to be served.
The on-line algorithm
must satisfy each request without knowledge of any future requests.
We consider the question of lookahead in on-line problems: What
improvement can be achieved in terms of competitiveness, if the
on-line algorithm sees not only the present request but also some
future requests? We introduce two different models of lookahead and
study the ``classical'' on-line
problems such as paging, caching, the $k$-server problem,
the list update
problem and metrical task systems using these two models.
We prove that in the paging problem and the list update problem,
lookahead can significantly reduce the competitive factors of
on-line algorithms without lookahead. In addition
to lower bounds we present a number of on-line algorithms with
lookahead for these two problems. However, we also show that
in more general
on-line problems such as caching, the $k$-server problem and
metrical task systems
lookahead cannot improve competitive factors of deterministic
on-line algorithms without lookahead.
Export
BibTeX
@techreport{Albers92,
TITLE = {The influence of lookahead in competitive on-line algorithms},
AUTHOR = {Albers, Susanne},
LANGUAGE = {eng},
NUMBER = {MPI-I-92-143},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1992},
DATE = {1992},
ABSTRACT = {In the competitive analysis of on-line problems, an on-line algorithm is presented with a sequence of requests to be served. The on-line algorithm must satisfy each request without knowledge of any future requests. We consider the question of lookahead in on-line problems: What improvement can be achieved in terms of competitiveness, if the on-line algorithm sees not only the present request but also some future requests? We introduce two different models of lookahead and study the ``classical'' on-line problems such as paging, caching, the $k$-server problem, the list update problem and metrical task systems using these two models. We prove that in the paging problem and the list update problem, lookahead can significantly reduce the competitive factors of on-line algorithms without lookahead. In addition to lower bounds we present a number of on-line algorithms with lookahead for these two problems. However, we also show that in more general on-line problems such as caching, the $k$-server problem and metrical task systems lookahead cannot improve competitive factors of deterministic on-line algorithms without lookahead.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Albers, Susanne
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T The influence of lookahead in competitive on-line algorithms :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B70E-E
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1992
%P 56 p.
%X In the competitive analysis of on-line problems, an on-line algorithm is
presented with a sequence of requests to be served.
The on-line algorithm
must satisfy each request without knowledge of any future requests.
We consider the question of lookahead in on-line problems: What
improvement can be achieved in terms of competitiveness, if the
on-line algorithm sees not only the present request but also some
future requests? We introduce two different models of lookahead and
study the ``classical'' on-line
problems such as paging, caching, the $k$-server problem,
the list update
problem and metrical task systems using these two models.
We prove that in the paging problem and the list update problem,
lookahead can significantly reduce the competitive factors of
on-line algorithms without lookahead. In addition
to lower bounds we present a number of on-line algorithms with
lookahead for these two problems. However, we also show that
in more general
on-line problems such as caching, the $k$-server problem and
metrical task systems
lookahead cannot improve competitive factors of deterministic
on-line algorithms without lookahead.
%B Research Report / Max-Planck-Institut für Informatik
A Method for Obtaining Randomized Algorithms with Small Tail Probabilities
H. Alt, L. Guibas, K. Mehlhorn, R. Karp and A. Widgerson
Technical Report, 1992
H. Alt, L. Guibas, K. Mehlhorn, R. Karp and A. Widgerson
Technical Report, 1992
Abstract
We study strategies for converting randomized algorithms of the Las Vegas type into randomized algorithms with small tail probabilities.
Export
BibTeX
@techreport{AltGuibasMehlhornKarpWidgerson92,
TITLE = {A Method for Obtaining Randomized Algorithms with Small Tail Probabilities},
AUTHOR = {Alt, Helmut and Guibas, L. and Mehlhorn, Kurt and Karp, R. and Widgerson, A.},
LANGUAGE = {eng},
NUMBER = {MPI-I-92-110},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1992},
DATE = {1992},
ABSTRACT = {We study strategies for converting randomized algorithms of the Las Vegas type into randomized algorithms with small tail probabilities.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Alt, Helmut
%A Guibas, L.
%A Mehlhorn, Kurt
%A Karp, R.
%A Widgerson, A.
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T A Method for Obtaining Randomized Algorithms with Small Tail Probabilities :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B6EB-4
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1992
%P 5 p.
%X We study strategies for converting randomized algorithms of the Las Vegas type into randomized algorithms with small tail probabilities.
%B Research Report / Max-Planck-Institut für Informatik
Dynamic point location in general subdivisions
H. Baumgarten, H. Jung and K. Mehlhorn
Technical Report, 1992
H. Baumgarten, H. Jung and K. Mehlhorn
Technical Report, 1992
Abstract
The {\em dynamic planar point location problem} is the
task of maintaining a dynamic set $S$ of $n$ non-intersecting, except
possibly at endpoints, line segments in the plane
under the following operations:
\begin{itemize}
\item Locate($q$: point): Report the
segment immediately above $q$, i.e., the first segment
intersected by an upward vertical ray starting at
$q$;
\item Insert($s$: segment): Add segment $s$ to the collection
$S$ of segments;
\item Delete($s$: segment): Remove segment $s$ from the
collection $S$ of segments.
\end{itemize}
We present a solution which requires space $O(n)$,
has query and insertion time and deletion time. A query time
was previously only known for monotone subdivisions and horizontal segments and required non-linear space.
Export
BibTeX
@techreport{BaumgartenJungMehlhorn92,
TITLE = {Dynamic point location in general subdivisions},
AUTHOR = {Baumgarten, Hanna and Jung, Hermann and Mehlhorn, Kurt},
LANGUAGE = {eng},
NUMBER = {MPI-I-92-126},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1992},
DATE = {1992},
ABSTRACT = {The {\em dynamic planar point location problem} is the task of maintaining a dynamic set $S$ of $n$ non-intersecting, except possibly at endpoints, line segments in the plane under the following operations: \begin{itemize} \item Locate($q$: point): Report the segment immediately above $q$, i.e., the first segment intersected by an upward vertical ray starting at $q$; \item Insert($s$: segment): Add segment $s$ to the collection $S$ of segments; \item Delete($s$: segment): Remove segment $s$ from the collection $S$ of segments. \end{itemize} We present a solution which requires space $O(n)$, has query and insertion time and deletion time. A query time was previously only known for monotone subdivisions and horizontal segments and required non-linear space.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Baumgarten, Hanna
%A Jung, Hermann
%A Mehlhorn, Kurt
%+ External Organizations
External Organizations
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Dynamic point location in general subdivisions :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B703-4
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1992
%P 30 p.
%X The {\em dynamic planar point location problem} is the
task of maintaining a dynamic set $S$ of $n$ non-intersecting, except
possibly at endpoints, line segments in the plane
under the following operations:
\begin{itemize}
\item Locate($q$: point): Report the
segment immediately above $q$, i.e., the first segment
intersected by an upward vertical ray starting at
$q$;
\item Insert($s$: segment): Add segment $s$ to the collection
$S$ of segments;
\item Delete($s$: segment): Remove segment $s$ from the
collection $S$ of segments.
\end{itemize}
We present a solution which requires space $O(n)$,
has query and insertion time and deletion time. A query time
was previously only known for monotone subdivisions and horizontal segments and required non-linear space.
%B Research Report / Max-Planck-Institut für Informatik
Four results on randomized incremental constructions
K. L. Clarkson and K. Mehlhorn
Technical Report, 1992
K. L. Clarkson and K. Mehlhorn
Technical Report, 1992
Abstract
We prove four results on randomized incremental constructions (RICs):
\begin{itemize}
\item
an analysis of the expected behavior under insertions and deletions,
\item
a fully dynamic data structure for convex hull maintenance in
arbitrary dimensions,
\item
a tail estimate for the space complexity of RICs,
\item
a lower bound on the complexity of a game related to RICs.
\end{itemize}
Export
BibTeX
@techreport{ClarksonMehlhornSeidel92,
TITLE = {Four results on randomized incremental constructions},
AUTHOR = {Clarkson, K. L. and Mehlhorn, Kurt},
LANGUAGE = {eng},
NUMBER = {MPI-I-92-112},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1992},
DATE = {1992},
ABSTRACT = {We prove four results on randomized incremental constructions (RICs): \begin{itemize} \item an analysis of the expected behavior under insertions and deletions, \item a fully dynamic data structure for convex hull maintenance in arbitrary dimensions, \item a tail estimate for the space complexity of RICs, \item a lower bound on the complexity of a game related to RICs. \end{itemize}},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Clarkson, K. L.
%A Mehlhorn, Kurt
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Four results on randomized incremental constructions :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B6ED-F
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1992
%P 21 p.
%X We prove four results on randomized incremental constructions (RICs):
\begin{itemize}
\item
an analysis of the expected behavior under insertions and deletions,
\item
a fully dynamic data structure for convex hull maintenance in
arbitrary dimensions,
\item
a tail estimate for the space complexity of RICs,
\item
a lower bound on the complexity of a game related to RICs.
\end{itemize}
%B Research Report / Max-Planck-Institut für Informatik
A new lower bound technique for decision trees
R. Fleischer
Technical Report, 1992a
R. Fleischer
Technical Report, 1992a
Abstract
In this paper, we prove two general lower bounds for algebraic
decision trees which test membership in a set $S\subseteq\Re^n$ which is
defined by linear inequalities.
Let $rank(S)$ be
the maximal dimension of a linear subspace contained in the closure of
$S$.
% \endgraf
First we prove that any decision tree which uses multilinear
functions (i.e.~arbitrary
products of linear functions) must have depth
at least $n-rank(S)$.
This solves an open question raised by A.C.~Yao
and can be used to show
that multilinear functions are not really more powerful
than simple comparisons between the input variables when
computing the largest $k$ elements of $n$ given numbers.
Yao could only prove this result in the special case when
products of at most two linear functions are used.
Our proof is based on a dimension argument.
It seems to be the first time that such an approach
yields good lower bounds for nonlinear decision trees.
% \endgraf
Surprisingly, we can use the same methods to give an
alternative proof for Rabin's fundamental Theorem,
namely that the depth of any decision tree using arbitrary
analytic functions is at least $n-rank(S)$.
Since we show that Rabin's original proof is incorrect,
our proof of Rabin's Theorem is not only the first correct one
but also generalizes the Theorem to a wider class of functions.
Export
BibTeX
@techreport{Fleischer_Report1992,
TITLE = {A new lower bound technique for decision trees},
AUTHOR = {Fleischer, Rudolf},
LANGUAGE = {eng},
NUMBER = {MPI-I-92-125},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1992},
DATE = {1992},
ABSTRACT = {In this paper, we prove two general lower bounds for algebraic decision trees which test membership in a set $S\subseteq\Re^n$ which is defined by linear inequalities. Let $rank(S)$ be the maximal dimension of a linear subspace contained in the closure of $S$. % \endgraf First we prove that any decision tree which uses multilinear functions (i.e.~arbitrary products of linear functions) must have depth at least $n-rank(S)$. This solves an open question raised by A.C.~Yao and can be used to show that multilinear functions are not really more powerful than simple comparisons between the input variables when computing the largest $k$ elements of $n$ given numbers. Yao could only prove this result in the special case when products of at most two linear functions are used. Our proof is based on a dimension argument. It seems to be the first time that such an approach yields good lower bounds for nonlinear decision trees. % \endgraf Surprisingly, we can use the same methods to give an alternative proof for Rabin's fundamental Theorem, namely that the depth of any decision tree using arbitrary analytic functions is at least $n-rank(S)$. Since we show that Rabin's original proof is incorrect, our proof of Rabin's Theorem is not only the first correct one but also generalizes the Theorem to a wider class of functions.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Fleischer, Rudolf
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T A new lower bound technique for decision trees :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B6FB-F
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1992
%P 21 p.
%X In this paper, we prove two general lower bounds for algebraic
decision trees which test membership in a set $S\subseteq\Re^n$ which is
defined by linear inequalities.
Let $rank(S)$ be
the maximal dimension of a linear subspace contained in the closure of
$S$.
% \endgraf
First we prove that any decision tree which uses multilinear
functions (i.e.~arbitrary
products of linear functions) must have depth
at least $n-rank(S)$.
This solves an open question raised by A.C.~Yao
and can be used to show
that multilinear functions are not really more powerful
than simple comparisons between the input variables when
computing the largest $k$ elements of $n$ given numbers.
Yao could only prove this result in the special case when
products of at most two linear functions are used.
Our proof is based on a dimension argument.
It seems to be the first time that such an approach
yields good lower bounds for nonlinear decision trees.
% \endgraf
Surprisingly, we can use the same methods to give an
alternative proof for Rabin's fundamental Theorem,
namely that the depth of any decision tree using arbitrary
analytic functions is at least $n-rank(S)$.
Since we show that Rabin's original proof is incorrect,
our proof of Rabin's Theorem is not only the first correct one
but also generalizes the Theorem to a wider class of functions.
%B Research Report / Max-Planck-Institut für Informatik
A simple balanced search tree with O(1) worst-case update time
R. Fleischer
Technical Report, 1992b
R. Fleischer
Technical Report, 1992b
Abstract
In this paper we show how a slight modification of $(a,b)$-trees allows us to perform member and
neighbor queries in $O(\log n)$ time and updates in $O(1)$ worst-case time (once the position of the inserted or
deleted key is known).
Our data structure is quite natural and much simpler than previous worst-case optimal solutions.
It is based on two techniques:
1) {\em bucketing}, i.e.~storing an ordered list of $2\log n$ keys in each leaf of an $(a,b)$-tree, and \quad
2) {\em lazy splitting}, i.e.~postponing necessary splits of big nodes until we have time to handle them.
It can also be used as a finger tree with $O(\log^*n)$ worst-case update time.
Export
BibTeX
@techreport{Fleischer92a,
TITLE = {A simple balanced search tree with O(1) worst-case update time},
AUTHOR = {Fleischer, Rudolf},
LANGUAGE = {eng},
NUMBER = {MPI-I-92-101},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1992},
DATE = {1992},
ABSTRACT = {In this paper we show how a slight modification of $(a,b)$-trees allows us to perform member and neighbor queries in $O(\log n)$ time and updates in $O(1)$ worst-case time (once the position of the inserted or deleted key is known). Our data structure is quite natural and much simpler than previous worst-case optimal solutions. It is based on two techniques: 1) {\em bucketing}, i.e.~storing an ordered list of $2\log n$ keys in each leaf of an $(a,b)$-tree, and \quad 2) {\em lazy splitting}, i.e.~postponing necessary splits of big nodes until we have time to handle them. It can also be used as a finger tree with $O(\log^*n)$ worst-case update time.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Fleischer, Rudolf
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T A simple balanced search tree with O(1) worst-case update time :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B1A6-A
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1992
%P 10 p.
%X In this paper we show how a slight modification of $(a,b)$-trees allows us to perform member and
neighbor queries in $O(\log n)$ time and updates in $O(1)$ worst-case time (once the position of the inserted or
deleted key is known).
Our data structure is quite natural and much simpler than previous worst-case optimal solutions.
It is based on two techniques:
1) {\em bucketing}, i.e.~storing an ordered list of $2\log n$ keys in each leaf of an $(a,b)$-tree, and \quad
2) {\em lazy splitting}, i.e.~postponing necessary splits of big nodes until we have time to handle them.
It can also be used as a finger tree with $O(\log^*n)$ worst-case update time.
%B Research Report / Max-Planck-Institut für Informatik
Simple randomized algorithms for closest pair problems
M. J. Golin, R. Raman, C. Schwarz and M. Smid
Technical Report, 1992
M. J. Golin, R. Raman, C. Schwarz and M. Smid
Technical Report, 1992
Abstract
We present a conceptually simple, randomized incremental algorithm
for finding the closest pair in a set of $n$ points in
$D$-dimensional space, where $D \geq 2$ is a fixed constant.
Using dynamic perfect hashing, the algorithm runs in $O(n)$
expected time.
In addition to being quick on the average, this algorithm
is reliable: we show that it runs in $O(n \log n / \log\log n)$
time with high probability.
Export
BibTeX
@techreport{MPI-I-92-155,
TITLE = {Simple randomized algorithms for closest pair problems},
AUTHOR = {Golin, Mordecai J. and Raman, Rajeev and Schwarz, Christian and Smid, Michiel},
LANGUAGE = {eng},
NUMBER = {MPI-I-92-155},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1992},
DATE = {1992},
ABSTRACT = {We present a conceptually simple, randomized incremental algorithm for finding the closest pair in a set of $n$ points in $D$-dimensional space, where $D \geq 2$ is a fixed constant. Using dynamic perfect hashing, the algorithm runs in $O(n)$ expected time. In addition to being quick on the average, this algorithm is reliable: we show that it runs in $O(n \log n / \log\log n)$ time with high probability.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Golin, Mordecai J.
%A Raman, Rajeev
%A Schwarz, Christian
%A Smid, Michiel
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Simple randomized algorithms for closest pair problems :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B723-B
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1992
%P 14 p.
%X We present a conceptually simple, randomized incremental algorithm
for finding the closest pair in a set of $n$ points in
$D$-dimensional space, where $D \geq 2$ is a fixed constant.
Using dynamic perfect hashing, the algorithm runs in $O(n)$
expected time.
In addition to being quick on the average, this algorithm
is reliable: we show that it runs in $O(n \log n / \log\log n)$
time with high probability.
%B Research Report
Circuits and multi-party protocols
V. Grolmusz
Technical Report, 1992a
V. Grolmusz
Technical Report, 1992a
Abstract
We present a multi--party protocol for computing certain functions of
an $n\times k$ $0-1$ matrix $A$. The protocol is for $k$ players, where
player $i$ knows every column of $A$, except column $i$. {\it Babai,
Nisan}
and {\it Szegedy} proved that computing $GIP(A)$ needs $\Omega (n/4^k)$
bits of communication. We show that the players can count those rows of matrix $A$
whose sum is divisible by $m$ by communicating only $O(mk\log n)$ bits,
while counting the rows with sum congruent to 1 $\pmod m$ needs
$\Omega (n/4^k)$ bits of communication (with an odd $m$ and $k\equiv m\pmod
{2m}$). $\Omega(n/4^k)$ communication is also needed to count the rows
of $A$ with sum in any congruence class
modulo an {\it even} $m$.
The exponential gap in communication complexities allows us to prove
exponential lower bounds for the sizes of some bounded--depth circuits with
MAJORITY, SYMMETRIC and MOD$_m$ gates, where $m$ is an odd
-- prime or composite -- number.
Export
BibTeX
@techreport{Grolmusz92a,
TITLE = {Circuits and multi-party protocols},
AUTHOR = {Grolmusz, Vince},
LANGUAGE = {eng},
NUMBER = {MPI-I-92-104},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1992},
DATE = {1992},
ABSTRACT = {We present a multi--party protocol for computing certain functions of an $n\times k$ $0-1$ matrix $A$. The protocol is for $k$ players, where player $i$ knows every column of $A$, except column $i$. {\it Babai, Nisan} and {\it Szegedy} proved that computing $GIP(A)$ needs $\Omega (n/4^k)$ bits of communication. We show that the players can count those rows of matrix $A$ whose sum is divisible by $m$ by communicating only $O(mk\log n)$ bits, while counting the rows with sum congruent to 1 $\pmod m$ needs $\Omega (n/4^k)$ bits of communication (with an odd $m$ and $k\equiv m\pmod {2m}$). $\Omega(n/4^k)$ communication is also needed to count the rows of $A$ with sum in any congruence class modulo an {\it even} $m$. The exponential gap in communication complexities allows us to prove exponential lower bounds for the sizes of some bounded--depth circuits with MAJORITY, SYMMETRIC and MOD$_m$ gates, where $m$ is an odd -- prime or composite -- number.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Grolmusz, Vince
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Circuits and multi-party protocols :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B6E7-C
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1992
%P 16 p.
%X We present a multi--party protocol for computing certain functions of
an $n\times k$ $0-1$ matrix $A$. The protocol is for $k$ players, where
player $i$ knows every column of $A$, except column $i$. {\it Babai,
Nisan} and {\it Szegedy} proved that computing $GIP(A)$ requires
$\Omega (n/4^k)$ bits of communication. We show that the players can
count the rows of matrix $A$ whose sum is divisible by $m$ by
communicating only $O(mk\log n)$ bits, while counting the rows with sum
congruent to 1 $\pmod m$ needs $\Omega (n/4^k)$ bits of communication
(for an odd $m$ and $k\equiv m\pmod{2m}$). $\Omega(n/4^k)$ bits of
communication are also needed to count the rows of $A$ with sum in any
congruence class modulo an {\it even} $m$.
The exponential gap in communication complexities allows us to prove
exponential lower bounds on the sizes of some bounded--depth circuits
with MAJORITY, SYMMETRIC and MOD$_m$ gates, where $m$ is an odd
-- prime or composite -- number.
%B Research Report / Max-Planck-Institut für Informatik
Separating the communication complexities of MOD m and MOD p circuits
V. Grolmusz
Technical Report, 1992b
V. Grolmusz
Technical Report, 1992b
Abstract
We prove in this paper that it is much harder to evaluate depth--2,
size--$N$ circuits with MOD $m$ gates than with MOD $p$ gates by
$k$--party communication protocols: we show a $k$--party protocol
which communicates $O(1)$ bits to evaluate circuits with MOD $p$ gates,
while evaluating circuits with MOD $m$ gates needs $\Omega(N)$ bits,
where $p$ denotes a prime, and $m$ a composite number that is not a prime power.
Let us note that using $k$--party protocols with $k\geq p$ is crucial
here, since there are depth--2, size--$N$ circuits with MOD $p$ gates
with $p>k$, whose $k$--party evaluation needs $\Omega(N)$ bits. As a
corollary, for all $m$, we show a function, computable with a depth--2
circuit with MOD $m$ gates, but not with any depth--2 circuit with MOD
$p$ gates.
It is easy to see that the $k$--party protocols are not weaker than the
$k'$--party protocols, for $k'>k$. Our results imply that if there is a
prime $p$ between $k$ and $k'$: $k<p\leq k'$, then there exists a
function which can be computed by a $k'$--party
protocol with a constant number of communicated bits, while any
$k$--party protocol needs linearly many bits of communication. This
result gives a hierarchy theorem for multi--party protocols.
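For readers unfamiliar with the circuit class, a depth-2 circuit with MOD gates is easy to evaluate directly. The sketch below (illustrative Python) uses one common convention, assumed here, that a MOD$_m$ gate outputs 1 iff its input sum is divisible by $m$; the wiring is an arbitrary example and has nothing to do with the communication protocols themselves:

```python
def mod_gate(bits, m):
    # Assumed convention: output 1 iff the sum of the inputs is divisible by m.
    return int(sum(bits) % m == 0)

def depth2_mod(x, bottom_wires, m):
    # Bottom level: one MOD_m gate per wire set over input bits x;
    # top level: a single MOD_m gate over the bottom gates' outputs.
    firing = [mod_gate([x[i] for i in wires], m) for wires in bottom_wires]
    return mod_gate(firing, m)

x = [1, 1, 0, 1]
print(depth2_mod(x, [[0, 1], [2, 3], [0, 3]], 2))  # 1
```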
Export
BibTeX
@techreport{Grolmusz92b,
TITLE = {Separating the communication complexities of {MOD} m and {MOD} p circuits},
AUTHOR = {Grolmusz, Vince},
LANGUAGE = {eng},
NUMBER = {MPI-I-92-120},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1992},
DATE = {1992},
ABSTRACT = {We prove in this paper that it is much harder to evaluate depth--2, size--$N$ circuits with MOD $m$ gates than with MOD $p$ gates by $k$--party communication protocols: we show a $k$--party protocol which communicates $O(1)$ bits to evaluate circuits with MOD $p$ gates, while evaluating circuits with MOD $m$ gates needs $\Omega(N)$ bits, where $p$ denotes a prime, and $m$ a composite, non-prime power number. Let us note that using $k$--party protocols with $k\geq p$ is crucial here, since there are depth--2, size--$N$ circuits with MOD $p$ gates with $p>k$, whose $k$--party evaluation needs $\Omega(N)$ bits. As a corollary, for all $m$, we show a function, computable with a depth--2 circuit with MOD $m$ gates, but not with any depth--2 circuit with MOD $p$ gates. It is easy to see that the $k$--party protocols are not weaker than the $k'$--party protocols, for $k'>k$. Our results imply that if there is a prime $p$ between $k$ and $k'$: $k<p\leq k'$, then there exists a function which can be computed by a $k'$--party protocol with a constant number of communicated bits, while any $k$--party protocol needs linearly many bits of communication. This result gives a hierarchy theorem for multi--party protocols.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Grolmusz, Vince
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Separating the communication complexities of MOD m and MOD p circuits :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B6F3-0
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1992
%P 16 p.
%X We prove in this paper that it is much harder to evaluate depth--2,
size--$N$ circuits with MOD $m$ gates than with MOD $p$ gates by
$k$--party communication protocols: we show a $k$--party protocol
which communicates $O(1)$ bits to evaluate circuits with MOD $p$ gates,
while evaluating circuits with MOD $m$ gates needs $\Omega(N)$ bits,
where $p$ denotes a prime, and $m$ a composite, non-prime power number.
Let us note that using $k$--party protocols with $k\geq p$ is crucial
here, since there are depth--2, size--$N$ circuits with MOD $p$ gates
with $p>k$, whose $k$--party evaluation needs $\Omega(N)$ bits. As a
corollary, for all $m$, we show a function, computable with a depth--2
circuit with MOD $m$ gates, but not with any depth--2 circuit with MOD
$p$ gates.
It is easy to see that the $k$--party protocols are not weaker than the
$k'$--party protocols, for $k'>k$. Our results imply that if there is a
prime $p$ between $k$ and $k'$: $k<p\leq k'$, then there exists a
function which can be computed by a $k'$--party
protocol with a constant number of communicated bits, while any
$k$--party protocol needs linearly many bits of communication. This
result gives a hierarchy theorem for multi--party protocols.
%B Research Report / Max-Planck-Institut für Informatik
Fast integer merging on the EREW PRAM
T. Hagerup and M. Kutylowski
Technical Report, 1992
T. Hagerup and M. Kutylowski
Technical Report, 1992
Abstract
We investigate the complexity of merging sequences of small integers
on the EREW PRAM.
Our most surprising result is that two sorted sequences
of $n$ bits each can be merged in $O(\log\log n)$ time.
More generally, we describe an algorithm to merge two
sorted sequences of $n$ integers drawn from the set
$\{0,\ldots,m-1\}$ in $O(\log\log n+\log m)$ time
using an optimal number of processors.
No sublogarithmic merging algorithm for this model
of computation was previously known.
The algorithm not only produces the merged sequence, but also
computes the rank of each input element in the merged sequence.
On the other hand, we show a lower bound of
$\Omega(\log\min\{n,m\})$ on the time needed
to merge two sorted sequences of length $n$ each
with elements in the set $\{0,\ldots,m-1\}$,
implying that our merging algorithm is as fast
as possible for $m=(\log n)^{\Omega(1)}$.
If we impose an additional stability condition
requiring the ranks of each input sequence to
form an increasing sequence, then the time
complexity of the problem becomes $\Theta(\log n)$,
even for $m=2$.
Stable merging is thus harder than nonstable merging.
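The problem being solved in parallel is easy to state sequentially: merge two sorted sequences and report each input element's rank in the merged output. This two-pointer sketch (illustrative Python, not the EREW PRAM algorithm) fixes the input/output convention, breaking ties in favour of the first sequence:

```python
# Merge two sorted lists a, b; return the merged list plus the rank
# (position in the merged output) of each element of a and of b.
def merge_with_ranks(a, b):
    merged, ra, rb = [], [], []
    i = j = 0
    while i < len(a) or j < len(b):
        if j == len(b) or (i < len(a) and a[i] <= b[j]):
            ra.append(len(merged)); merged.append(a[i]); i += 1
        else:
            rb.append(len(merged)); merged.append(b[j]); j += 1
    return merged, ra, rb

merged, ra, rb = merge_with_ranks([0, 2, 2], [1, 2, 3])
print(merged)  # [0, 1, 2, 2, 2, 3]
print(ra, rb)  # [0, 2, 3] [1, 4, 5]
```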
Export
BibTeX
@techreport{HagerupKutylowski92,
TITLE = {Fast integer merging on the {EREW} {PRAM}},
AUTHOR = {Hagerup, Torben and Kutylowski, Miroslaw},
LANGUAGE = {eng},
NUMBER = {MPI-I-92-115},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1992},
DATE = {1992},
ABSTRACT = {We investigate the complexity of merging sequences of small integers on the EREW PRAM. Our most surprising result is that two sorted sequences of $n$ bits each can be merged in $O(\log\log n)$ time. More generally, we describe an algorithm to merge two sorted sequences of $n$ integers drawn from the set $\{0,\ldots,m-1\}$ in $O(\log\log n+\log m)$ time using an optimal number of processors. No sublogarithmic merging algorithm for this model of computation was previously known. The algorithm not only produces the merged sequence, but also computes the rank of each input element in the merged sequence. On the other hand, we show a lower bound of $\Omega(\log\min\{n,m\})$ on the time needed to merge two sorted sequences of length $n$ each with elements in the set $\{0,\ldots,m-1\}$, implying that our merging algorithm is as fast as possible for $m=(\log n)^{\Omega(1)}$. If we impose an additional stability condition requiring the ranks of each input sequence to form an increasing sequence, then the time complexity of the problem becomes $\Theta(\log n)$, even for $m=2$. Stable merging is thus harder than nonstable merging.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Hagerup, Torben
%A Kutylowski, Miroslaw
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Fast integer merging on the EREW PRAM :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B6EF-B
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1992
%P 12 p.
%X We investigate the complexity of merging sequences of small integers
on the EREW PRAM.
Our most surprising result is that two sorted sequences
of $n$ bits each can be merged in $O(\log\log n)$ time.
More generally, we describe an algorithm to merge two
sorted sequences of $n$ integers drawn from the set
$\{0,\ldots,m-1\}$ in $O(\log\log n+\log m)$ time
using an optimal number of processors.
No sublogarithmic merging algorithm for this model
of computation was previously known.
The algorithm not only produces the merged sequence, but also
computes the rank of each input element in the merged sequence.
On the other hand, we show a lower bound of
$\Omega(\log\min\{n,m\})$ on the time needed
to merge two sorted sequences of length $n$ each
with elements in the set $\{0,\ldots,m-1\}$,
implying that our merging algorithm is as fast
as possible for $m=(\log n)^{\Omega(1)}$.
If we impose an additional stability condition
requiring the ranks of each input sequence to
form an increasing sequence, then the time
complexity of the problem becomes $\Theta(\log n)$,
even for $m=2$.
Stable merging is thus harder than nonstable merging.
%B Research Report / Max-Planck-Institut für Informatik
Fast deterministic processor allocation
T. Hagerup
Technical Report, 1992
T. Hagerup
Technical Report, 1992
Abstract
Interval allocation has been suggested as a possible PRAM
formalization of the (vaguely defined) processor allocation
problem, which is of fundamental importance
in parallel computing.
The interval allocation problem is, given $n$ nonnegative
integers $x_1,\ldots,x_n$, to allocate $n$ nonoverlapping
subarrays of sizes $x_1,\ldots,x_n$ from within a base
array of
$O(\sum_{j=1}^n x_j)$ cells.
We show that interval allocation problems of size $n$
can be solved in $O((\log\log n)^3)$ time with
optimal speedup on a deterministic CRCW PRAM.
In addition to a general solution to the
processor allocation problem,
this implies an improved deterministic
algorithm for the problem of approximate summation.
For both interval allocation and approximate
summation, the fastest previous deterministic
algorithms have running times of
$\Theta({{\log n}/{\log\log n}})$.
We also describe an application to the problem of
computing the connected components of an
undirected graph.
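Sequentially, interval allocation is trivial: exact prefix sums of $x_1,\ldots,x_n$ yield disjoint subarrays within a base array of size $\sum_j x_j$. The sketch below (illustrative Python) shows only that sequential baseline; the report's contribution is doing this fast in parallel, where exact prefix summation is itself a bottleneck:

```python
from itertools import accumulate

# Assign each request sizes[j] the half-open subarray [start, end) of a
# base array of size sum(sizes); subarrays are pairwise disjoint.
def allocate(sizes):
    offsets = [0] + list(accumulate(sizes))
    return [(offsets[j], offsets[j + 1]) for j in range(len(sizes))]

print(allocate([3, 1, 4]))  # [(0, 3), (3, 4), (4, 8)]
```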
Export
BibTeX
@techreport{Hagerup92,
TITLE = {Fast deterministic processor allocation},
AUTHOR = {Hagerup, Torben},
LANGUAGE = {eng},
NUMBER = {MPI-I-92-149},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1992},
DATE = {1992},
ABSTRACT = {Interval allocation has been suggested as a possible formalization for the PRAM of the (vaguely defined) processor allocation problem, which is of fundamental importance in parallel computing. The interval allocation problem is, given $n$ nonnegative integers $x_1,\ldots,x_n$, to allocate $n$ nonoverlapping subarrays of sizes $x_1,\ldots,x_n$ from within a base array of $O(\sum_{j=1}^n x_j)$ cells. We show that interval allocation problems of size $n$ can be solved in $O((\log\log n)^3)$ time with optimal speedup on a deterministic CRCW PRAM. In addition to a general solution to the processor allocation problem, this implies an improved deterministic algorithm for the problem of approximate summation. For both interval allocation and approximate summation, the fastest previous deterministic algorithms have running times of $\Theta({{\log n}/{\log\log n}})$. We also describe an application to the problem of computing the connected components of an undirected graph.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Hagerup, Torben
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Fast deterministic processor allocation :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B710-6
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1992
%P 11 p.
%X Interval allocation has been suggested as a possible
formalization for the PRAM
of the (vaguely defined) processor allocation
problem, which is of fundamental importance
in parallel computing.
The interval allocation problem is, given $n$ nonnegative
integers $x_1,\ldots,x_n$, to allocate $n$ nonoverlapping
subarrays of sizes $x_1,\ldots,x_n$ from within a base
array of
$O(\sum_{j=1}^n x_j)$ cells.
We show that interval allocation problems of size $n$
can be solved in $O((\log\log n)^3)$ time with
optimal speedup on a deterministic CRCW PRAM.
In addition to a general solution to the
processor allocation problem,
this implies an improved deterministic
algorithm for the problem of approximate summation.
For both interval allocation and approximate
summation, the fastest previous deterministic
algorithms have running times of
$\Theta({{\log n}/{\log\log n}})$.
We also describe an application to the problem of
computing the connected components of an
undirected graph.
%B Research Report / Max-Planck-Institut für Informatik
Waste makes haste: tight bounds for loose parallel sorting
T. Hagerup and R. Raman
Technical Report, 1992
T. Hagerup and R. Raman
Technical Report, 1992
Abstract
Conventional
parallel sorting requires the $n$ input
keys to be output in an array of size $n$, and is known to
take $\Omega({{\log n}/{\log\log n}})$ time using
any polynomial number of processors.
The lower bound does not apply to the more ``wasteful''
convention of {\em padded sorting}, which
requires the keys to be output in sorted order in
an array of size $(1 + o(1)) n$.
We give very fast randomized CRCW PRAM algorithms for
several padded-sorting problems.
Applying only pairwise comparisons to the input
and using $kn$ processors, where $2\le k\le n$,
we can padded-sort $n$ keys in
$O({{\log n}/{\log k}})$ time with
high probability (whp), which
is the best possible (expected)
run time for any comparison-based algorithm.
We also show how to padded-sort
$n$ independent random numbers
in $O(\log^*\! n)$ time whp with $O(n)$ work,
which matches a recent lower bound,
and how to padded-sort
$n$ integers in the range $ 1..n $
in constant time whp using $n$ processors.
If the integer sorting is required to be stable,
we can still solve the problem in
$O({{\log\log n}/{\log k}})$ time whp using
$kn$ processors, for any $k$ with $2\le k\le\log n$.
The integer sorting results require the
nonstandard OR PRAM; alternative implementations
on standard PRAM variants run in $O(\log\log n)$ time whp.
As an application of our padded-sorting algorithms,
we can solve approximate prefix summation problems
of size~$n$ with $O(n)$ work
in constant time whp on the OR PRAM,
and in $O(\log\log n)$ time whp on
standard PRAM variants.
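The padded-sorting output convention can be made concrete: the $n$ keys appear in sorted order in an array of size $(1+o(1))n$ whose remaining cells stay empty. The sketch below (illustrative Python; the front-packed placement is just one arbitrary valid choice, not the randomized algorithm) shows what a valid output looks like:

```python
# Place already-sorted keys into a padded array of size (1 + eps) * n,
# leaving the unused cells as None. Any placement that preserves the
# sorted order of the keys would be equally valid.
def pad(sorted_keys, eps=0.25):
    out = [None] * int((1 + eps) * len(sorted_keys))
    for i, key in enumerate(sorted_keys):
        out[i] = key
    return out

print(pad([1, 2, 3, 4]))  # [1, 2, 3, 4, None]
```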
Export
BibTeX
@techreport{HagerupRaman92,
TITLE = {Waste makes haste: tight bounds for loose parallel sorting},
AUTHOR = {Hagerup, Torben and Raman, Rajeev},
LANGUAGE = {eng},
NUMBER = {MPI-I-92-141},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1992},
DATE = {1992},
ABSTRACT = {Conventional parallel sorting requires the $n$ input keys to be output in an array of size $n$, and is known to take $\Omega({{\log n}/{\log\log n}})$ time using any polynomial number of processors. The lower bound does not apply to the more ``wasteful'' convention of {\em padded sorting}, which requires the keys to be output in sorted order in an array of size $(1 + o(1)) n$. We give very fast randomized CRCW PRAM algorithms for several padded-sorting problems. Applying only pairwise comparisons to the input and using $kn$ processors, where $2\le k\le n$, we can padded-sort $n$ keys in $O({{\log n}/{\log k}})$ time with high probability (whp), which is the best possible (expected) run time for any comparison-based algorithm. We also show how to padded-sort $n$ independent random numbers in $O(\log^*\! n)$ time whp with $O(n)$ work, which matches a recent lower bound, and how to padded-sort $n$ integers in the range $ 1..n $ in constant time whp using $n$ processors. If the integer sorting is required to be stable, we can still solve the problem in $O({{\log\log n}/{\log k}})$ time whp using $kn$ processors, for any $k$ with $2\le k\le\log n$. The integer sorting results require the nonstandard OR PRAM; alternative implementations on standard PRAM variants run in $O(\log\log n)$ time whp. As an application of our padded-sorting algorithms, we can solve approximate prefix summation problems of size~$n$ with $O(n)$ work in constant time whp on the OR PRAM, and in $O(\log\log n)$ time whp on standard PRAM variants.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Hagerup, Torben
%A Raman, Rajeev
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Waste makes haste: tight bounds for loose parallel sorting :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B70C-1
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1992
%P 185 p.
%X Conventional
parallel sorting requires the $n$ input
keys to be output in an array of size $n$, and is known to
take $\Omega({{\log n}/{\log\log n}})$ time using
any polynomial number of processors.
The lower bound does not apply to the more ``wasteful''
convention of {\em padded sorting}, which
requires the keys to be output in sorted order in
an array of size $(1 + o(1)) n$.
We give very fast randomized CRCW PRAM algorithms for
several padded-sorting problems.
Applying only pairwise comparisons to the input
and using $kn$ processors, where $2\le k\le n$,
we can padded-sort $n$ keys in
$O({{\log n}/{\log k}})$ time with
high probability (whp), which
is the best possible (expected)
run time for any comparison-based algorithm.
We also show how to padded-sort
$n$ independent random numbers
in $O(\log^*\! n)$ time whp with $O(n)$ work,
which matches a recent lower bound,
and how to padded-sort
$n$ integers in the range $ 1..n $
in constant time whp using $n$ processors.
If the integer sorting is required to be stable,
we can still solve the problem in
$O({{\log\log n}/{\log k}})$ time whp using
$kn$ processors, for any $k$ with $2\le k\le\log n$.
The integer sorting results require the
nonstandard OR PRAM; alternative implementations
on standard PRAM variants run in $O(\log\log n)$ time whp.
As an application of our padded-sorting algorithms,
we can solve approximate prefix summation problems
of size~$n$ with $O(n)$ work
in constant time whp on the OR PRAM,
and in $O(\log\log n)$ time whp on
standard PRAM variants.
%B Research Report / Max-Planck-Institut für Informatik
The largest hyper-rectangle in a three dimensional orthogonal polyhedron
K. Krithivasan, R. Vanisree and A. Datta
Technical Report, 1992
K. Krithivasan, R. Vanisree and A. Datta
Technical Report, 1992
Abstract
Given a three-dimensional orthogonal
polyhedron P, we present a simple and
efficient algorithm for finding the
three-dimensional orthogonal hyper-rectangle R
of maximum volume, such that R is completely
contained in P. Our algorithm finds the
hyper-rectangle of maximum volume by a
space-sweep technique, enumerating all
candidate rectangles. The presented algorithm
runs in $O((n^2+K)\log n)$ time using $O(n)$
space, where $n$ is the number of vertices of
the given polyhedron P and $K$ is the number
of reported three-dimensional orthogonal
hyper-rectangles for a problem instance,
which is $O(n^3)$ in the worst case.
Export
BibTeX
@techreport{KrithivasanVanisreeDatta92,
TITLE = {The largest hyper-rectangle in a three dimensional orthogonal polyhedron},
AUTHOR = {Krithivasan, Kamala and Vanisree, R. and Datta, Amitava},
LANGUAGE = {eng},
NUMBER = {MPI-I-92-123},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1992},
DATE = {1992},
ABSTRACT = {Given a three-dimensional orthogonal polyhedron P, we present a simple and efficient algorithm for finding the three-dimensional orthogonal hyper-rectangle R of maximum volume, such that R is completely contained in P. Our algorithm finds the hyper-rectangle of maximum volume by a space-sweep technique, enumerating all candidate rectangles. The presented algorithm runs in $O((n^2+K)\log n)$ time using $O(n)$ space, where $n$ is the number of vertices of the given polyhedron P and $K$ is the number of reported three-dimensional orthogonal hyper-rectangles for a problem instance, which is $O(n^3)$ in the worst case.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Krithivasan, Kamala
%A Vanisree, R.
%A Datta, Amitava
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T The largest hyper-rectangle in a three dimensional orthogonal polyhedron :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B6F9-4
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1992
%P 7 p.
%X Given a three-dimensional orthogonal
polyhedron P, we present a simple and
efficient algorithm for finding the
three-dimensional orthogonal hyper-rectangle R
of maximum volume, such that R is completely
contained in P. Our algorithm finds the
hyper-rectangle of maximum volume by a
space-sweep technique, enumerating all
candidate rectangles. The presented algorithm
runs in $O((n^2+K)\log n)$ time using $O(n)$
space, where $n$ is the number of vertices of
the given polyhedron P and $K$ is the number
of reported three-dimensional orthogonal
hyper-rectangles for a problem instance,
which is $O(n^3)$ in the worst case.
%B Research Report / Max-Planck-Institut für Informatik
Sequential and parallel algorithms for the k closest pairs problem
H.-P. Lenhof and M. Smid
Technical Report, 1992a
H.-P. Lenhof and M. Smid
Technical Report, 1992a
Abstract
Let $S$ be a set of $n$ points in $D$-dimensional space, where
$D$ is a constant,
and let $k$ be an integer between $1$ and $n \choose 2$.
We give a new and simpler proof of Salowe's theorem:
a sequential algorithm that computes the
$k$ closest pairs
in the set $S$ in $O(n \log n + k)$ time, using $O(n+k)$
space. The algorithm fits
in the algebraic decision tree model and is
therefore optimal. Salowe's algorithm seems difficult to
parallelize. A parallel version of our
algorithm is given for the CRCW-PRAM model. This version
runs in $O((\log n)^{2} \log\log n )$
expected parallel time and has an $O(n \log n \log\log n +k)$
time-processor product.
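A brute-force reference makes precise what is being computed: enumerate all $n \choose 2$ pairs and keep the $k$ with smallest distance. The sketch below (illustrative Python, squared Euclidean distance; an $O(n^2 \log n)$ baseline, not the report's $O(n \log n + k)$ algorithm) pins down the problem:

```python
from itertools import combinations

# Return the k pairs of points with smallest squared Euclidean distance,
# by exhaustively sorting all n-choose-2 pairs.
def k_closest_pairs(points, k):
    def d2(p, q):
        return sum((px - qx) ** 2 for px, qx in zip(p, q))
    return sorted(combinations(points, 2), key=lambda pq: d2(*pq))[:k]

pts = [(0, 0), (1, 0), (5, 0), (6, 0)]
print(k_closest_pairs(pts, 2))  # [((0, 0), (1, 0)), ((5, 0), (6, 0))]
```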
Export
BibTeX
@techreport{LenhofSmid92b,
TITLE = {Sequential and parallel algorithms for the k closest pairs problem},
AUTHOR = {Lenhof, Hans-Peter and Smid, Michiel},
LANGUAGE = {eng},
NUMBER = {MPI-I-92-134},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1992},
DATE = {1992},
ABSTRACT = {Let $S$ be a set of $n$ points in $D$-dimensional space, where $D$ is a constant, and let $k$ be an integer between $1$ and $n \choose 2$. A new and simpler proof is given of Salowe's theorem, i.e., a sequential algorithm is given that computes the $k$ closest pairs in the set $S$ in $O(n \log n + k)$ time, using $O(n+k)$ space. The algorithm fits in the algebraic decision tree model and is, therefore, optimal. Salowe's algorithm seems difficult to parallelize. A parallel version of our algorithm is given for the CRCW-PRAM model. This version runs in $O((\log n)^{2} \log\log n )$ expected parallel time and has an $O(n \log n \log\log n +k)$ time-processor product.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Lenhof, Hans-Peter
%A Smid, Michiel
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Sequential and parallel algorithms for the k closest pairs problem :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B708-9
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1992
%P 18 p.
%X Let $S$ be a set of $n$ points in $D$-dimensional space, where
$D$ is a constant,
and let $k$ be an integer between $1$ and $n \choose 2$.
A new and simpler proof is given of Salowe's theorem, i.e.,
a sequential algorithm is given that computes the
$k$ closest pairs
in the set $S$ in $O(n \log n + k)$ time, using $O(n+k)$
space. The algorithm fits
in the algebraic decision tree model and is,
therefore, optimal. Salowe's algorithm seems difficult to
parallelize. A parallel version of our
algorithm is given for the CRCW-PRAM model. This version
runs in $O((\log n)^{2} \log\log n )$
expected parallel time and has an $O(n \log n \log\log n +k)$
time-processor product.
%B Research Report / Max-Planck-Institut für Informatik
Maintaining the visibility map of spheres while moving the viewpoint on a circle at infinity
H.-P. Lenhof and M. Smid
Technical Report, 1992b
H.-P. Lenhof and M. Smid
Technical Report, 1992b
Abstract
We investigate 3D visibility problems for scenes that consist of
$n$ non-intersecting spheres. The
viewing point $v$ moves on a flightpath that
is part of a ``circle at infinity'' given by
a plane $P$ and a range of angles $\{\alpha(t)|t\in [0:1]\}\subset
[0:2\pi]$. At
``time'' $t$, the lines of sight are parallel to the ray $r(t)$ in the
plane $P$, which starts in the origin of $P$ and represents the angle
$\alpha(t)$ (orthographic views of the scene).
We describe algorithms that compute the visibility graph at the
start of the flight, all time parameters $t$ at which
the topology of the scene changes, and the corresponding topology
changes.
We present an algorithm with running time
$O((n+k+p)\log n)$, where $n$ is the number of spheres in the scene;
$p$ is the number of transparent topology changes (the number of
different scene topologies visible along the flightpath, assuming that
all spheres are transparent); and $k$ denotes the number of
vertices (conflicts)
which are in the (transparent) visibility graph at the start
and do not disappear during the flight.
Export
BibTeX
@techreport{LenhofSmid92a,
TITLE = {Maintaining the visibility map of spheres while moving the viewpoint on a circle at infinity},
AUTHOR = {Lenhof, Hans-Peter and Smid, Michiel},
LANGUAGE = {eng},
NUMBER = {MPI-I-92-102},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1992},
DATE = {1992},
ABSTRACT = {We investigate 3D visibility problems for scenes that consist of $n$ non-intersecting spheres. The viewing point $v$ moves on a flightpath that is part of a ``circle at infinity'' given by a plane $P$ and a range of angles $\{\alpha(t)|t\in [0:1]\}\subset [0:2\pi]$. At ``time'' $t$, the lines of sight are parallel to the ray $r(t)$ in the plane $P$, which starts in the origin of $P$ and represents the angle $\alpha(t)$ (orthographic views of the scene). We describe algorithms that compute the visibility graph at the start of the flight, all time parameters $t$ at which the topology of the scene changes, and the corresponding topology changes. We present an algorithm with running time $O((n+k+p)\log n)$, where $n$ is the number of spheres in the scene; $p$ is the number of transparent topology changes (the number of different scene topologies visible along the flightpath, assuming that all spheres are transparent); and $k$ denotes the number of vertices (conflicts) which are in the (transparent) visibility graph at the start and do not disappear during the flight.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Lenhof, Hans-Peter
%A Smid, Michiel
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Maintaining the visibility map of spheres while moving the viewpoint on a circle at infinity :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B059-2
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1992
%P 16 p.
%X We investigate 3D visibility problems for scenes that consist of
$n$ non-intersecting spheres. The
viewing point $v$ moves on a flightpath that
is part of a ``circle at infinity'' given by
a plane $P$ and a range of angles $\{\alpha(t)|t\in [0:1]\}\subset
[0:2\pi]$. At
``time'' $t$, the lines of sight are parallel to the ray $r(t)$ in the
plane $P$, which starts in the origin of $P$ and represents the angle
$\alpha(t)$ (orthographic views of the scene).
We describe algorithms that compute the visibility graph at the
start of the flight, all time parameters $t$ at which
the topology of the scene changes, and the corresponding topology
changes.
We present an algorithm with running time
$O((n+k+p)\log n)$, where $n$ is the number of spheres in the scene;
$p$ is the number of transparent topology changes (the number of
different scene topologies visible along the flightpath, assuming that
all spheres are transparent); and $k$ denotes the number of
vertices (conflicts)
which are in the (transparent) visibility graph at the start
and do not disappear during the flight.
%B Research Report / Max-Planck-Institut für Informatik
Furthest Site Abstract Voronoi Diagrams
K. Mehlhorn, S. Meiser and R. Rasch
Technical Report, 1992
K. Mehlhorn, S. Meiser and R. Rasch
Technical Report, 1992
Abstract
Abstract Voronoi diagrams were introduced by R. Klein as a unifying approach to Voronoi diagrams. In this paper we study furthest site abstract Voronoi diagrams and give a unified mathematical and algorithmic treatment for them. In particular, we show that furthest site abstract Voronoi diagrams are trees, have linear size, and that, given a set of $n$ sites, the furthest site abstract Voronoi diagram can be computed by a randomized algorithm in expected time $O(n\log n)$.
Export
BibTeX
@techreport{MehlhornMeiserRasch92,
TITLE = {Furthest Site Abstract {V}oronoi Diagrams},
AUTHOR = {Mehlhorn, Kurt and Meiser, Stefan and Rasch, Roland},
LANGUAGE = {eng},
NUMBER = {MPI-I-92-135},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1992},
DATE = {1992},
ABSTRACT = {Abstract Voronoi diagrams were introduced by R. Klein as a unifying approach to Voronoi diagrams. In this paper we study furthest site abstract Voronoi diagrams and give a unified mathematical and algorithmic treatment for them. In particular, we show that furthest site abstract Voronoi diagrams are trees, have linear size, and that, given a set of $n$ sites, the furthest site abstract Voronoi diagram can be computed by a randomized algorithm in expected time $O(n\log n)$.},
TYPE = {Research Report},
}
Endnote
%0 Report
%A Mehlhorn, Kurt
%A Meiser, Stefan
%A Rasch, Roland
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
External Organizations
%T Furthest Site Abstract Voronoi Diagrams :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B70A-5
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1992
%P 25 p.
%X Abstract Voronoi diagrams were introduced by R. Klein as a unifying approach to Voronoi diagrams. In this paper we study furthest site abstract Voronoi diagrams and give a unified mathematical and algorithmic treatment for them. In particular, we show that furthest site abstract Voronoi diagrams are trees, have linear size, and that, given a set of $n$ sites, the furthest site abstract Voronoi diagram can be computed by a randomized algorithm in expected time $O(n\log n)$.
%B Research Report
Lower bound for set intersection queries
K. Mehlhorn, C. Uhrig and R. Raman
Technical Report, 1992
K. Mehlhorn, C. Uhrig and R. Raman
Technical Report, 1992
Abstract
We consider the following {\em set intersection reporting\/} problem.
We have a collection of initially empty sets and would like to
process an intermixed sequence of $n$ updates (insertions into and
deletions from individual sets) and $q$ queries (reporting the
intersection of two sets). We cast this problem in the
{\em arithmetic\/} model of computation of Fredman
and Yao and show that any algorithm that fits
in this model must take $\Omega(q + n \sqrt{q})$ to
process a sequence of $n$ updates and $q$ queries,
ignoring factors that are polynomial in $\log n$.
By adapting an algorithm due to Yellin
we can show that this bound
is tight in this model of computation, again
to within a polynomial in $\log n$ factor.
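The operation sequence being lower-bounded can be pinned down with a naive implementation. This is a sketch only (class and method names are illustrative); it makes no attempt to match Yellin's near-optimal upper bound:

```python
# Naive set intersection reporting: maintain a family of sets and
# answer each query by scanning the smaller of the two sets. This
# illustrates the update/query mix counted by the Omega(q + n*sqrt(q))
# lower bound in the arithmetic model.

class SetFamily:
    def __init__(self, num_sets):
        self.sets = [set() for _ in range(num_sets)]

    def insert(self, i, x):          # update: insert x into set i
        self.sets[i].add(x)

    def delete(self, i, x):          # update: delete x from set i
        self.sets[i].discard(x)

    def intersect(self, i, j):       # query: report S_i ∩ S_j
        a, b = self.sets[i], self.sets[j]
        if len(b) < len(a):
            a, b = b, a
        return sorted(x for x in a if x in b)

f = SetFamily(3)
for x in (1, 2, 3):
    f.insert(0, x)
for x in (2, 3, 4):
    f.insert(1, x)
f.delete(0, 3)
print(f.intersect(0, 1))   # [2]
```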
Export
BibTeX
@techreport{MehlhornUhrigRaman92,
TITLE = {Lower bound for set intersection queries},
AUTHOR = {Mehlhorn, Kurt and Uhrig, Christian and Raman, Rajeev},
LANGUAGE = {eng},
NUMBER = {MPI-I-92-127},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1992},
DATE = {1992},
ABSTRACT = {We consider the following {\em set intersection reporting\/} problem. We have a collection of initially empty sets and would like to process an intermixed sequence of $n$ updates (insertions into and deletions from individual sets) and $q$ queries (reporting the intersection of two sets). We cast this problem in the {\em arithmetic\/} model of computation of Fredman and Yao and show that any algorithm that fits in this model must take $\Omega(q + n \sqrt{q})$ to process a sequence of $n$ updates and $q$ queries, ignoring factors that are polynomial in $\log n$. By adapting an algorithm due to Yellin we can show that this bound is tight in this model of computation, again to within a polynomial in $\log n$ factor.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Mehlhorn, Kurt
%A Uhrig, Christian
%A Raman, Rajeev
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Lower bound for set intersection queries :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B706-D
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1992
%P 14 p.
%X We consider the following {\em set intersection reporting\/} problem.
We have a collection of initially empty sets and would like to
process an intermixed sequence of $n$ updates (insertions into and
deletions from individual sets) and $q$ queries (reporting the
intersection of two sets). We cast this problem in the
{\em arithmetic\/} model of computation of Fredman
and Yao and show that any algorithm that fits
in this model must take $\Omega(q + n \sqrt{q})$ to
process a sequence of $n$ updates and $q$ queries,
ignoring factors that are polynomial in $\log n$.
By adapting an algorithm due to Yellin
we can show that this bound
is tight in this model of computation, again
to within a polynomial in $\log n$ factor.
%B Research Report / Max-Planck-Institut für Informatik
Computing intersections and arrangements for red-blue curve segments in parallel
C. Rüb
Technical Report, 1992
C. Rüb
Technical Report, 1992
Abstract
Let $A$ and $B$ be two sets of ``well-behaved'' (i.e., continuous and
x-monotone) curve segments in the plane, where no two segments in $A$
(similarly, $B$) intersect. In this paper we show how to report all
points of intersection between segments in $A$ and segments in $B$, and
how to construct the arrangement defined by the segments in $A\cup B$
in parallel using the concurrent-read-exclusive-write (CREW-) PRAM
model. The algorithms perform a work of $O(n\log n+k)$ using
$p\leq n+k/\log n$ ($p\leq n/\log n+k/\log ^2 n$, resp.,) processors
if we assume that the handling of segments is ``cheap'', e.g., if
two segments intersect at most a constant number of times,
where $n$ is the total number of segments and $k$ is the number of
points of intersection. If we only assume that
a single processor can compute an arbitrary point of intersection
between two segments in constant time, the performed work increases
to $O(n\log n+m(k+p))$, where $m$ is the maximal number of points of
intersection between two segments.
We also show how to count the number of points of intersection between
segments in $A$ and segments in $B$ in time $O(\log n)$ using $n$
processors on a CREW-PRAM if two curve segments intersect at most twice.
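For contrast with the parallel counting algorithm, a sequential brute-force count for the special case of straight-line segments (which intersect at most once) can be sketched as follows; the orientation-test predicate is standard, the function names are illustrative:

```python
# Brute-force count of intersection points between a "red" and a
# "blue" family of straight-line segments.

def orient(p, q, r):
    """Sign of the cross product (q - p) x (r - p)."""
    v = (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
    return (v > 0) - (v < 0)

def segments_cross(s, t):
    """True iff closed segments s and t share at least one point."""
    p1, p2 = s
    p3, p4 = t
    d1, d2 = orient(p3, p4, p1), orient(p3, p4, p2)
    d3, d4 = orient(p1, p2, p3), orient(p1, p2, p4)
    if d1 * d2 < 0 and d3 * d4 < 0:      # proper crossing
        return True
    def on(a, b, c):  # c collinear with a-b and inside its bounding box
        return (orient(a, b, c) == 0 and
                min(a[0], b[0]) <= c[0] <= max(a[0], b[0]) and
                min(a[1], b[1]) <= c[1] <= max(a[1], b[1]))
    return on(p1, p2, p3) or on(p1, p2, p4) or on(p3, p4, p1) or on(p3, p4, p2)

def count_red_blue(red, blue):
    return sum(segments_cross(r, b) for r in red for b in blue)

red  = [((0, 0), (4, 0)), ((0, 2), (4, 2))]
blue = [((1, -1), (1, 3)), ((3, -1), (3, 1))]
print(count_red_blue(red, blue))   # 3
```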
Export
BibTeX
@techreport{Rueb92,
TITLE = {Computing intersections and arrangements for red-blue curve segments in parallel},
AUTHOR = {R{\"u}b, Christine},
LANGUAGE = {eng},
NUMBER = {MPI-I-92-108},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1992},
DATE = {1992},
ABSTRACT = {Let $A$ and $B$ be two sets of ``well-behaved'' (i.e., continuous and x-monotone) curve segments in the plane, where no two segments in $A$ (similarly, $B$) intersect. In this paper we show how to report all points of intersection between segments in $A$ and segments in $B$, and how to construct the arrangement defined by the segments in $A\cup B$ in parallel using the concurrent-read-exclusive-write (CREW-) PRAM model. The algorithms perform a work of $O(n\log n+k)$ using $p\leq n+k/\log n$ ($p\leq n/\log n+k/\log ^2 n$, resp.,) processors if we assume that the handling of segments is ``cheap'', e.g., if two segments intersect at most a constant number of times, where $n$ is the total number of segments and $k$ is the number of points of intersection. If we only assume that a single processor can compute an arbitrary point of intersection between two segments in constant time, the performed work increases to $O(n\log n+m(k+p))$, where $m$ is the maximal number of points of intersection between two segments. We also show how to count the number of points of intersection between segments in $A$ and segments in $B$ in time $O(\log n)$ using $n$ processors on a CREW-PRAM if two curve segments intersect at most twice.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Rüb, Christine
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Computing intersections and arrangements for red-blue curve segments in parallel :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B6E9-8
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1992
%P 30 p.
%X Let $A$ and $B$ be two sets of ``well-behaved'' (i.e., continuous and
x-monotone) curve segments in the plane, where no two segments in $A$
(similarly, $B$) intersect. In this paper we show how to report all
points of intersection between segments in $A$ and segments in $B$, and
how to construct the arrangement defined by the segments in $A\cup B$
in parallel using the concurrent-read-exclusive-write (CREW-) PRAM
model. The algorithms perform a work of $O(n\log n+k)$ using
$p\leq n+k/\log n$ ($p\leq n/\log n+k/\log ^2 n$, resp.,) processors
if we assume that the handling of segments is ``cheap'', e.g., if
two segments intersect at most a constant number of times,
where $n$ is the total number of segments and $k$ is the number of
points of intersection. If we only assume that
a single processor can compute an arbitrary point of intersection
between two segments in constant time, the performed work increases
to $O(n\log n+m(k+p))$, where $m$ is the maximal number of points of
intersection between two segments.
We also show how to count the number of points of intersection between
segments in $A$ and segments in $B$ in time $O(\log n)$ using $n$
processors on a CREW-PRAM if two curve segments intersect at most twice.
%B Research Report / Max-Planck-Institut für Informatik
Semi-dynamic maintenance of the width of a planar point set
C. Schwarz
Technical Report, 1992
C. Schwarz
Technical Report, 1992
Abstract
We give an algorithm that maintains an approximation of the width of a
set of $n$ points in the plane in $O(\alpha\log n)$ amortized
time, if only insertions or only deletions are performed.
The data structure allows for reporting the approximation in
$O(\alpha\log n\log\log n)$ time.
$\alpha$ is a parameter that expresses the quality of the approximation.
Our data structure is based on a method of Janardan that maintains an
approximation of the width under insertions and deletions using
$O(\alpha\log^2 n)$ time for the width query and the updates.
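The flavour of the quality parameter can be seen in a static direction-sampling sketch: project the points onto $k$ evenly spaced directions and take the smallest extent. This is only an illustration of an approximation whose quality grows with the number of sampled directions; it is not Janardan's data structure and supports no updates:

```python
# Approximate the width of a planar point set by sampling k directions.
import math

def approx_width(points, k=32):
    best = float("inf")
    for i in range(k):
        theta = math.pi * i / k
        ux, uy = math.cos(theta), math.sin(theta)
        proj = [ux * x + uy * y for x, y in points]
        best = min(best, max(proj) - min(proj))   # extent in this direction
    return best

# Unit square: exact width 1 (two horizontal or vertical supporting lines).
pts = [(0, 0), (1, 0), (0, 1), (1, 1)]
print(round(approx_width(pts, 64), 3))   # 1.0
```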
Export
BibTeX
@techreport{Schwarz92,
TITLE = {Semi-dynamic maintenance of the width of a planar point set},
AUTHOR = {Schwarz, Christian},
LANGUAGE = {eng},
NUMBER = {MPI-I-92-153},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1992},
DATE = {1992},
ABSTRACT = {We give an algorithm that maintains an approximation of the width of a set of $n$ points in the plane in $O(\alpha\log n)$ amortized time, if only insertions or only deletions are performed. The data structure allows for reporting the approximation in $O(\alpha\log n\log\log n)$ time. $\alpha$ is a parameter that expresses the quality of the approximation. Our data structure is based on a method of Janardan that maintains an approximation of the width under insertions and deletions using $O(\alpha\log^2 n)$ time for the width query and the updates.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Schwarz, Christian
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Semi-dynamic maintenance of the width of a planar point set :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B714-D
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1992
%P 14 p.
%X We give an algorithm that maintains an approximation of the width of a
set of $n$ points in the plane in $O(\alpha\log n)$ amortized
time, if only insertions or only deletions are performed.
The data structure allows for reporting the approximation in
$O(\alpha\log n\log\log n)$ time.
$\alpha$ is a parameter that expresses the quality of the approximation.
Our data structure is based on a method of Janardan that maintains an
approximation of the width under insertions and deletions using
$O(\alpha\log^2 n)$ time for the width query and the updates.
%B Research Report / Max-Planck-Institut für Informatik
Further results on generalized intersection searching problems: counting, reporting, and dynamization
M. Smid and P. Gupta
Technical Report, 1992
M. Smid and P. Gupta
Technical Report, 1992
Abstract
In a generalized intersection searching problem, a
set, $S$, of colored geometric objects is to be
preprocessed so that given some query object, $q$,
the distinct colors of the objects intersected by
$q$ can be reported
efficiently or the number of such colors can be
counted efficiently. In the dynamic setting, colored objects
can be inserted into or deleted from $S$. These
problems generalize the well-studied standard
intersection searching problems and are rich in
applications. Unfortunately, the techniques known
for the standard problems do not yield efficient
solutions for the generalized problems. Moreover,
previous work on generalized
problems applies only to the static reporting
problems. In this
paper, a uniform framework is presented
to solve efficiently the counting/reporting/dynamic
versions of a variety of generalized
intersection searching problems, including: 1-, 2-,
and 3-dimensional range
searching, quadrant searching, interval intersection
searching, 1- and 2-dimensional
point enclosure searching, and orthogonal segment
intersection searching.
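The "generalized" (colored) setting is easiest to see in one dimension. The sketch below states the problem in code with a per-color structure (names are illustrative); the report's point is to answer such queries in time sensitive to the number of distinct colors rather than, as here, to the number of color classes:

```python
# Naive colored 1D range reporting: report the distinct colors of the
# points that fall in a query interval.
import bisect

class ColoredPoints1D:
    def __init__(self):
        self.by_color = {}          # color -> sorted list of coordinates

    def insert(self, color, x):
        self.by_color.setdefault(color, [])
        bisect.insort(self.by_color[color], x)

    def colors_in(self, lo, hi):
        """Distinct colors having at least one point in [lo, hi]."""
        out = []
        for c, xs in self.by_color.items():
            i = bisect.bisect_left(xs, lo)
            if i < len(xs) and xs[i] <= hi:
                out.append(c)
        return sorted(out)

s = ColoredPoints1D()
for c, x in [("red", 1), ("red", 9), ("blue", 5), ("green", 12)]:
    s.insert(c, x)
print(s.colors_in(4, 10))   # ['blue', 'red']
```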
Export
BibTeX
@techreport{SmidGupta92,
TITLE = {Further results on generalized intersection searching problems: counting, reporting, and dynamization},
AUTHOR = {Smid, Michiel and Gupta, Prosenjit},
LANGUAGE = {eng},
NUMBER = {MPI-I-92-154},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1992},
DATE = {1992},
ABSTRACT = {In a generalized intersection searching problem, a set, $S$, of colored geometric objects is to be preprocessed so that given some query object, $q$, the distinct colors of the objects intersected by $q$ can be reported efficiently or the number of such colors can be counted efficiently. In the dynamic setting, colored objects can be inserted into or deleted from $S$. These problems generalize the well-studied standard intersection searching problems and are rich in applications. Unfortunately, the techniques known for the standard problems do not yield efficient solutions for the generalized problems. Moreover, previous work on generalized problems applies only to the static reporting problems. In this paper, a uniform framework is presented to solve efficiently the counting/reporting/dynamic versions of a variety of generalized intersection searching problems, including: 1-, 2-, and 3-dimensional range searching, quadrant searching, interval intersection searching, 1- and 2-dimensional point enclosure searching, and orthogonal segment intersection searching.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Smid, Michiel
%A Gupta, Prosenjit
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Further results on generalized intersection searching problems: counting, reporting, and dynamization :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B721-F
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1992
%P 41 p.
%X In a generalized intersection searching problem, a
set, $S$, of colored geometric objects is to be
preprocessed so that given some query object, $q$,
the distinct colors of the objects intersected by
$q$ can be reported
efficiently or the number of such colors can be
counted efficiently. In the dynamic setting, colored objects
can be inserted into or deleted from $S$. These
problems generalize the well-studied standard
intersection searching problems and are rich in
applications. Unfortunately, the techniques known
for the standard problems do not yield efficient
solutions for the generalized problems. Moreover,
previous work on generalized
problems applies only to the static reporting
problems. In this
paper, a uniform framework is presented
to solve efficiently the counting/reporting/dynamic
versions of a variety of generalized
intersection searching problems, including: 1-, 2-,
and 3-dimensional range
searching, quadrant searching, interval intersection
searching, 1- and 2-dimensional
point enclosure searching, and orthogonal segment
intersection searching.
%B Research Report / Max-Planck-Institut für Informatik
Enumerating the k closest pairs mechanically
M. Smid and H.-P. Lenhof
Technical Report, 1992
M. Smid and H.-P. Lenhof
Technical Report, 1992
Abstract
Let $S$ be a set of $n$ points in $D$-dimensional space, where
$D$ is a constant,
and let $k$ be an integer between $1$ and $n \choose 2$.
An algorithm is given that computes the $k$ closest pairs
in the set $S$ in $O(n \log n + k)$ time, using $O(n+k)$
space. The algorithm fits
in the algebraic decision tree model and is,
therefore, optimal.
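A brute-force reference for the problem statement: form all $n \choose 2$ pairs and sort by distance, $O(n^2 \log n)$ work versus the report's optimal $O(n \log n + k)$ algorithm. A sketch for small inputs only:

```python
# Enumerate the k closest pairs by sorting all pairs by distance.
from itertools import combinations
import math

def k_closest_pairs(points, k):
    pairs = sorted(
        combinations(points, 2),
        key=lambda pq: math.dist(pq[0], pq[1]),
    )
    return pairs[:k]

pts = [(0, 0), (1, 0), (5, 0), (5, 1)]
for p, q in k_closest_pairs(pts, 2):
    print(p, q)   # the two pairs at distance 1
```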
Export
BibTeX
@techreport{SmidLenhof92,
TITLE = {Enumerating the k closest pairs mechanically},
AUTHOR = {Smid, Michiel and Lenhof, Hans-Peter},
LANGUAGE = {eng},
NUMBER = {MPI-I-92-118},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1992},
DATE = {1992},
ABSTRACT = {Let $S$ be a set of $n$ points in $D$-dimensional space, where $D$ is a constant, and let $k$ be an integer between $1$ and $n \choose 2$. An algorithm is given that computes the $k$ closest pairs in the set $S$ in $O(n \log n + k)$ time, using $O(n+k)$ space. The algorithm fits in the algebraic decision tree model and is, therefore, optimal.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Smid, Michiel
%A Lenhof, Hans-Peter
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Enumerating the k closest pairs mechanically :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B6F1-3
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1992
%P 12 p.
%X Let $S$ be a set of $n$ points in $D$-dimensional space, where
$D$ is a constant,
and let $k$ be an integer between $1$ and $n \choose 2$.
An algorithm is given that computes the $k$ closest pairs
in the set $S$ in $O(n \log n + k)$ time, using $O(n+k)$
space. The algorithm fits
in the algebraic decision tree model and is,
therefore, optimal.
%B Research Report / Max-Planck-Institut für Informatik
Finding k points with a smallest enclosing square
M. Smid
Technical Report, 1992
M. Smid
Technical Report, 1992
Abstract
An algorithm is presented that, given a set of $n$ points in
the plane and an integer $k$, $2 \leq k \leq n$,
finds $k$ points with a smallest enclosing
axes-parallel square. The algorithm has a running time of
$O(n \log n + kn \log^{2} k)$ and uses $O(n)$ space.
The previously best known algorithm for this problem takes
$O(k^{2} n \log n)$ time and uses $O(kn)$ space.
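A small correctness reference for the problem: the minimal enclosing square of the chosen $k$ points has its left edge at one point's $x$-coordinate and its bottom edge at one point's $y$-coordinate, so it suffices to try all such corners. This brute-force sketch is far from the report's $O(n \log n + kn \log^{2} k)$ bound and is meant only for checking small instances:

```python
# Smallest axes-parallel square enclosing some k of the given points.

def smallest_k_square(points, k):
    xs = sorted({x for x, _ in points})
    ys = sorted({y for _, y in points})
    best = float("inf")
    for x0 in xs:
        for y0 in ys:
            # side needed to cover each candidate point from corner (x0, y0)
            sides = sorted(
                max(x - x0, y - y0)
                for x, y in points
                if x >= x0 and y >= y0
            )
            if len(sides) >= k:          # k-th smallest side suffices here
                best = min(best, sides[k - 1])
    return best

pts = [(0, 0), (1, 1), (2, 0), (10, 10)]
print(smallest_k_square(pts, 3))   # 2
```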
Export
BibTeX
@techreport{Smid92,
TITLE = {Finding k points with a smallest enclosing square},
AUTHOR = {Smid, Michiel},
LANGUAGE = {eng},
NUMBER = {MPI-I-92-152},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1992},
DATE = {1992},
ABSTRACT = {An algorithm is presented that, given a set of $n$ points in the plane and an integer $k$, $2 \leq k \leq n$, finds $k$ points with a smallest enclosing axes-parallel square. The algorithm has a running time of $O(n \log n + kn \log^{2} k)$ and uses $O(n)$ space. The previously best known algorithm for this problem takes $O(k^{2} n \log n)$ time and uses $O(kn)$ space.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Smid, Michiel
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Finding k points with a smallest enclosing square :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B712-2
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1992
%P 8 p.
%X An algorithm is presented that, given a set of $n$ points in
the plane and an integer $k$, $2 \leq k \leq n$,
finds $k$ points with a smallest enclosing
axes-parallel square. The algorithm has a running time of
$O(n \log n + kn \log^{2} k)$ and uses $O(n)$ space.
The previously best known algorithm for this problem takes
$O(k^{2} n \log n)$ time and uses $O(kn)$ space.
%B Research Report / Max-Planck-Institut für Informatik
Minimum base of weighted k polymatroid and Steiner tree problem
A. Zelikovsky
Technical Report, 1992a
A. Zelikovsky
Technical Report, 1992a
Abstract
A generalized greedy approximation algorithm for
finding the lightest base of a weighted $k$--polymatroid and its
applications to the Steiner tree problem is presented.
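For the special case $k = 1$, a weighted $k$-polymatroid specializes to a matroid, where the plain greedy algorithm already finds a lightest base. A sketch for the graphic matroid (bases are spanning trees), i.e. Kruskal's algorithm with a union-find independence oracle:

```python
# Greedy lightest base of a graphic matroid (= minimum spanning tree).

def lightest_base(num_vertices, weighted_edges):
    parent = list(range(num_vertices))

    def find(v):                      # union-find with path halving
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    base = []
    for w, u, v in sorted(weighted_edges):
        ru, rv = find(u), find(v)
        if ru != rv:                  # independence test: no cycle created
            parent[ru] = rv
            base.append((w, u, v))
    return base

edges = [(4, 0, 1), (1, 0, 2), (2, 1, 2), (3, 1, 3), (5, 2, 3)]
print(sum(w for w, _, _ in lightest_base(4, edges)))   # 6
```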
Export
BibTeX
@techreport{Zelikovsky92a,
TITLE = {Minimum base of weighted k polymatroid and Steiner tree problem},
AUTHOR = {Zelikovsky, Alexander},
LANGUAGE = {eng},
NUMBER = {MPI-I-92-121},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1992},
DATE = {1992},
ABSTRACT = {A generalized greedy approximation algorithm for finding the lightest base of a weighted $k$--polymatroid and its applications to the Steiner tree problem is presented.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Zelikovsky, Alexander
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Minimum base of weighted k polymatroid and Steiner tree problem :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B6F5-C
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1992
%P 9 p.
%X A generalized greedy approximation algorithm for
finding the lightest base of a weighted $k$--polymatroid and its
applications to the Steiner tree problem is presented.
%B Research Report / Max-Planck-Institut für Informatik
A faster 11/6-approximation algorithm for the Steiner tree problem in graphs
A. Zelikovsky
Technical Report, 1992b
A. Zelikovsky
Technical Report, 1992b
Abstract
The Steiner problem requires a shortest tree spanning a given
vertex subset $S$ within graph $G=(V,E)$. There are
two 11/6-approximation
algorithms with running time $O(VE+VS^2+S^4)$ and
$O(VE+VS^2+S^{3+{1\over 2}})$, respectively. Now we decrease
the implementation time to $O(ES+VS^2+V\log V)$.
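The 11/6-approximation algorithms improve on the classical distance-network heuristic: build the metric closure on the terminal set $S$ via shortest paths and take its minimum spanning tree, whose weight is at most twice optimal. A sketch of that 2-approximation baseline (function names are illustrative):

```python
# Distance-network 2-approximation for the Steiner tree problem.
import heapq
from itertools import combinations

def dijkstra(adj, src):
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                    # stale queue entry
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def steiner_mst_heuristic(adj, terminals):
    d = {t: dijkstra(adj, t) for t in terminals}
    parent = {t: t for t in terminals}  # Kruskal on the metric closure

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    total = 0
    for w, u, v in sorted((d[u][v], u, v) for u, v in combinations(terminals, 2)):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            total += w
    return total

# Star: terminals 1, 2, 3 each at distance 1 from Steiner vertex 0;
# optimum is 3, the heuristic returns 4.
adj = {0: [(1, 1), (2, 1), (3, 1)],
       1: [(0, 1)], 2: [(0, 1)], 3: [(0, 1)]}
print(steiner_mst_heuristic(adj, [1, 2, 3]))   # 4
```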
Export
BibTeX
@techreport{Zelikovsky92b,
TITLE = {A faster 11/6-approximation algorithm for the Steiner tree problem in graphs},
AUTHOR = {Zelikovsky, Alexander},
LANGUAGE = {eng},
NUMBER = {MPI-I-92-122},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1992},
DATE = {1992},
ABSTRACT = {The Steiner problem requires a shortest tree spanning a given vertex subset $S$ within graph $G=(V,E)$. There are two 11/6-approximation algorithms with running time $O(VE+VS^2+S^4)$ and $O(VE+VS^2+S^{3+{1\over 2}})$, respectively. Now we decrease the implementation time to $O(ES+VS^2+V\log V)$.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Zelikovsky, Alexander
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T A faster 11/6-approximation algorithm for the Steiner tree problem in graphs :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B6F7-8
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1992
%P 8 p.
%X The Steiner problem requires a shortest tree spanning a given
vertex subset $S$ within graph $G=(V,E)$. There are
two 11/6-approximation
algorithms with running time $O(VE+VS^2+S^4)$ and
$O(VE+VS^2+S^{3+{1\over 2}})$, respectively. Now we decrease
the implementation time to $O(ES+VS^2+V\log V)$.
%B Research Report / Max-Planck-Institut für Informatik
1991
An o(n³)-time maximum-flow algorithm
J. Cheriyan, T. Hagerup and K. Mehlhorn
Technical Report, 1991a
J. Cheriyan, T. Hagerup and K. Mehlhorn
Technical Report, 1991a
Abstract
We show that a maximum flow in a network with $n$ vertices
can be computed deterministically in $O({{n^3}/{\log n}})$
time on a uniform-cost RAM.
For dense graphs, this improves the
previous best bound of $O(n^3)$.
The bottleneck in our algorithm is a combinatorial
problem on (unweighted) graphs.
The number of operations executed on flow variables is
$O(n^{8/3}(\log n)^{4/3})$,
in contrast with
$\Omega(nm)$ flow operations for all previous algorithms,
where $m$ denotes the number of edges in the network.
A randomized version of our algorithm executes
$O(n^{3/2}m^{1/2}\log n+n^2(\log n)^2/
\log(2+n(\log n)^2/m))$
flow operations with high probability.
For the special case in which
all capacities are integers bounded by $U$,
we show that a maximum flow can be computed
Export
BibTeX
@techreport{CheriyanHagerupMehlhorn91,
TITLE = {An o(n\mbox{$^3$})-time maximum-flow algorithm},
AUTHOR = {Cheriyan, Joseph and Hagerup, Torben and Mehlhorn, Kurt},
LANGUAGE = {eng},
NUMBER = {MPI-I-91-120},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1991},
DATE = {1991},
ABSTRACT = {We show that a maximum flow in a network with $n$ vertices can be computed deterministically in $O({{n^3}/{\log n}})$ time on a uniform-cost RAM. For dense graphs, this improves the previous best bound of $O(n^3)$. The bottleneck in our algorithm is a combinatorial problem on (unweighted) graphs. The number of operations executed on flow variables is $O(n^{8/3}(\log n)^{4/3})$, in contrast with $\Omega(nm)$ flow operations for all previous algorithms, where $m$ denotes the number of edges in the network. A randomized version of our algorithm executes $O(n^{3/2}m^{1/2}\log n+n^2(\log n)^2/ \log(2+n(\log n)^2/m))$ flow operations with high probability. For the special case in which all capacities are integers bounded by $U$, we show that a maximum flow can be computed},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Cheriyan, Joseph
%A Hagerup, Torben
%A Mehlhorn, Kurt
%+ External Organizations
External Organizations
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T An o(n³)-time maximum-flow algorithm :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B08A-3
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1991
%P 30 p.
%X We show that a maximum flow in a network with $n$ vertices
can be computed deterministically in $O({{n^3}/{\log n}})$
time on a uniform-cost RAM.
For dense graphs, this improves the
previous best bound of $O(n^3)$.
The bottleneck in our algorithm is a combinatorial
problem on (unweighted) graphs.
The number of operations executed on flow variables is
$O(n^{8/3}(\log n)^{4/3})$,
in contrast with
$\Omega(nm)$ flow operations for all previous algorithms,
where $m$ denotes the number of edges in the network.
A randomized version of our algorithm executes
$O(n^{3/2}m^{1/2}\log n+n^2(\log n)^2/
\log(2+n(\log n)^2/m))$
flow operations with high probability.
For the special case in which
all capacities are integers bounded by $U$,
we show that a maximum flow can be computed
%B Research Report / Max-Planck-Institut für Informatik
A lower bound for the nondeterministic space complexity of contextfree recognition
H. Alt, V. Geffert and K. Mehlhorn
Technical Report, 1991b
H. Alt, V. Geffert and K. Mehlhorn
Technical Report, 1991b
Export
BibTeX
@techreport{AltGeffertMehlhorn91,
TITLE = {A lower bound for the nondeterministic space complexity of contextfree recognition},
AUTHOR = {Alt, Helmut and Geffert, Viliam and Mehlhorn, Kurt},
LANGUAGE = {eng},
NUMBER = {MPI-I-91-115},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1991},
DATE = {1991},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Alt, Helmut
%A Geffert, Viliam
%A Mehlhorn, Kurt
%+ External Organizations
External Organizations
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T A lower bound for the nondeterministic space complexity of contextfree recognition :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-7B1A-7
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1991
%P 4 p.
%B Research Report / Max-Planck-Institut für Informatik
Algorithms for dense graphs and networks
J. Cheriyan and K. Mehlhorn
Technical Report, 1991
J. Cheriyan and K. Mehlhorn
Technical Report, 1991
Abstract
We improve upon the running time of several graph and network algorithms
when applied to dense graphs. In particular, we show how to compute on a
machine with word size $\lambda$ a maximal matching in an $n$--vertex
bipartite graph in time $O(n^{2} + n^{2.5}/\lambda) = O(n^{2.5}/\log n)$,
how to compute the transitive closure of a digraph with $n$ vertices and
$m$ edges in time $O(nm/\lambda)$, how to solve the uncapacitated transportation
problem with integer costs in the range $[0..C]$ and integer demands in
the range $[-U..U]$ in time $O((n^3(\log\log n/\log n)^{1/2} + n^2 \log U)\log nC)$,
and how to solve
the assignment problem with integer costs in the range $[0..C]$ in
time $O(n^{2.5}\log nC/(\log n/\log \log n)^{1/4})$.
\\
Assuming a suitably compressed input, we also show how to do depth--first and
breadth--first search and how to compute strongly connected components and
biconnected components in time $O(n\lambda + n^2/\lambda)$, and how to solve
the single source shortest path problem with integer costs in the range
$[0..C]$ in time $O(n^2(\log C)/\log n)$.
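The word-parallelism exploited here (word size $\lambda$) can be mimicked with Python's arbitrary-precision integers used as bit sets: row $i$ of the reachability matrix is an int whose bit $j$ means "$i$ reaches $j$", so a union of rows costs $O(n/\lambda)$ word operations instead of $O(n)$. A sketch in that spirit, not the report's actual algorithm:

```python
# Transitive closure with bitset rows: iterate "OR in each direct
# successor's row" to a fixpoint.

def transitive_closure(n, edges):
    reach = [1 << i for i in range(n)]      # bit j of reach[i]: i reaches j
    succ = [0] * n
    for u, v in edges:
        succ[u] |= 1 << v
    changed = True
    while changed:                          # iterate to a fixpoint
        changed = False
        for u in range(n):
            new = reach[u]
            r = succ[u]
            while r:                        # for each direct successor v...
                lsb = r & -r
                new |= reach[lsb.bit_length() - 1]   # ...OR in v's whole row
                r ^= lsb
            if new != reach[u]:
                reach[u] = new
                changed = True
    return reach

r = transitive_closure(4, [(0, 1), (1, 2), (2, 3)])
print(bin(r[0]))   # 0b1111: vertex 0 reaches 0, 1, 2, 3
```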
Export
BibTeX
@techreport{CheriyanMehlhorn91,
TITLE = {Algorithms for dense graphs and networks},
AUTHOR = {Cheriyan, Joseph and Mehlhorn, Kurt},
LANGUAGE = {eng},
NUMBER = {MPI-I-91-114},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1991},
DATE = {1991},
ABSTRACT = {We improve upon the running time of several graph and network algorithms when applied to dense graphs. In particular, we show how to compute on a machine with word size $\lambda$ a maximal matching in an $n$--vertex bipartite graph in time $O(n^{2} + n^{2.5}/\lambda) = O(n^{2.5}/\log n)$, how to compute the transitive closure of a digraph with $n$ vertices and $m$ edges in time $O(nm/\lambda)$, how to solve the uncapacitated transportation problem with integer costs in the range $[0..C]$ and integer demands in the range $[-U..U]$ in time $O((n^3(\log\log n/\log n)^{1/2} + n^2 \log U)\log nC)$, and how to solve the assignment problem with integer costs in the range $[0..C]$ in time $O(n^{2.5}\log nC/(\log n/\log \log n)^{1/4})$. \\ Assuming a suitably compressed input, we also show how to do depth--first and breadth--first search and how to compute strongly connected components and biconnected components in time $O(n\lambda + n^2/\lambda)$, and how to solve the single source shortest path problem with integer costs in the range $[0..C]$ in time $O(n^2(\log C)/\log n)$.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Cheriyan, Joseph
%A Mehlhorn, Kurt
%+ External Organizations
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Algorithms for dense graphs and networks :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-7B17-D
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1991
%P 29 p.
%X We improve upon the running time of several graph and network algorithms
when applied to dense graphs. In particular, we show how to compute on a
machine with word size $\lambda$ a maximal matching in an $n$-vertex
bipartite graph in time $O(n^{2} + n^{2.5}/\lambda) = O(n^{2.5}/\log n)$,
how to compute the transitive closure of a digraph with $n$ vertices and
$m$ edges in time $O(nm/\lambda)$, how to solve the uncapacitated transportation
problem with integer costs in the range $[0..C]$ and integer demands in
the range $[-U..U]$ in time $O((n^3(\log\log n/\log n)^{1/2} + n^2 \log U)\log nC)$,
and how to solve
the assignment problem with integer costs in the range $[0..C]$ in
time $O(n^{2.5}\log nC/(\log n/\log \log n)^{1/4})$.
\\
Assuming a suitably compressed input, we also show how to do depth-first and
breadth-first search and how to compute strongly connected components and
biconnected components in time $O(n\lambda + n^2/\lambda)$, and how to solve
the single source shortest path problem with integer costs in the range
$[0..C]$ in time $O(n^2(\log C)/\log n)$.
%B Research Report / Max-Planck-Institut für Informatik
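The word-parallel idea behind the $O(nm/\lambda)$ transitive-closure bound is to pack each reachability set into machine words, so that a single OR updates a whole row at once. A minimal sequential sketch of that idea (illustrative only, not the report's algorithm), using Python's arbitrary-precision integers as bitsets:

```python
def transitive_closure(n, edges):
    """Reachability via bitset rows: bit w of reach[v] is set iff v reaches w.

    Each |= below ORs an entire row in one word-parallel step, mirroring
    the O(nm/lambda) idea for machine word size lambda.
    """
    reach = [1 << v for v in range(n)]  # every vertex reaches itself
    changed = True
    while changed:                      # iterate edge relaxations to a fixpoint
        changed = False
        for v, w in edges:
            new = reach[v] | reach[w]
            if new != reach[v]:
                reach[v] = new
                changed = True
    return reach

# 0 -> 1 -> 2, vertex 3 isolated
rows = transitive_closure(4, [(0, 1), (1, 2)])
assert rows == [0b0111, 0b0110, 0b0100, 0b1000]
```

The fixpoint loop is the simplest correct formulation; the word-level parallelism lives entirely in the single `|` per edge.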
Simultaneous inner and outer approximation of shapes
R. Fleischer, K. Mehlhorn, G. Rote and E. Welzl
Technical Report, 1991
R. Fleischer, K. Mehlhorn, G. Rote and E. Welzl
Technical Report, 1991
Abstract
For compact Euclidean bodies $P,Q$, we define $\lambda(P,Q)$
to be the smallest ratio $r/s$ where $r > 0$, $s > 0$ satisfy
$sQ' \subseteq P \subseteq$ $rQ''$. Here $sQ$
denotes a scaling of $Q$ by factor $s$, and $Q', Q''$
are some translates of $Q$. This function $\lambda$ gives us a
new distance function between bodies which, unlike previously studied
measures, is invariant under affine transformations. If homothetic
bodies are identified, the logarithm of this function is a metric.
(Two bodies are {\sl homothetic} if one can be obtained from the other
by scaling and translation).
Export
BibTeX
@techreport{FleischerMehlhornRoteWelzl91,
TITLE = {Simultaneous inner and outer approximation of shapes},
AUTHOR = {Fleischer, Rudolf and Mehlhorn, Kurt and Rote, G{\"u}nter and Welzl, Emo},
LANGUAGE = {eng},
NUMBER = {MPI-I-91-105},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1991},
DATE = {1991},
ABSTRACT = {For compact Euclidean bodies $P,Q$, we define $\lambda(P,Q)$ to be the smallest ratio $r/s$ where $r > 0$, $s > 0$ satisfy $sQ' \subseteq P \subseteq$ $rQ''$. Here $sQ$ denotes a scaling of $Q$ by factor $s$, and $Q', Q''$ are some translates of $Q$. This function $\lambda$ gives us a new distance function between bodies which, unlike previously studied measures, is invariant under affine transformations. If homothetic bodies are identified, the logarithm of this function is a metric. (Two bodies are {\sl homothetic} if one can be obtained from the other by scaling and translation).},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Fleischer, Rudolf
%A Mehlhorn, Kurt
%A Rote, Günter
%A Welzl, Emo
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Simultaneous inner and outer approximation of shapes :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-7B05-6
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1991
%P 24 p.
%X For compact Euclidean bodies $P,Q$, we define $\lambda(P,Q)$
to be the smallest ratio $r/s$ where $r > 0$, $s > 0$ satisfy
$sQ' \subseteq P \subseteq$ $rQ''$. Here $sQ$
denotes a scaling of $Q$ by factor $s$, and $Q', Q''$
are some translates of $Q$. This function $\lambda$ gives us a
new distance function between bodies which, unlike previously studied
measures, is invariant under affine transformations. If homothetic
bodies are identified, the logarithm of this function is a metric.
(Two bodies are {\sl homothetic} if one can be obtained from the other
by scaling and translation).
%B Research Report / Max-Planck-Institut für Informatik
A tight lower bound for the worst case of bottom-up-heapsort
R. Fleischer
Technical Report, 1991
R. Fleischer
Technical Report, 1991
Abstract
Bottom-Up-Heapsort is a variant of Heapsort.
Its worst case complexity for the number of comparisons
is known to be bounded from above by ${3\over2}n\log n+O(n)$, where $n$
is the number of elements to be sorted.
There is also an example of a heap which needs ${5\over4}n\log n-
O(n\log\log n)$ comparisons.
We show in this paper that the upper bound is asymptotically tight,
i.e.~we prove for large $n$ the existence of heaps which need at least
$c_n\cdot({3\over2}n\log n-O(n\log\log n))$ comparisons
where $c_n=1-{1\over\log^2n}$ converges to 1.
This result also proves the old conjecture that the best case
for classical Heapsort needs only asymptotically $n\log n+O(n\log\log n)$
comparisons.
Export
BibTeX
@techreport{Fleischer91,
TITLE = {A tight lower bound for the worst case of bottom-up-heapsort},
AUTHOR = {Fleischer, Rudolf},
LANGUAGE = {eng},
NUMBER = {MPI-I-91-104},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1991},
DATE = {1991},
ABSTRACT = {Bottom-Up-Heapsort is a variant of Heapsort. Its worst case complexity for the number of comparisons is known to be bounded from above by ${3\over2}n\log n+O(n)$, where $n$ is the number of elements to be sorted. There is also an example of a heap which needs ${5\over4}n\log n- O(n\log\log n)$ comparisons. We show in this paper that the upper bound is asymptotically tight, i.e.~we prove for large $n$ the existence of heaps which need at least $c_n\cdot({3\over2}n\log n-O(n\log\log n))$ comparisons where $c_n=1-{1\over\log^2n}$ converges to 1. This result also proves the old conjecture that the best case for classical Heapsort needs only asymptotically $n\log n+O(n\log\log n)$ comparisons.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Fleischer, Rudolf
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T A tight lower bound for the worst case of bottom-up-heapsort :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-7B02-C
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1991
%P 13 p.
%X Bottom-Up-Heapsort is a variant of Heapsort.
Its worst case complexity for the number of comparisons
is known to be bounded from above by ${3\over2}n\log n+O(n)$, where $n$
is the number of elements to be sorted.
There is also an example of a heap which needs ${5\over4}n\log n-
O(n\log\log n)$ comparisons.
We show in this paper that the upper bound is asymptotically tight,
i.e.~we prove for large $n$ the existence of heaps which need at least
$c_n\cdot({3\over2}n\log n-O(n\log\log n))$ comparisons
where $c_n=1-{1\over\log^2n}$ converges to 1.
This result also proves the old conjecture that the best case
for classical Heapsort needs only asymptotically $n\log n+O(n\log\log n)$
comparisons.
%B Research Report / Max-Planck-Institut für Informatik
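The sift-down variant behind these bounds proceeds in two phases: first follow the path of larger children down to a leaf, spending one comparison per level, then climb back up to find the slot for the root element. A plain Python sketch with a comparison counter (illustrative only; the counter is not tuned to reproduce the ${3\over2}n\log n+O(n)$ analysis exactly):

```python
def bottom_up_heapsort(a):
    """In-place max-heap sort; returns the number of key comparisons made."""
    comps = 0
    n = len(a)

    def sift_down(i, end):
        nonlocal comps
        # Phase 1: follow the path of larger children down to a leaf.
        j = i
        while 2 * j + 2 < end:
            comps += 1
            j = 2 * j + 1 if a[2 * j + 1] > a[2 * j + 2] else 2 * j + 2
        if 2 * j + 1 < end:
            j = 2 * j + 1
        # Phase 2: climb back up to the correct slot for a[i].
        while j > i:
            comps += 1
            if a[j] >= a[i]:
                break
            j = (j - 1) // 2
        # Rotate a[i] into slot j, shifting the path elements up one level.
        x, a[j] = a[j], a[i]
        while j > i:
            j = (j - 1) // 2
            a[j], x = x, a[j]

    for i in range(n // 2 - 1, -1, -1):   # heapify
        sift_down(i, n)
    for end in range(n - 1, 0, -1):       # repeatedly extract the maximum
        a[0], a[end] = a[end], a[0]
        sift_down(0, end)
    return comps

data = [5, 3, 8, 1, 9, 2]
bottom_up_heapsort(data)
assert data == [1, 2, 3, 5, 8, 9]
```

The savings over classical Heapsort come from Phase 1 using one comparison per level instead of two.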
Fast parallel space allocation, estimation and integer sorting
T. Hagerup
Technical Report, 1991a
T. Hagerup
Technical Report, 1991a
Abstract
We show that each of the following problems can be solved
fast and with optimal speedup with high probability on a
randomized CRCW PRAM using
$O(n)$ space:
Space allocation: Given $n$ nonnegative integers
$x_1,\ldots,x_n$, allocate $n$ blocks of consecutive
memory cells of sizes $x_1,\ldots,x_n$ from a base
segment of $O(\sum_{i=1}^n x_i)$ consecutive
memory cells;
Estimation: Given $n$ integers
in the range $1\Ttwodots n$, compute ``good'' estimates
of the number of occurrences of each value
in the range $1\Ttwodots n$;
Integer chain-sorting: Given $n$ integers $x_1,\ldots,x_n$
in the range $1\Ttwodots n$, construct a linked list
containing the integers $1,\ldots,n$ such that for all
$i,j\in\{1,\ldots,n\}$, if $i$ precedes $j$ in the
list, then $x_i\le x_j$.
\noindent
The running times achieved are $O(\Tlogstar n)$ for
problem (1) and $O((\Tlogstar n)^2)$ for
problems (2) and~(3).
Moreover, given slightly superlinear processor
and space bounds, these problems or variations
of them can be solved in
constant expected time.
Export
BibTeX
@techreport{Hagerup91a,
TITLE = {Fast parallel space allocation, estimation and integer sorting},
AUTHOR = {Hagerup, Torben},
LANGUAGE = {eng},
NUMBER = {MPI-I-91-106},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1991},
DATE = {1991},
ABSTRACT = {We show that each of the following problems can be solved fast and with optimal speedup with high probability on a randomized CRCW PRAM using $O(n)$ space: Space allocation: Given $n$ nonnegative integers $x_1,\ldots,x_n$, allocate $n$ blocks of consecutive memory cells of sizes $x_1,\ldots,x_n$ from a base segment of $O(\sum_{i=1}^n x_i)$ consecutive memory cells; Estimation: Given $n$ integers in the range $1\Ttwodots n$, compute ``good'' estimates of the number of occurrences of each value in the range $1\Ttwodots n$; Integer chain-sorting: Given $n$ integers $x_1,\ldots,x_n$ in the range $1\Ttwodots n$, construct a linked list containing the integers $1,\ldots,n$ such that for all $i,j\in\{1,\ldots,n\}$, if $i$ precedes $j$ in the list, then $x_i\le x_j$. \noindent The running times achieved are $O(\Tlogstar n)$ for problem (1) and $O((\Tlogstar n)^2)$ for problems (2) and~(3). Moreover, given slightly superlinear processor and space bounds, these problems or variations of them can be solved in constant expected time.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Hagerup, Torben
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Fast parallel space allocation, estimation and integer sorting :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-7B08-F
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1991
%P 28 p.
%X We show that each of the following problems can be solved
fast and with optimal speedup with high probability on a
randomized CRCW PRAM using
$O(n)$ space:
Space allocation: Given $n$ nonnegative integers
$x_1,\ldots,x_n$, allocate $n$ blocks of consecutive
memory cells of sizes $x_1,\ldots,x_n$ from a base
segment of $O(\sum_{i=1}^n x_i)$ consecutive
memory cells;
Estimation: Given $n$ integers
in the range $1\Ttwodots n$, compute ``good'' estimates
of the number of occurrences of each value
in the range $1\Ttwodots n$;
Integer chain-sorting: Given $n$ integers $x_1,\ldots,x_n$
in the range $1\Ttwodots n$, construct a linked list
containing the integers $1,\ldots,n$ such that for all
$i,j\in\{1,\ldots,n\}$, if $i$ precedes $j$ in the
list, then $x_i\le x_j$.
\noindent
The running times achieved are $O(\Tlogstar n)$ for
problem (1) and $O((\Tlogstar n)^2)$ for
problems (2) and~(3).
Moreover, given slightly superlinear processor
and space bounds, these problems or variations
of them can be solved in
constant expected time.
%B Research Report / Max-Planck-Institut für Informatik
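Sequentially, the space-allocation problem is just an exclusive prefix sum over the block sizes; the report's contribution is achieving this (and the estimation and chain-sorting problems) with optimal speedup in near-constant time on a randomized CRCW PRAM. A sequential sketch of the allocation step itself, for reference:

```python
from itertools import accumulate

def allocate(sizes):
    """Return the starting offset of each block within one base segment.

    Exclusive prefix sum: block i occupies cells [off[i], off[i] + sizes[i]).
    """
    return [0] + list(accumulate(sizes))[:-1]

offsets = allocate([3, 0, 5, 2])
assert offsets == [0, 3, 3, 8]   # blocks are disjoint and packed
```

The total segment length is simply `sum(sizes)`, matching the $O(\sum_{i=1}^n x_i)$ base segment in the problem statement.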
On a compaction theorem of Ragde
T. Hagerup
Technical Report, 1991b
T. Hagerup
Technical Report, 1991b
Abstract
Ragde demonstrated that in constant time
a PRAM with $n$ processors
can move at most $k$ items, stored in distinct cells
of an array of size $n$, to distinct cells in an
array of size at most $k^4$.
We show that the exponent of 4 in the preceding
sentence can be replaced by any constant
greater than~2.
Export
BibTeX
@techreport{Hagerup91b,
TITLE = {On a compaction theorem of Ragde},
AUTHOR = {Hagerup, Torben},
LANGUAGE = {eng},
NUMBER = {MPI-I-91-121},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1991},
DATE = {1991},
ABSTRACT = {Ragde demonstrated that in constant time a PRAM with $n$ processors can move at most $k$ items, stored in distinct cells of an array of size $n$, to distinct cells in an array of size at most $k^4$. We show that the exponent of 4 in the preceding sentence can be replaced by any constant greater than~2.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Hagerup, Torben
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T On a compaction theorem of Ragde :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B053-E
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1991
%P 6 p.
%X Ragde demonstrated that in constant time
a PRAM with $n$ processors
can move at most $k$ items, stored in distinct cells
of an array of size $n$, to distinct cells in an
array of size at most $k^4$.
We show that the exponent of 4 in the preceding
sentence can be replaced by any constant
greater than~2.
%B Research Report / Max-Planck-Institut für Informatik
Approximate decision algorithms for point set congruence
P. J. Heffernan and S. Schirra
Technical Report, 1991
P. J. Heffernan and S. Schirra
Technical Report, 1991
Abstract
This paper considers the computer vision problem of testing whether
two equal cardinality point sets $A$ and $B$ in the plane are
$\varepsilon$-congruent.
We say that $A$ and $B$ are $\varepsilon$-congruent if there exists
an isometry $I$ and bijection $\ell : A \rightarrow B$ such that
$dist(\ell(a),I(a)) \leq \varepsilon$, for all $a
\in A$. Since known methods for this problem are expensive, we develop
approximate decision algorithms that are considerably faster
than the known decision algorithms, and have bounds on their
imprecision. Our approach reduces the problem to that of computing
maximum flows on a series of graphs with integral capacities.
Export
BibTeX
@techreport{HeffernanSchirra91,
TITLE = {Approximate decision algorithms for point set congruence},
AUTHOR = {Heffernan, Paul J. and Schirra, Stefan},
LANGUAGE = {eng},
NUMBER = {MPI-I-91-110},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1991},
DATE = {1991},
ABSTRACT = {This paper considers the computer vision problem of testing whether two equal cardinality point sets $A$ and $B$ in the plane are $\varepsilon$-congruent. We say that $A$ and $B$ are $\varepsilon$-congruent if there exists an isometry $I$ and bijection $\ell : A \rightarrow B$ such that $dist(\ell(a),I(a)) \leq \varepsilon$, for all $a \in A$. Since known methods for this problem are expensive, we develop approximate decision algorithms that are considerably faster than the known decision algorithms, and have bounds on their imprecision. Our approach reduces the problem to that of computing maximum flows on a series of graphs with integral capacities.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Heffernan, Paul J.
%A Schirra, Stefan
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Approximate decision algorithms for point set congruence :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-7B0E-3
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1991
%P 25 p.
%X This paper considers the computer vision problem of testing whether
two equal cardinality point sets $A$ and $B$ in the plane are
$\varepsilon$-congruent.
We say that $A$ and $B$ are $\varepsilon$-congruent if there exists
an isometry $I$ and bijection $\ell : A \rightarrow B$ such that
$dist(\ell(a),I(a)) \leq \varepsilon$, for all $a
\in A$. Since known methods for this problem are expensive, we develop
approximate decision algorithms that are considerably faster
than the known decision algorithms, and have bounds on their
imprecision. Our approach reduces the problem to that of computing
maximum flows on a series of graphs with integral capacities.
%B Research Report / Max-Planck-Institut für Informatik
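For a fixed isometry, the $\varepsilon$-congruence decision is a perfect-matching question on the bipartite graph joining each $a \in A$ to the points of $B$ within distance $\varepsilon$, i.e. a unit-capacity maximum-flow problem. A sketch with the identity isometry and simple augmenting paths (helper names are hypothetical; searching over isometries is the expensive part that the report's approximate algorithms address):

```python
from math import dist

def eps_matchable(A, B, eps):
    """True iff some bijection pairs each a in A with a b in B at distance <= eps.

    Decision for a *fixed* isometry (here: identity), via augmenting-path
    bipartite matching, which is unit-capacity max flow.
    """
    n = len(A)
    adj = [[j for j in range(n) if dist(A[i], B[j]) <= eps] for i in range(n)]
    match = [-1] * n  # match[j] = index in A currently matched to B[j]

    def augment(i, seen):
        for j in adj[i]:
            if j not in seen:
                seen.add(j)
                if match[j] == -1 or augment(match[j], seen):
                    match[j] = i
                    return True
        return False

    return all(augment(i, set()) for i in range(n))

A = [(0, 0), (1, 0)]
B = [(0.1, 0), (0.9, 0.1)]
assert eps_matchable(A, B, 0.2)
assert not eps_matchable(A, B, 0.05)
```

Running this decision for each candidate transformation is what makes exact methods expensive, motivating the approximate algorithms of the report.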
On embeddings in cycles
J. Hromkovic, V. Müller, O. Sýkora and I. Vrto
Technical Report, 1991
J. Hromkovic, V. Müller, O. Sýkora and I. Vrto
Technical Report, 1991
Abstract
We prove several exact results for the
dilation of well-known interconnection networks in cycles, namely :
${\rm dil}(T_{t,r},C_{(t^{r-1}-1)/(r-1)})=\lceil
t(t^{r-1}-1)/(2(r-1)(t-1))\rceil,$ for complete $r$-level $t$-ary
trees,
${\rm dil}(Q_n,C_{2^n})
=\sum_{k=0}^{n-1}{k\choose \lfloor \frac{k}{2}\rfloor
},$ for $n$-dimensional hypercubes,
${\rm dil}(P_n\times
P_n\times P_n,C_{n^3})=
\lfloor 3n^2/4+n/2\rfloor,$ for 3-dimensional meshes
(where $P_n$ is an $n$-vertex path) and
${\rm
dil}(P_m\times P_n,C_{mn})=
{\rm dil}(C_m\times P_n,C_{mn})={\rm dil}(C_m\times
C_n,C_{mn})=\min\{m,n\},$ for 2-dimensional ordinary, cylindrical and
toroidal meshes, respectively. The last results
solve three remaining open
problems of the type $"{\rm dil}(X\times Y, Z)=?"$, where $X,\ Y$ and
$Z$ are paths or cycles. The previously known dilations are:
${\rm
dil}(P_m\times P_n,P_{mn})=
\min \{m,n\}$,
${\rm dil}(C_m\times P_n,P_{mn})=\min \{m,2n\}$
and ${\rm dil}(C_m\times C_n,P_{mn})
=2\min \{m,n\}$, if $m\neq n$,
otherwise
${\rm dil}(C_n\times C_n,P_{n^2})=2n-1$.
The proofs of the above stated results are
based on the following
technique.
We find a sufficient
condition for a graph $G$ which assures
the equality ${\rm dil}(G,C_n)={\rm dil}(G,P_n)$. We prove that
trees, X-trees, meshes, hypercubes, pyramids and trees of meshes
satisfy the condition. Using known optimal dilations of complete
trees, hypercubes and 2- and 3-dimensional meshes in paths we get the
above exact results.
Export
BibTeX
@techreport{HromkovicMuellerSykoraVrto91,
TITLE = {On embeddings in cycles},
AUTHOR = {Hromkovic, Juraj and M{\"u}ller, Vladim{\'i}r and S{\'y}kora, Ondrej and Vrto, Imrich},
LANGUAGE = {eng},
NUMBER = {MPI-I-91-122},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1991},
DATE = {1991},
ABSTRACT = {We prove several exact results for the dilation of well-known interconnection networks in cycles, namely : ${\rm dil}(T_{t,r},C_{(t^{r-1}-1)/(r-1)})=\lceil t(t^{r-1}-1)/(2(r-1)(t-1))\rceil,$ for complete $r$-level $t$-ary trees, ${\rm dil}(Q_n,C_{2^n}) =\sum_{k=0}^{n-1}{k\choose \lfloor \frac{k}{2}\rfloor },$ for $n$-dimensional hypercubes, ${\rm dil}(P_n\times P_n\times P_n,C_{n^3})= \lfloor 3n^2/4+n/2\rfloor,$ for 3-dimensional meshes (where $P_n$ is an $n$-vertex path) and ${\rm dil}(P_m\times P_n,C_{mn})= {\rm dil}(C_m\times P_n,C_{mn})={\rm dil}(C_m\times C_n,C_{mn})=\min\{m,n\},$ for 2-dimensional ordinary, cylindrical and toroidal meshes, respectively. The last results solve three remaining open problems of the type $"{\rm dil}(X\times Y, Z)=?"$, where $X,\ Y$ and $Z$ are paths or cycles. The previously known dilations are: ${\rm dil}(P_m\times P_n,P_{mn})= \min \{m,n\}$, ${\rm dil}(C_m\times P_n,P_{mn})=\min \{m,2n\}$ and ${\rm dil}(C_m\times C_n,P_{mn}) =2\min \{m,n\}$, if $m\neq n$, otherwise ${\rm dil}(C_n\times C_n,P_{n^2})=2n-1$. The proofs of the above stated results are based on the following technique. We find a sufficient condition for a graph $G$ which assures the equality ${\rm dil}(G,C_n)={\rm dil}(G,P_n)$. We prove that trees, X-trees, meshes, hypercubes, pyramids and trees of meshes satisfy the condition. Using known optimal dilations of complete trees, hypercubes and 2- and 3-dimensional meshes in paths we get the above exact results.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Hromkovic, Juraj
%A Müller, Vladimír
%A Sýkora, Ondrej
%A Vrto, Imrich
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T On embeddings in cycles :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B056-8
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1991
%P 22 p.
%X We prove several exact results for the
dilation of well-known interconnection networks in cycles, namely :
${\rm dil}(T_{t,r},C_{(t^{r-1}-1)/(r-1)})=\lceil
t(t^{r-1}-1)/(2(r-1)(t-1))\rceil,$ for complete $r$-level $t$-ary
trees,
${\rm dil}(Q_n,C_{2^n})
=\sum_{k=0}^{n-1}{k\choose \lfloor \frac{k}{2}\rfloor
},$ for $n$-dimensional hypercubes,
${\rm dil}(P_n\times
P_n\times P_n,C_{n^3})=
\lfloor 3n^2/4+n/2\rfloor,$ for 3-dimensional meshes
(where $P_n$ is an $n$-vertex path) and
${\rm
dil}(P_m\times P_n,C_{mn})=
{\rm dil}(C_m\times P_n,C_{mn})={\rm dil}(C_m\times
C_n,C_{mn})=\min\{m,n\},$ for 2-dimensional ordinary, cylindrical and
toroidal meshes, respectively. The last results
solve three remaining open
problems of the type $"{\rm dil}(X\times Y, Z)=?"$, where $X,\ Y$ and
$Z$ are paths or cycles. The previously known dilations are:
${\rm
dil}(P_m\times P_n,P_{mn})=
\min \{m,n\}$,
${\rm dil}(C_m\times P_n,P_{mn})=\min \{m,2n\}$
and ${\rm dil}(C_m\times C_n,P_{mn})
=2\min \{m,n\}$, if $m\neq n$,
otherwise
${\rm dil}(C_n\times C_n,P_{n^2})=2n-1$.
The proofs of the above stated results are
based on the following
technique.
We find a sufficient
condition for a graph $G$ which assures
the equality ${\rm dil}(G,C_n)={\rm dil}(G,P_n)$. We prove that
trees, X-trees, meshes, hypercubes, pyramids and trees of meshes
satisfy the condition. Using known optimal dilations of complete
trees, hypercubes and 2- and 3-dimensional meshes in paths we get the
above exact results.
%B Research Report / Max-Planck-Institut für Informatik
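For tiny instances, the stated dilations can be sanity-checked by brute force over all cyclic arrangements. A sketch (hypothetical helpers, exponential in the number of vertices, so usable only for sanity checks) confirming ${\rm dil}(P_2\times P_3,C_6)=\min\{2,3\}=2$:

```python
from itertools import permutations

def dilation(n_cycle, edges, placement):
    """Maximum cyclic distance to which any edge is stretched."""
    pos = {v: i for i, v in enumerate(placement)}
    def cyc(u, v):
        d = abs(pos[u] - pos[v])
        return min(d, n_cycle - d)
    return max(cyc(u, v) for u, v in edges)

def min_dilation_in_cycle(m, n):
    """Brute-force dil(P_m x P_n, C_{mn}) over all embeddings into the cycle."""
    vertices = [(r, c) for r in range(m) for c in range(n)]
    edges = [((r, c), (r + dr, c + dc))
             for (r, c) in vertices for dr, dc in ((0, 1), (1, 0))
             if r + dr < m and c + dc < n]
    N = m * n
    return min(dilation(N, edges, p) for p in permutations(vertices))

assert min_dilation_in_cycle(2, 3) == 2  # matches the formula min{m, n}
```

Dilation 1 is impossible here by a counting argument: the 2x3 grid has 7 edges, but $C_6$ has only 6, and a column-major placement achieves 2.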
An optimal construction method for generalized convex layers
H.-P. Lenhof and M. Smid
Technical Report, 1991
H.-P. Lenhof and M. Smid
Technical Report, 1991
Abstract
Let $P$ be a set of $n$ points in the Euclidean plane
and let $C$ be a convex figure.
In 1985, Chazelle and Edelsbrunner presented an algorithm,
which preprocesses $P$ such that for any query point $q$,
the points of $P$ in the translate $C+q$ can be retrieved
efficiently. Assuming that constant time suffices for deciding
the inclusion of a point in $C$, they provided
a space and query time optimal solution. Their algorithm
uses $O(n)$ space. A~query with output size $k$ can be solved in
$O(\log n + k)$ time.
The preprocessing step of their algorithm, however,
has time complexity $O(n^2)$.
We show
that the usage of a new construction method for layers
reduces the preprocessing time to $O(n \log n)$. We thus
provide the first space, query time and preprocessing time
optimal solution for this class of point retrieval problems.
In addition, we present two new dynamic data structures
for these problems. The
first dynamic data structure allows on-line insertions
and deletions of points in
$O((\log n)^2)$ time. In this dynamic data
structure, a query with output size~$k$ can be solved in
$O(\log n + k(\log n)^2)$ time.
The second dynamic data structure, which allows only
semi-online updates, has $O((\log n)^2)$ amortized
update time and $O(\log n+k)$ query time.
Export
BibTeX
@techreport{LenhofSmid91,
TITLE = {An optimal construction method for generalized convex layers},
AUTHOR = {Lenhof, Hans-Peter and Smid, Michiel},
LANGUAGE = {eng},
NUMBER = {MPI-I-91-112},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1991},
DATE = {1991},
ABSTRACT = {Let $P$ be a set of $n$ points in the Euclidean plane and let $C$ be a convex figure. In 1985, Chazelle and Edelsbrunner presented an algorithm, which preprocesses $P$ such that for any query point $q$, the points of $P$ in the translate $C+q$ can be retrieved efficiently. Assuming that constant time suffices for deciding the inclusion of a point in $C$, they provided a space and query time optimal solution. Their algorithm uses $O(n)$ space. A~query with output size $k$ can be solved in $O(\log n + k)$ time. The preprocessing step of their algorithm, however, has time complexity $O(n^2)$. We show that the usage of a new construction method for layers reduces the preprocessing time to $O(n \log n)$. We thus provide the first space, query time and preprocessing time optimal solution for this class of point retrieval problems. In addition, we present two new dynamic data structures for these problems. The first dynamic data structure allows on-line insertions and deletions of points in $O((\log n)^2)$ time. In this dynamic data structure, a query with output size~$k$ can be solved in $O(\log n + k(\log n)^2)$ time. The second dynamic data structure, which allows only semi-online updates, has $O((\log n)^2)$ amortized update time and $O(\log n+k)$ query time.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Lenhof, Hans-Peter
%A Smid, Michiel
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T An optimal construction method for generalized convex layers :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-7B11-A
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1991
%P 25 p.
%X Let $P$ be a set of $n$ points in the Euclidean plane
and let $C$ be a convex figure.
In 1985, Chazelle and Edelsbrunner presented an algorithm,
which preprocesses $P$ such that for any query point $q$,
the points of $P$ in the translate $C+q$ can be retrieved
efficiently. Assuming that constant time suffices for deciding
the inclusion of a point in $C$, they provided
a space and query time optimal solution. Their algorithm
uses $O(n)$ space. A~query with output size $k$ can be solved in
$O(\log n + k)$ time.
The preprocessing step of their algorithm, however,
has time complexity $O(n^2)$.
We show
that the usage of a new construction method for layers
reduces the preprocessing time to $O(n \log n)$. We thus
provide the first space, query time and preprocessing time
optimal solution for this class of point retrieval problems.
In addition, we present two new dynamic data structures
for these problems. The
first dynamic data structure allows on-line insertions
and deletions of points in
$O((\log n)^2)$ time. In this dynamic data
structure, a query with output size~$k$ can be solved in
$O(\log n + k(\log n)^2)$ time.
The second dynamic data structure, which allows only
semi-online updates, has $O((\log n)^2)$ amortized
update time and $O(\log n+k)$ query time.
%B Research Report / Max-Planck-Institut für Informatik
Tail estimates for the space complexity of randomized incremental algorithms
K. Mehlhorn, M. Sharir and E. Welzl
Technical Report, 1991
K. Mehlhorn, M. Sharir and E. Welzl
Technical Report, 1991
Abstract
We give tail estimates for the space complexity of randomized
incremental algorithms for line segment intersection in the plane.
For $n$ the number of segments and $m$ the number of intersections,
with $m\geq n \ln n \ln^{(3)} n$, there is a constant $c$ such that
the probability that the total space cost exceeds $c$ times the
expected space cost is $e^{-\Omega(m/(n\ln n))}$.
Export
BibTeX
@techreport{MehlhornSharirWelzl91,
TITLE = {Tail estimates for the space complexity of randomized incremental algorithms},
AUTHOR = {Mehlhorn, Kurt and Sharir, Micha and Welzl, Emo},
LANGUAGE = {eng},
NUMBER = {MPI-I-91-113},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1991},
DATE = {1991},
ABSTRACT = {We give tail estimates for the space complexity of randomized incremental algorithms for line segment intersection in the plane. For $n$ the number of segments and $m$ the number of intersections, with $m\geq n \ln n \ln^{(3)} n$, there is a constant $c$ such that the probability that the total space cost exceeds $c$ times the expected space cost is $e^{-\Omega(m/(n\ln n))}$.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Mehlhorn, Kurt
%A Sharir, Micha
%A Welzl, Emo
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
External Organizations
External Organizations
%T Tail estimates for the space complexity of randomized incremental algorithms :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-7B14-4
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1991
%P 8 p.
%X We give tail estimates for the space complexity of randomized
incremental algorithms for line segment intersection in the plane.
For $n$ the number of segments and $m$ the number of intersections,
with $m\geq n \ln n \ln^{(3)} n$, there is a constant $c$ such that
the probability that the total space cost exceeds $c$ times the
expected space cost is $e^{-\Omega(m/(n\ln n))}$.
%B Research Report / Max-Planck-Institut für Informatik
An optimal algorithm for the on-line closest-pair problem
C. Schwarz and M. Smid
Technical Report, 1991a
C. Schwarz and M. Smid
Technical Report, 1991a
Abstract
We give an algorithm that computes the closest pair in a set
of $n$ points in $k$-dimensional space on-line, in $O(n \log n)$
time. The algorithm only uses algebraic functions and, therefore,
is optimal. The algorithm maintains a hierarchical subdivision of $k$-space
into hyperrectangles, which is stored in a binary
tree. Centroids are used to maintain a balanced decomposition
of this tree.
Export
BibTeX
@techreport{SchwarzSmidSnoeyink91,
TITLE = {An optimal algorithm for the on-line closest-pair problem},
AUTHOR = {Schwarz, Christian and Smid, Michiel},
LANGUAGE = {eng},
NUMBER = {MPI-I-91-123},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1991},
DATE = {1991},
ABSTRACT = {We give an algorithm that computes the closest pair in a set of $n$ points in $k$-dimensional space on-line, in $O(n \log n)$ time. The algorithm only uses algebraic functions and, therefore, is optimal. The algorithm maintains a hierarchical subdivision of $k$-space into hyperrectangles, which is stored in a binary tree. Centroids are used to maintain a balanced decomposition of this tree.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Schwarz, Christian
%A Smid, Michiel
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T An optimal algorithm for the on-line closest-pair problem :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-B6DA-A
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1991
%P 11 p.
%X We give an algorithm that computes the closest pair in a set
of $n$ points in $k$-dimensional space on-line, in $O(n \log n)$
time. The algorithm only uses algebraic functions and, therefore,
is optimal. The algorithm maintains a hierarchical subdivision of $k$-space
into hyperrectangles, which is stored in a binary
tree. Centroids are used to maintain a balanced decomposition
of this tree.
%B Research Report / Max-Planck-Institut für Informatik
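As a baseline for the structure above, the naive on-line strategy rescans all previously inserted points on each insertion, spending $O(n)$ per update instead of the report's $O(\log n)$; a hypothetical sketch:

```python
from math import dist, inf

class NaiveClosestPair:
    """O(n)-per-insertion baseline; the report's hierarchical subdivision
    of k-space into hyperrectangles achieves O(log n) amortized instead."""

    def __init__(self):
        self.points = []
        self.best = inf
        self.pair = None

    def insert(self, p):
        for q in self.points:          # scan every point seen so far
            d = dist(p, q)
            if d < self.best:
                self.best, self.pair = d, (q, p)
        self.points.append(p)

cp = NaiveClosestPair()
for p in [(0, 0), (5, 5), (1, 1), (5, 4)]:
    cp.insert(p)
assert cp.pair == ((5, 5), (5, 4)) and cp.best == 1.0
```

Both versions report the same closest pair after every insertion; the entire difficulty lies in avoiding the linear scan while using only algebraic functions.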
An O(n log n log log n) algorithm for the on-line closest pair problem
C. Schwarz and M. Smid
Technical Report, 1991b
C. Schwarz and M. Smid
Technical Report, 1991b
Abstract
Let $V$ be a set of $n$ points in $k$-dimensional space.
It is shown how the closest pair in $V$ can be maintained
under insertions in
$O(\log n \log\log n)$
amortized time, using $O(n)$ space. Distances are measured in the
$L_{t}$-metric, where $1 \leq t \leq \infty$.
This gives an $O(n \log n \log\log n)$ time on-line algorithm
for computing the closest pair. The algorithm is based
on Bentley's logarithmic method for decomposable searching problems.
It uses a non-trivial extension of fractional cascading to
$k$-dimensional space. It is also shown how to extend
the method to maintain the closest pair during semi-online updates.
Then, the update time becomes $O((\log n)^{2})$, even in the worst case.
Export
BibTeX
@techreport{SchwarzSmid91,
TITLE = {An O(n log n log log n) algorithm for the on-line closest pair problem},
AUTHOR = {Schwarz, Christian and Smid, Michiel},
LANGUAGE = {eng},
NUMBER = {MPI-I-91-107},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1991},
DATE = {1991},
ABSTRACT = {Let $V$ be a set of $n$ points in $k$-dimensional space. It is shown how the closest pair in $V$ can be maintained under insertions in $O(\log n \log\log n)$ amortized time, using $O(n)$ space. Distances are measured in the $L_{t}$-metric, where $1 \leq t \leq \infty$. This gives an $O(n \log n \log\log n)$ time on-line algorithm for computing the closest pair. The algorithm is based on Bentley's logarithmic method for decomposable searching problems. It uses a non-trivial extension of fractional cascading to $k$-dimensional space. It is also shown how to extend the method to maintain the closest pair during semi-online updates. Then, the update time becomes $O((\log n)^{2})$, even in the worst case.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Schwarz, Christian
%A Smid, Michiel
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T An O(n log n log log n) algorithm for the on-line closest pair problem :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-7B0B-9
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1991
%P 21 p.
%X Let $V$ be a set of $n$ points in $k$-dimensional space.
It is shown how the closest pair in $V$ can be maintained
under insertions in
$O(\log n \log\log n)$
amortized time, using $O(n)$ space. Distances are measured in the
$L_{t}$-metric, where $1 \leq t \leq \infty$.
This gives an $O(n \log n \log\log n)$ time on-line algorithm
for computing the closest pair. The algorithm is based
on Bentley's logarithmic method for decomposable searching problems.
It uses a non-trivial extension of fractional cascading to
$k$-dimensional space. It is also shown how to extend
the method to maintain the closest pair during semi-online updates.
Then, the update time becomes $O((\log n)^{2})$, even in the worst case.
%B Research Report / Max-Planck-Institut für Informatik
Range trees with slack parameter
M. Smid
Technical Report, 1991a
M. Smid
Technical Report, 1991a
Abstract
Range trees with slack parameter were introduced by Mehlhorn as
a dynamic data structure for solving the orthogonal range
searching
problem. By varying the slack parameter, this structure gives
many trade-offs for the complexity measures. In this note, a
complete analysis is given for this data structure.
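As a point of reference for the abstract above (illustration only, not the report's structure), the one-dimensional special case of orthogonal range searching reduces to counting points in an interval over a sorted array; range trees generalize this to $k$ dimensions, and the slack parameter trades query time against update time:

```python
import bisect

class RangeCounter1D:
    """1-D orthogonal range counting over a static sorted array."""

    def __init__(self, xs):
        self.xs = sorted(xs)

    def count(self, lo, hi):
        # Number of stored x with lo <= x <= hi, in O(log n) time.
        return bisect.bisect_right(self.xs, hi) - bisect.bisect_left(self.xs, lo)
```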
Export
BibTeX
@techreport{Smid91b,
TITLE = {Range trees with slack parameter},
AUTHOR = {Smid, Michiel},
LANGUAGE = {eng},
NUMBER = {MPI-I-91-102},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1991},
DATE = {1991},
ABSTRACT = {Range trees with slack parameter were introduced by Mehlhorn as a dynamic data structure for solving the orthogonal range searching problem. By varying the slack parameter, this structure gives many trade-offs for the complexity measures. In this note, a complete analysis is given for this data structure.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Smid, Michiel
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Range trees with slack parameter :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-7AFD-2
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1991
%P 11 p.
%X Range trees with slack parameter were introduced by Mehlhorn as
a dynamic data structure for solving the orthogonal range
searching
problem. By varying the slack parameter, this structure gives
many trade-offs for the complexity measures. In this note, a
complete analysis is given for this data structure.
%B Research Report / Max-Planck-Institut für Informatik
Maintaining the minimal distance of a point set in polylogarithmic time (revised version)
M. Smid
Technical Report, 1991b
M. Smid
Technical Report, 1991b
Abstract
A dynamic data structure is given that maintains the
minimal distance in a set of $n$ points in $k$-dimensional
Export
BibTeX
@techreport{Smid91c,
TITLE = {Maintaining the minimal distance of a point set in polylogarithmic time (revised version)},
AUTHOR = {Smid, Michiel},
LANGUAGE = {eng},
NUMBER = {MPI-I-91-103},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1991},
DATE = {1991},
ABSTRACT = {A dynamic data structure is given that maintains the minimal distance in a set of $n$ points in $k$-dimensional},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Smid, Michiel
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Maintaining the minimal distance of a point set in polylogarithmic time (revised version) :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-7B00-0
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1991
%P 17 p.
%X A dynamic data structure is given that maintains the
minimal distance in a set of $n$ points in $k$-dimensional
%B Research Report / Max-Planck-Institut für Informatik
Dynamic rectangular point location, with an application to the closest pair problem
M. Smid
Technical Report, 1991c
M. Smid
Technical Report, 1991c
Abstract
In the $k$-dimensional rectangular point location problem,
we have to store a set of $n$ non-overlapping axes-parallel
hyperrectangles in a data structure, such that the following
operations can be performed efficiently: point location
queries, insertions and deletions of hyperrectangles, and
splitting and merging of hyperrectangles. A linear size data
structure is given for this problem, allowing queries to be
solved in $O((\log n)^{k-1} \log\log n )$ time,
and allowing the four update
operations to be performed in $O((\log n)^{2} \log\log n)$
amortized time. If only queries,
insertions and split operations have to be supported,
the $\log\log n$ factors disappear.
The data structure is based on the skewer tree of
Edelsbrunner, Haring and Hilbert and uses dynamic fractional
cascading.
This result is used to obtain a linear size data structure that
maintains the closest pair in a set of $n$ points in
$k$-dimensional space, when points are inserted. This structure
has an $O((\log n)^{k-1})$ amortized insertion time.
This leads to an on-line algorithm for computing the
closest pair in a point set in $O( n (\log n)^{k-1})$ time.
Export
BibTeX
@techreport{Smid91a,
TITLE = {Dynamic rectangular point location, with an application to the closest pair problem},
AUTHOR = {Smid, Michiel},
LANGUAGE = {eng},
NUMBER = {MPI-I-91-101},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1991},
DATE = {1991},
ABSTRACT = {In the $k$-dimensional rectangular point location problem, we have to store a set of $n$ non-overlapping axes-parallel hyperrectangles in a data structure, such that the following operations can be performed efficiently: point location queries, insertions and deletions of hyperrectangles, and splitting and merging of hyperrectangles. A linear size data structure is given for this problem, allowing queries to be solved in $O((\log n)^{k-1} \log\log n )$ time, and allowing the four update operations to be performed in $O((\log n)^{2} \log\log n)$ amortized time. If only queries, insertions and split operations have to be supported, the $\log\log n$ factors disappear. The data structure is based on the skewer tree of Edelsbrunner, Haring and Hilbert and uses dynamic fractional cascading. This result is used to obtain a linear size data structure that maintains the closest pair in a set of $n$ points in $k$-dimensional space, when points are inserted. This structure has an $O((\log n)^{k-1})$ amortized insertion time. This leads to an on-line algorithm for computing the closest pair in a point set in $O( n (\log n)^{k-1})$ time.},
TYPE = {Research Report / Max-Planck-Institut für Informatik},
}
Endnote
%0 Report
%A Smid, Michiel
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Dynamic rectangular point location, with an application to the closest pair problem :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-7AFA-8
%Y Max-Planck-Institut für Informatik
%C Saarbrücken
%D 1991
%P 28 p.
%X In the $k$-dimensional rectangular point location problem,
we have to store a set of $n$ non-overlapping axes-parallel
hyperrectangles in a data structure, such that the following
operations can be performed efficiently: point location
queries, insertions and deletions of hyperrectangles, and
splitting and merging of hyperrectangles. A linear size data
structure is given for this problem, allowing queries to be
solved in $O((\log n)^{k-1} \log\log n )$ time,
and allowing the four update
operations to be performed in $O((\log n)^{2} \log\log n)$
amortized time. If only queries,
insertions and split operations have to be supported,
the $\log\log n$ factors disappear.
The data structure is based on the skewer tree of
Edelsbrunner, Haring and Hilbert and uses dynamic fractional
cascading.
This result is used to obtain a linear size data structure that
maintains the closest pair in a set of $n$ points in
$k$-dimensional space, when points are inserted. This structure
has an $O((\log n)^{k-1})$ amortized insertion time.
This leads to an on-line algorithm for computing the
closest pair in a point set in $O( n (\log n)^{k-1})$ time.
%B Research Report / Max-Planck-Institut für Informatik
1990
Hidden line elimination for isooriented rectangles
K. Mehlhorn, S. Näher and C. Uhrig
Technical Report, 1990
K. Mehlhorn, S. Näher and C. Uhrig
Technical Report, 1990
Export
BibTeX
@techreport{mehlhorn90,
TITLE = {Hidden line elimination for isooriented rectangles},
AUTHOR = {Mehlhorn, Kurt and N{\"a}her, Stefan and Uhrig, Christian},
LANGUAGE = {eng},
LOCALID = {Local-ID: C1256428004B93B8-7CE4C02F853E3A79C125721A004E16DA-mehlhorn90},
INSTITUTION = {Fachbereich Informatik, Universit{\"a}t des Saarlandes},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1990},
DATE = {1990},
TYPE = {Report},
}
Endnote
%0 Report
%A Mehlhorn, Kurt
%A Näher, Stefan
%A Uhrig, Christian
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Hidden line elimination for isooriented rectangles :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-AE64-2
%F EDOC: 344750
%F OTHER: Local-ID: C1256428004B93B8-7CE4C02F853E3A79C125721A004E16DA-mehlhorn90
%Y Fachbereich Informatik, Universität des Saarlandes
%C Saarbrücken
%D 1990
%B Report
1989
Routing Problems in Grid Graphs
M. Kaufmann and K. Mehlhorn
Technical Report, 1989a
M. Kaufmann and K. Mehlhorn
Technical Report, 1989a
Export
BibTeX
@techreport{mehlhorn89z,
TITLE = {Routing Problems in Grid Graphs},
AUTHOR = {Kaufmann, Michael and Mehlhorn, Kurt},
LANGUAGE = {eng},
ISSN = {0724-3138},
LOCALID = {Local-ID: C1256428004B93B8-7C8F10A2625E42DBC12571EF00400013-mehlhorn89z},
INSTITUTION = {Institut f{\"u}r {\"O}konometrie und Operations Research},
ADDRESS = {Bonn},
YEAR = {1989},
DATE = {1989},
TYPE = {Report},
VOLUME = {89},
EID = {561},
}
Endnote
%0 Report
%A Kaufmann, Michael
%A Mehlhorn, Kurt
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Routing Problems in Grid Graphs :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-AE76-9
%F EDOC: 344733
%F OTHER: Local-ID: C1256428004B93B8-7C8F10A2625E42DBC12571EF00400013-mehlhorn89z
%Y Institut für Ökonometrie und Operations Research
%C Bonn
%D 1989
%B Report
%N 89
%N 561
%@ false
Routing Problems in Grid Graphs
M. Kaufmann and K. Mehlhorn
Technical Report, 1989b
M. Kaufmann and K. Mehlhorn
Technical Report, 1989b
Export
BibTeX
@techreport{mehlhorn89,
TITLE = {Routing Problems in Grid Graphs},
AUTHOR = {Kaufmann, Michael and Mehlhorn, Kurt},
LANGUAGE = {eng},
LOCALID = {Local-ID: C1256428004B93B8-521B7133DEAC9C5FC125721B004C359E-mehlhorn89},
INSTITUTION = {SFB Sonderforschungsbereich 124, Universit{\"a}t des Saarlandes},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1989},
DATE = {1989},
TYPE = {SFB Report},
VOLUME = {89/05},
}
Endnote
%0 Report
%A Kaufmann, Michael
%A Mehlhorn, Kurt
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Routing Problems in Grid Graphs :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-AE7A-1
%F EDOC: 344759
%F OTHER: Local-ID: C1256428004B93B8-521B7133DEAC9C5FC125721B004C359E-mehlhorn89
%Y SFB Sonderforschungsbereich 124, Universität des Saarlandes
%C Saarbrücken
%D 1989
%B SFB Report
%N 89/05
On the Construction of Abstract Voronoi Diagrams, II
R. Klein, K. Mehlhorn and S. Meiser
Technical Report, 1989
R. Klein, K. Mehlhorn and S. Meiser
Technical Report, 1989
Export
BibTeX
@techreport{Klein03/89,
TITLE = {On the Construction of Abstract {Voronoi} Diagrams, II},
AUTHOR = {Klein, R. and Mehlhorn, Kurt and Meiser, Stefan},
LANGUAGE = {eng},
NUMBER = {A 03/89},
INSTITUTION = {Universit{\"a}t des Saarlandes / Fachbereich Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1989},
DATE = {1989},
}
Endnote
%0 Report
%A Klein, R.
%A Mehlhorn, Kurt
%A Meiser, Stefan
%+ External Organizations
Algorithms and Complexity, MPI for Informatics, Max Planck Society
External Organizations
%T On the Construction of Abstract Voronoi Diagrams, II :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0027-F732-A
%Y Universität des Saarlandes / Fachbereich Informatik
%C Saarbrücken
%D 1989
Data structures
K. Mehlhorn and A. K. Tsakalidis
Technical Report, 1989
K. Mehlhorn and A. K. Tsakalidis
Technical Report, 1989
Export
BibTeX
@techreport{mehlhorn90w,
TITLE = {Data structures},
AUTHOR = {Mehlhorn, Kurt and Tsakalidis, Athanasios K.},
LANGUAGE = {eng},
NUMBER = {A 89/02},
LOCALID = {Local-ID: C1256428004B93B8-3838FA99706A4EA0C125721A004F24B8-mehlhorn90w},
INSTITUTION = {Fachbereich Informatik, Universit{\"a}t des Saarlandes},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1989},
DATE = {1989},
TYPE = {Report},
}
Endnote
%0 Report
%A Mehlhorn, Kurt
%A Tsakalidis, Athanasios K.
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Max Planck Society
%T Data structures :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-AE78-5
%F EDOC: 344751
%F OTHER: Local-ID: C1256428004B93B8-3838FA99706A4EA0C125721A004F24B8-mehlhorn90w
%Y Fachbereich Informatik, Universität des Saarlandes
%C Saarbrücken
%D 1989
%B Report
On the construction of abstract Voronoi diagrams
K. Mehlhorn, S. Meiser and C. Ó’Dúnlaing
Technical Report, 1989
K. Mehlhorn, S. Meiser and C. Ó’Dúnlaing
Technical Report, 1989
Export
BibTeX
@techreport{mehlhorn89y,
TITLE = {On the construction of abstract Voronoi diagrams},
AUTHOR = {Mehlhorn, Kurt and Meiser, Stefan and {\'O}'D{\'u}nlaing, Colm},
LANGUAGE = {eng},
NUMBER = {A 89/01},
LOCALID = {Local-ID: C1256428004B93B8-EB07D41A65F9FFCBC12571BF00438D8C-mehlhorn89z},
INSTITUTION = {Fachbereich Informatik, Universit{\"a}t des Saarlandes},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1989},
DATE = {1989},
TYPE = {Report},
}
Endnote
%0 Report
%A Mehlhorn, Kurt
%A Meiser, Stefan
%A Ó'Dúnlaing, Colm
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Max Planck Society
%T On the construction of abstract Voronoi diagrams :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-AE74-D
%F EDOC: 344615
%F OTHER: Local-ID: C1256428004B93B8-EB07D41A65F9FFCBC12571BF00438D8C-mehlhorn89z
%Y Fachbereich Informatik, Universität des Saarlandes
%C Saarbrücken
%D 1989
%B Report
1988
Faster Algorithms for the Shortest Path Problem
R. K. Ahuja, K. Mehlhorn, J. B. Orlin and R. E. Tarjan
Technical Report, 1988a
R. K. Ahuja, K. Mehlhorn, J. B. Orlin and R. E. Tarjan
Technical Report, 1988a
Export
BibTeX
@techreport{mehlhorn88l,
TITLE = {Faster Algorithms for the Shortest Path Problem},
AUTHOR = {Ahuja, Ravindra K. and Mehlhorn, Kurt and Orlin, James B. and Tarjan, Robert E.},
LANGUAGE = {eng},
NUMBER = {A 88/04},
LOCALID = {Local-ID: C1256428004B93B8-E692C9A89ADA0CEAC125721A004C420A-mehlhorn88l},
INSTITUTION = {Fachbereich Informatik, Universit{\"a}t des Saarlandes},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1988},
DATE = {1988},
TYPE = {Report},
}
Endnote
%0 Report
%A Ahuja, Ravindra K.
%A Mehlhorn, Kurt
%A Orlin, James B.
%A Tarjan, Robert E.
%+ Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Max Planck Society
%T Faster Algorithms for the Shortest Path Problem :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-AE9E-4
%F EDOC: 344749
%F OTHER: Local-ID: C1256428004B93B8-E692C9A89ADA0CEAC125721A004C420A-mehlhorn88l
%Y Fachbereich Informatik, Universität des Saarlandes
%C Saarbrücken
%D 1988
%B Report
Faster Algorithms for the Shortest Path Problem
R. K. Ahuja, K. Mehlhorn, J. B. Orlin and R. E. Tarjan
Technical Report, 1988b
R. K. Ahuja, K. Mehlhorn, J. B. Orlin and R. E. Tarjan
Technical Report, 1988b
Export
BibTeX
@techreport{mehlhorn88z,
TITLE = {Faster Algorithms for the Shortest Path Problem},
AUTHOR = {Ahuja, Ravindra K. and Mehlhorn, Kurt and Orlin, James B. and Tarjan, Robert E.},
LANGUAGE = {eng},
NUMBER = {TR-193},
LOCALID = {Local-ID: C1256428004B93B8-36508A0FC4D47A03C12571F6002BDECF-mehlhorn88z},
INSTITUTION = {MIT Operations Research Center},
ADDRESS = {Cambridge},
YEAR = {1988},
DATE = {1988},
TYPE = {Report},
}
Endnote
%0 Report
%A Ahuja, Ravindra K.
%A Mehlhorn, Kurt
%A Orlin, James B.
%A Tarjan, Robert E.
%+ Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Max Planck Society
%T Faster Algorithms for the Shortest Path Problem :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-AE9C-8
%F EDOC: 344743
%F OTHER: Local-ID: C1256428004B93B8-36508A0FC4D47A03C12571F6002BDECF-mehlhorn88z
%Y MIT Operations Research Center
%C Cambridge
%D 1988
%P 34 p.
%B Report
A Linear-Time Algorithm for the Homotopic Routing Problem in Grid Graphs
M. Kaufmann and K. Mehlhorn
Technical Report, 1988
M. Kaufmann and K. Mehlhorn
Technical Report, 1988
Export
BibTeX
@techreport{mehlhorn88y,
TITLE = {A Linear-Time Algorithm for the Homotopic Routing Problem in Grid Graphs},
AUTHOR = {Kaufmann, Michael and Mehlhorn, Kurt},
LANGUAGE = {eng},
LOCALID = {Local-ID: C1256428004B93B8-A36B8FDC9FECF482C12571EF003FD0BE-mehlhorn88z},
INSTITUTION = {SFB Sonderforschungsbereich 124, Universit{\"a}t des Saarlandes},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1988},
DATE = {1988},
TYPE = {Report},
}
Endnote
%0 Report
%A Kaufmann, Michael
%A Mehlhorn, Kurt
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T A Linear-Time Algorithm for the Homotopic Routing Problem in Grid Graphs :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-AE9A-C
%F EDOC: 344732
%F OTHER: Local-ID: C1256428004B93B8-A36B8FDC9FECF482C12571EF003FD0BE-mehlhorn88z
%Y SFB Sonderforschungsbereich 124, Universität des Saarlandes
%C Saarbrücken
%D 1988
%B Report
Compaction on the Torus
K. Mehlhorn and W. Rülling
Technical Report, 1988
K. Mehlhorn and W. Rülling
Technical Report, 1988
Export
BibTeX
@techreport{mehlhorn88,
TITLE = {Compaction on the Torus},
AUTHOR = {Mehlhorn, Kurt and R{\"u}lling, W.},
LANGUAGE = {eng},
NUMBER = {08/1988},
LOCALID = {Local-ID: C1256428004B93B8-71DFD71E680478D4C12571EF003F6FE9-mehlhorn88},
INSTITUTION = {Fachbereich 10, Informatik, Universit{\"a}t des Saarlandes},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1988},
DATE = {1988},
TYPE = {VLSI Entwurfsmethoden und Parallelitäten},
}
Endnote
%0 Report
%A Mehlhorn, Kurt
%A Rülling, W.
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Max Planck Society
%T Compaction on the Torus :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-AE98-0
%F EDOC: 344731
%F OTHER: Local-ID: C1256428004B93B8-71DFD71E680478D4C12571EF003F6FE9-mehlhorn88
%Y Fachbereich 10, Informatik, Universität des Saarlandes
%C Saarbrücken
%D 1988
%B VLSI Entwurfsmethoden und Parallelitäten
1987
Congruence, Similarity and Symmetries of Geometric Objects
H. Alt, K. Mehlhorn, H. Wagener and E. Welzl
Technical Report, 1987
H. Alt, K. Mehlhorn, H. Wagener and E. Welzl
Technical Report, 1987
Export
BibTeX
@techreport{Alt02/87,
TITLE = {Congruence, Similarity and Symmetries of Geometric Objects},
AUTHOR = {Alt, Helmut and Mehlhorn, Kurt and Wagener, Hubert and Welzl, Emo},
LANGUAGE = {eng},
NUMBER = {A 02/87},
INSTITUTION = {Universit{\"a}t des Saarlandes / Fachbereich Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1987},
DATE = {1987},
}
Endnote
%0 Report
%A Alt, Helmut
%A Mehlhorn, Kurt
%A Wagener, Hubert
%A Welzl, Emo
%+ External Organizations
Algorithms and Complexity, MPI for Informatics, Max Planck Society
External Organizations
External Organizations
%T Congruence, Similarity and Symmetries of Geometric Objects :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0019-EFE3-E
%Y Universität des Saarlandes / Fachbereich Informatik
%C Saarbrücken
%D 1987
Parallel algorithms for computing maximal independent sets in trees and for updating minimum spanning trees
H. Jung and K. Mehlhorn
Technical Report, 1987
H. Jung and K. Mehlhorn
Technical Report, 1987
Export
BibTeX
@techreport{mehlhorn87z,
TITLE = {Parallel algorithms for computing maximal independent sets in trees and for updating minimum spanning trees},
AUTHOR = {Jung, Hermann and Mehlhorn, Kurt},
LANGUAGE = {eng},
NUMBER = {SFB 87/01},
LOCALID = {Local-ID: C1256428004B93B8-CCC100500D62E5B2C12571EF002A55C8-mehlhorn87z},
INSTITUTION = {SFB Sonderforschungsbereich 124, Universit{\"a}t des Saarlandes},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1987},
DATE = {1987},
TYPE = {Report},
}
Endnote
%0 Report
%A Jung, Hermann
%A Mehlhorn, Kurt
%+ Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Parallel algorithms for computing maximal independent sets in trees and for updating minimum spanning trees :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-AEB2-3
%F EDOC: 344730
%F OTHER: Local-ID: C1256428004B93B8-CCC100500D62E5B2C12571EF002A55C8-mehlhorn87z
%Y SFB Sonderforschungsbereich 124, Universität des Saarlandes
%C Saarbrücken
%D 1987
%B Report
A Faster Compaction Algorithm with Automatic Jog Insertion
K. Mehlhorn and S. Näher
Technical Report, 1987
K. Mehlhorn and S. Näher
Technical Report, 1987
Export
BibTeX
@techreport{mehlhorn87i,
TITLE = {A Faster Compaction Algorithm with Automatic Jog Insertion},
AUTHOR = {Mehlhorn, Kurt and N{\"a}her, Stefan},
LANGUAGE = {eng},
NUMBER = {15/1987},
LOCALID = {Local-ID: C1256428004B93B8-E853C387F5715DBDC12571C2004EEC2B-mehlhorn87i},
INSTITUTION = {Fachbereich 10, Informatik, Universit{\"a}t des Saarlandes},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1987},
DATE = {1987},
TYPE = {VLSI Entwurfsmethoden und Parallelitäten},
}
Endnote
%0 Report
%A Mehlhorn, Kurt
%A Näher, Stefan
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T A Faster Compaction Algorithm with Automatic Jog Insertion :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-AEB0-7
%F EDOC: 344624
%F OTHER: Local-ID: C1256428004B93B8-E853C387F5715DBDC12571C2004EEC2B-mehlhorn87i
%Y Fachbereich 10, Informatik, Universität des Saarlandes
%C Saarbrücken
%D 1987
%B VLSI Entwurfsmethoden und Parallelitäten
1986
On Local Routing of Two-Terminal Nets
M. Kaufmann and K. Mehlhorn
Technical Report, 1986
M. Kaufmann and K. Mehlhorn
Technical Report, 1986
Export
BibTeX
@techreport{mehlhorn86y,
TITLE = {On Local Routing of Two-Terminal Nets},
AUTHOR = {Kaufmann, Michael and Mehlhorn, Kurt},
LANGUAGE = {eng},
NUMBER = {A 84/12},
LOCALID = {Local-ID: C1256428004B93B8-6AD4B0F84B47E551C12571EF00287406-mehlhorn86y},
INSTITUTION = {Fachbereich Informatik, Universit{\"a}t des Saarlandes},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1986},
DATE = {1986},
TYPE = {Report},
}
Endnote
%0 Report
%A Kaufmann, Michael
%A Mehlhorn, Kurt
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T On Local Routing of Two-Terminal Nets :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-AED9-D
%F EDOC: 344729
%F OTHER: Local-ID: C1256428004B93B8-6AD4B0F84B47E551C12571EF00287406-mehlhorn86y
%Y Fachbereich Informatik, Universität des Saarlandes
%C Saarbrücken
%D 1986
%B Report
Dynamic fractional cascading
K. Mehlhorn and S. Näher
Technical Report, 1986
K. Mehlhorn and S. Näher
Technical Report, 1986
Export
BibTeX
@techreport{mehlhorn86z,
TITLE = {Dynamic fractional cascading},
AUTHOR = {Mehlhorn, Kurt and N{\"a}her, Stefan},
LANGUAGE = {eng},
NUMBER = {A 86/06},
LOCALID = {Local-ID: C1256428004B93B8-E636D6E59FA5E2EBC125721A004F876B-mehlhorn86z},
INSTITUTION = {Fachbereich Informatik, Universit{\"a}t des Saarlandes},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1986},
DATE = {1986},
TYPE = {Report},
}
Endnote
%0 Report
%A Mehlhorn, Kurt
%A Näher, Stefan
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Dynamic fractional cascading :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-AEDB-9
%F EDOC: 344752
%F OTHER: Local-ID: C1256428004B93B8-E636D6E59FA5E2EBC125721A004F876B-mehlhorn86z
%Y Fachbereich Informatik, Universität des Saarlandes
%C Saarbrücken
%D 1986
%B Report
1985
Deterministic simulation of idealized parallel computers on more realistic ones
H. Alt, T. Hagerup, K. Mehlhorn and F. P. Preparata
Technical Report, 1985
H. Alt, T. Hagerup, K. Mehlhorn and F. P. Preparata
Technical Report, 1985
Export
BibTeX
@techreport{mehlhorn85kz,
TITLE = {Deterministic simulation of idealized parallel computers on more realistic ones},
AUTHOR = {Alt, Helmut and Hagerup, Torben and Mehlhorn, Kurt and Preparata, Franco P.},
LANGUAGE = {eng},
NUMBER = {SFB 85/36},
LOCALID = {Local-ID: C1256428004B93B8-964293B2B27701B3C12571BF004A4E81-mehlhorn85kz},
INSTITUTION = {Sonderforschungsbereich 124, Universit{\"a}t des Saarlandes},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1985},
DATE = {1985},
TYPE = {Report},
}
Endnote
%0 Report
%A Alt, Helmut
%A Hagerup, Torben
%A Mehlhorn, Kurt
%A Preparata, Franco P.
%+ Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Max Planck Society
%T Deterministic simulation of idealized parallel computers on more realistic ones :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-AEF5-D
%F EDOC: 344616
%F OTHER: Local-ID: C1256428004B93B8-964293B2B27701B3C12571BF004A4E81-mehlhorn85kz
%Y Sonderforschungsbereich 124, Universität des Saarlandes
%C Saarbrücken
%D 1985
%B Report
Dynamization of geometric data structures
O. Fries, K. Mehlhorn and S. Näher
Technical Report, 1985
O. Fries, K. Mehlhorn and S. Näher
Technical Report, 1985
Export
BibTeX
@techreport{mehlhorn85z,
TITLE = {Dynamization of geometric data structures},
AUTHOR = {Fries, O. and Mehlhorn, Kurt and N{\"a}her, Stefan},
LANGUAGE = {eng},
NUMBER = {SFB 85/02},
LOCALID = {Local-ID: C1256428004B93B8-D87766346FC9E3D0C12571F6004EB972-mehlhorn85z},
INSTITUTION = {SFB Sonderforschungsbereich 124, Universit{\"a}t des Saarlandes},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1985},
DATE = {1985},
TYPE = {Reports},
}
Endnote
%0 Report
%A Fries, O.
%A Mehlhorn, Kurt
%A Näher, Stefan
%+ Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Dynamization of geometric data structures :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-AEFB-1
%F EDOC: 344744
%F OTHER: Local-ID: C1256428004B93B8-D87766346FC9E3D0C12571F6004EB972-mehlhorn85z
%Y SFB Sonderforschungsbereich 124, Universität des Saarlandes
%C Saarbrücken
%D 1985
%B Reports
1984
Sorting Jordan Sequences in Linear Time
K. Hoffmann, K. Mehlhorn, P. Rosenstiehl and R. E. Tarjan
Technical Report, 1984
K. Hoffmann, K. Mehlhorn, P. Rosenstiehl and R. E. Tarjan
Technical Report, 1984
Export
BibTeX
@techreport{mehlhorn93k,
TITLE = {Sorting {Jordan} Sequences in Linear Time},
AUTHOR = {Hoffmann, Kurt and Mehlhorn, Kurt and Rosenstiehl, Pierre and Tarjan, Robert E.},
LANGUAGE = {eng},
NUMBER = {A 84/09},
LOCALID = {Local-ID: C1256428004B93B8-7BABD1D5D59C6B5BC12571BF004F763B-mehlhorn93k},
INSTITUTION = {Fachbereich Informatik, Universit{\"a}t des Saarlandes},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1984},
DATE = {1984},
}
Endnote
%0 Report
%A Hoffmann, Kurt
%A Mehlhorn, Kurt
%A Rosenstiehl, Pierre
%A Tarjan, Robert E.
%+ External Organizations
Algorithms and Complexity, MPI for Informatics, Max Planck Society
External Organizations
External Organizations
%T Sorting Jordan Sequences in Linear Time :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-AF08-E
%F EDOC: 344618
%F OTHER: Local-ID: C1256428004B93B8-7BABD1D5D59C6B5BC12571BF004F763B-mehlhorn93k
%Y Fachbereich Informatik, Universität des Saarlandes
%C Saarbrücken
%D 1984
Local Routing of Two-terminal Nets is Easy
M. Kaufmann and K. Mehlhorn
Technical Report, 1984
M. Kaufmann and K. Mehlhorn
Technical Report, 1984
Export
BibTeX
@techreport{Kaufmann84/12,
TITLE = {Local Routing of Two-terminal Nets is Easy},
AUTHOR = {Kaufmann, Michael and Mehlhorn, Kurt},
LANGUAGE = {eng},
NUMBER = {A 84/12},
INSTITUTION = {Universit{\"a}t des Saarlandes / Fachbereich Informatik},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1984},
DATE = {1984},
}
Endnote
%0 Report
%A Kaufmann, Michael
%A Mehlhorn, Kurt
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Local Routing of Two-terminal Nets is Easy :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0019-EFE9-2
%Y Universität des Saarlandes / Fachbereich Informatik
%C Saarbrücken
%D 1984
1983
VLSI Complexity, Efficient VLSI Algorithms and the HILL Design System
K. Mehlhorn and T. Lengauer
Technical Report, 1983
K. Mehlhorn and T. Lengauer
Technical Report, 1983
Export
BibTeX
@techreport{mehlhorn83hill,
TITLE = {{VLSI} Complexity, Efficient {VLSI} Algorithms and the {HILL} Design System},
AUTHOR = {Mehlhorn, Kurt and Lengauer, Thomas},
LANGUAGE = {eng},
NUMBER = {A 83/03},
INSTITUTION = {Fachbereich 10 -- Angewandte Mathematik und Informatik, Universit{\"a}t des Saarlandes},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1983},
DATE = {1983},
TYPE = {Report},
}
Endnote
%0 Report
%A Mehlhorn, Kurt
%A Lengauer, Thomas
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
External Organizations
%T VLSI Complexity, Efficient VLSI Algorithms and the HILL Design System :
%G eng
%U http://hdl.handle.net/21.11116/0000-000E-48BA-0
%Y Fachbereich 10 - Angewandte Mathematik und Informatik, Universität des Saarlandes
%C Saarbrücken
%D 1983
%B Report
AT2-optimal VLSI for Integer Division and Integer Square Rooting
K. Mehlhorn
Technical Report, 1983
K. Mehlhorn
Technical Report, 1983
Export
BibTeX
@techreport{mehlhorn83j,
TITLE = {{$AT^2$}-optimal {VLSI} for Integer Division and Integer Square Rooting},
AUTHOR = {Mehlhorn, Kurt},
LANGUAGE = {eng},
NUMBER = {83/02},
LOCALID = {Local-ID: C1256428004B93B8-08E23B3B72D49256C12571C2005FDC16-mehlhorn83j},
INSTITUTION = {Universit{\"a}t des Saarlandes},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1983},
DATE = {1983},
}
Endnote
%0 Report
%A Mehlhorn, Kurt
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T AT²-optimal VLSI for Integer Division and Integer Square Rooting :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-AF38-2
%F EDOC: 344639
%F OTHER: Local-ID: C1256428004B93B8-08E23B3B72D49256C12571C2005FDC16-mehlhorn83j
%Y Universität des Saarlandes
%C Saarbrücken
%D 1983
1980
Lower bounds on the efficiency of transforming static data structures into dynamic structures
K. Mehlhorn
Technical Report, 1980
K. Mehlhorn
Technical Report, 1980
Export
BibTeX
@techreport{mehlhorn80z,
TITLE = {Lower bounds on the efficiency of transforming static data structures into dynamic structures},
AUTHOR = {Mehlhorn, Kurt},
LANGUAGE = {eng},
NUMBER = {A 80/05},
LOCALID = {Local-ID: C1256428004B93B8-B794D441A2E44241C125721A00550AED-mehlhorn80z},
INSTITUTION = {Fachbereich Informatik, Universit{\"a}t des Saarlandes},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1980},
DATE = {1980},
TYPE = {Report},
}
Endnote
%0 Report
%A Mehlhorn, Kurt
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Lower bounds on the efficiency of transforming static data structures into dynamic structures :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-AF74-9
%F EDOC: 344754
%F OTHER: Local-ID: C1256428004B93B8-B794D441A2E44241C125721A00550AED-mehlhorn80z
%Y Fachbereich Informatik, Universität des Saarlandes
%C Saarbrücken
%D 1980
%B Report
1979
On the Isomorphism of two Algorithms: Hu/Tucker and Garsia/Wachs
K. Mehlhorn and M. Tsagarakis
Technical Report, 1979
K. Mehlhorn and M. Tsagarakis
Technical Report, 1979
Export
BibTeX
@techreport{mehlhorn79x,
TITLE = {On the Isomorphism of two Algorithms: Hu/Tucker and Garsia/Wachs},
AUTHOR = {Mehlhorn, Kurt and Tsagarakis, Marcos},
LANGUAGE = {eng},
NUMBER = {A 79/01},
INSTITUTION = {Fachbereich 10 -- Angewandte Mathematik und Informatik, Universit{\"a}t des Saarlandes},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1979},
DATE = {1979},
TYPE = {Report},
}
Endnote
%0 Report
%A Mehlhorn, Kurt
%A Tsagarakis, Marcos
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
External Organizations
%T On the Isomorphism of two Algorithms: Hu/Tucker and Garsia/Wachs :
%G eng
%U http://hdl.handle.net/21.11116/0000-000E-4701-1
%Y Fachbereich 10 - Angewandte Mathematik und Informatik, Universität des Saarlandes
%C Saarbrücken
%D 1979
%B Report
1978
Codes: Unequal Probabilities, Unequal Letter Cost
D. Altenkamp and K. Mehlhorn
Technical Report, 1978
D. Altenkamp and K. Mehlhorn
Technical Report, 1978
Export
BibTeX
@techreport{mehlhorn78xx,
TITLE = {Codes: Unequal Probabilities, Unequal Letter Cost},
AUTHOR = {Altenkamp, Doris and Mehlhorn, Kurt},
LANGUAGE = {eng},
NUMBER = {A 78/18},
LOCALID = {Local-ID: C1256428004B93B8-CCF0ADCE913AB9C1C12571EE004980CD-mehlhorn78xx},
INSTITUTION = {Fachbereich Informatik, Universit{\"a}t des Saarlandes},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1978},
DATE = {1978},
TYPE = {Report},
}
Endnote
%0 Report
%A Altenkamp, Doris
%A Mehlhorn, Kurt
%+ Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Codes: Unequal Probabilities, Unequal Letter Cost :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-AF97-E
%F EDOC: 344712
%F OTHER: Local-ID: C1256428004B93B8-CCF0ADCE913AB9C1C12571EE004980CD-mehlhorn78xx
%Y Fachbereich Informatik, Universität des Saarlandes
%C Saarbrücken
%D 1978
%B Report
Complexity Arguments in Algebraic Language Theory
H. Alt and K. Mehlhorn
Technical Report, 1978
H. Alt and K. Mehlhorn
Technical Report, 1978
Export
BibTeX
@techreport{mehlhorn78w,
TITLE = {Complexity Arguments in Algebraic Language Theory},
AUTHOR = {Alt, Helmut and Mehlhorn, Kurt},
LANGUAGE = {eng},
NUMBER = {A 78/10},
LOCALID = {Local-ID: C1256428004B93B8-D9304BAAF05DC341C12571EE006659AC-mehlhorn78w},
INSTITUTION = {Fachbereich Informatik, Universit{\"a}t des Saarlandes},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1978},
DATE = {1978},
TYPE = {Report},
}
Endnote
%0 Report
%A Alt, Helmut
%A Mehlhorn, Kurt
%+ Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Complexity Arguments in Algebraic Language Theory :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-AFA3-1
%F EDOC: 344724
%F OTHER: Local-ID: C1256428004B93B8-D9304BAAF05DC341C12571EE006659AC-mehlhorn78w
%Y Fachbereich Informatik, Universität des Saarlandes
%C Saarbrücken
%D 1978
%B Report
On the Average Number of Rebalancing Operations in Weight-Balanced Trees
N. Blum and K. Mehlhorn
Technical Report, 1978
N. Blum and K. Mehlhorn
Technical Report, 1978
Export
BibTeX
@techreport{mehlhorn78t,
TITLE = {On the Average Number of Rebalancing Operations in Weight-Balanced Trees},
AUTHOR = {Blum, Norbert and Mehlhorn, Kurt},
LANGUAGE = {eng},
NUMBER = {A 78/06},
LOCALID = {Local-ID: C1256428004B93B8-2F9C105B21ECFA15C12571EE0065ACD5-mehlhorn78t},
INSTITUTION = {Fachbereich Informatik, Universit{\"a}t des Saarlandes},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1978},
DATE = {1978},
TYPE = {Report},
}
Endnote
%0 Report
%A Blum, Norbert
%A Mehlhorn, Kurt
%+ Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T On the Average Number of Rebalancing Operations in Weight-Balanced Trees :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-AFA1-5
%F EDOC: 344723
%F OTHER: Local-ID: C1256428004B93B8-2F9C105B21ECFA15C12571EE0065ACD5-mehlhorn78t
%Y Fachbereich Informatik, Universität des Saarlandes
%C Saarbrücken
%D 1978
%B Report
Effiziente Algorithmen: Ein Beispiel
K. Mehlhorn
Technical Report, 1978a
K. Mehlhorn
Technical Report, 1978a
Export
BibTeX
@techreport{mehlhorn78,
TITLE = {Effiziente Algorithmen: Ein Beispiel},
AUTHOR = {Mehlhorn, Kurt},
LANGUAGE = {deu},
NUMBER = {A 78/02},
LOCALID = {Local-ID: C1256428004B93B8-475BE2D5B3E9A720C12571EE00637FDB-mehlhorn78},
INSTITUTION = {Fachbereich Informatik, Universit{\"a}t des Saarlandes},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1978},
DATE = {1978},
}
Endnote
%0 Report
%A Mehlhorn, Kurt
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Effiziente Algorithmen: Ein Beispiel :
%G deu
%U http://hdl.handle.net/11858/00-001M-0000-0014-AF9D-2
%F EDOC: 344721
%F OTHER: Local-ID: C1256428004B93B8-475BE2D5B3E9A720C12571EE00637FDB-mehlhorn78
%Y Fachbereich Informatik, Universität des Saarlandes
%C Saarbrücken
%D 1978
%P 28 p.
Sorting Presorted Files
K. Mehlhorn
Technical Report, 1978b
K. Mehlhorn
Technical Report, 1978b
Export
BibTeX
@techreport{mehlhorn78u,
TITLE = {Sorting Presorted Files},
AUTHOR = {Mehlhorn, Kurt},
LANGUAGE = {eng},
NUMBER = {A 78/12},
LOCALID = {Local-ID: C1256428004B93B8-11148402580FF5B1C12571EE00669A5D-mehlhorn78u},
INSTITUTION = {Fachbereich Informatik, Universit{\"a}t des Saarlandes},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1978},
DATE = {1978},
TYPE = {Report},
}
Endnote
%0 Report
%A Mehlhorn, Kurt
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Sorting Presorted Files :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-AFA5-E
%F EDOC: 344725
%F OTHER: Local-ID: C1256428004B93B8-11148402580FF5B1C12571EE00669A5D-mehlhorn78u
%Y Fachbereich Informatik, Universität des Saarlandes
%C Saarbrücken
%D 1978
%B Report
An efficient algorithm for constructing nearly optimal prefix codes
K. Mehlhorn
Technical Report, 1978c
K. Mehlhorn
Technical Report, 1978c
Export
BibTeX
@techreport{mehlhorn78y,
TITLE = {An efficient algorithm for constructing nearly optimal prefix codes},
AUTHOR = {Mehlhorn, Kurt},
LANGUAGE = {eng},
NUMBER = {A 78/13},
LOCALID = {Local-ID: C1256428004B93B8-A0F7D7C47933A7D2C12571EE0051A8D8-mehlhorn78y},
INSTITUTION = {Fachbereich Informatik, Universit{\"a}t des Saarlandes},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1978},
DATE = {1978},
TYPE = {Report},
}
Endnote
%0 Report
%A Mehlhorn, Kurt
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T An efficient algorithm for constructing nearly optimal prefix codes :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-AF9B-6
%F EDOC: 344714
%F OTHER: Local-ID: C1256428004B93B8-A0F7D7C47933A7D2C12571EE0051A8D8-mehlhorn78y
%Y Fachbereich Informatik, Universität des Saarlandes
%C Saarbrücken
%D 1978
%B Report
Arbitrary Weight Changes in Dynamic Trees
K. Mehlhorn
Technical Report, 1978d
K. Mehlhorn
Technical Report, 1978d
Export
BibTeX
@techreport{mehlhorn78z,
TITLE = {Arbitrary Weight Changes in Dynamic Trees},
AUTHOR = {Mehlhorn, Kurt},
LANGUAGE = {eng},
NUMBER = {A 78/04},
LOCALID = {Local-ID: C1256428004B93B8-B10ACF025929A77FC12571EE0063BFE1-mehlhorn78z},
INSTITUTION = {Fachbereich Informatik, Universit{\"a}t des Saarlandes},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1978},
DATE = {1978},
TYPE = {Report},
}
Endnote
%0 Report
%A Mehlhorn, Kurt
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Arbitrary Weight Changes in Dynamic Trees :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-AF9F-D
%F EDOC: 344722
%F OTHER: Local-ID: C1256428004B93B8-B10ACF025929A77FC12571EE0063BFE1-mehlhorn78z
%Y Fachbereich Informatik, Universität des Saarlandes
%C Saarbrücken
%D 1978
%B Report
1976
Binary Search Trees: Average and Worst Case Behavior
R. Güttler, K. Mehlhorn and W. Schneider
Technical Report, 1976
R. Güttler, K. Mehlhorn and W. Schneider
Technical Report, 1976
Export
BibTeX
@techreport{mehlhorn76y,
TITLE = {Binary Search Trees: Average and Worst Case Behavior},
AUTHOR = {G{\"u}ttler, Reiner and Mehlhorn, Kurt and Schneider, Wolfgang},
LANGUAGE = {eng},
NUMBER = {A 76/01},
LOCALID = {Local-ID: C1256428004B93B8-DC5C301480A4D2D3C12571EE0062D2BB-mehlhorn76y},
INSTITUTION = {Fachbereich Informatik, Universit{\"a}t des Saarlandes},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1976},
DATE = {1976},
TYPE = {Report},
}
Endnote
%0 Report
%A Güttler, Reiner
%A Mehlhorn, Kurt
%A Schneider, Wolfgang
%+ Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Max Planck Society
%T Binary Search Trees: Average and Worst Case Behavior :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-AFC6-4
%F EDOC: 344718
%F OTHER: Local-ID: C1256428004B93B8-DC5C301480A4D2D3C12571EE0062D2BB-mehlhorn76y
%Y Fachbereich Informatik, Universität des Saarlandes
%C Saarbrücken
%D 1976
%B Report
Top down parsing of macro grammars (preliminary report)
M. Heydthausen and K. Mehlhorn
Technical Report, 1976
M. Heydthausen and K. Mehlhorn
Technical Report, 1976
Export
BibTeX
@techreport{mehlhorn76z,
TITLE = {Top down parsing of macro grammars (preliminary report)},
AUTHOR = {Heydthausen, Manfred and Mehlhorn, Kurt},
LANGUAGE = {eng},
NUMBER = {A 76/03},
LOCALID = {Local-ID: C1256428004B93B8-FF55806DAD143A65C12571EE006346A7-mehlhorn76z},
INSTITUTION = {Fachbereich Informatik, Universit{\"a}t des Saarlandes},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1976},
DATE = {1976},
TYPE = {Report},
}
Endnote
%0 Report
%A Heydthausen, Manfred
%A Mehlhorn, Kurt
%+ Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Top down parsing of macro grammars (preliminary report) :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-AFCA-B
%F EDOC: 344720
%F OTHER: Local-ID: C1256428004B93B8-FF55806DAD143A65C12571EE006346A7-mehlhorn76z
%Y Fachbereich Informatik, Universität des Saarlandes
%C Saarbrücken
%D 1976
%B Report
Dynamic Binary Search Trees : Extended Abstracts
K. Mehlhorn
Technical Report, 1976a
K. Mehlhorn
Technical Report, 1976a
Export
BibTeX
@techreport{mehlhorn76w,
TITLE = {Dynamic Binary Search Trees : Extended Abstracts},
AUTHOR = {Mehlhorn, Kurt},
LANGUAGE = {eng},
NUMBER = {A 76/11},
LOCALID = {Local-ID: C1256428004B93B8-4DB45610584D49AEC125721B0034C6DA-mehlhorn76w},
INSTITUTION = {Fachbereich Informatik, Universit{\"a}t des Saarlandes},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1976},
DATE = {1976},
TYPE = {Report},
}
Endnote
%0 Report
%A Mehlhorn, Kurt
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Dynamic Binary Search Trees : Extended Abstracts :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-AFCE-3
%F EDOC: 344757
%F OTHER: Local-ID: C1256428004B93B8-4DB45610584D49AEC125721B0034C6DA-mehlhorn76w
%Y Fachbereich Informatik, Universität des Saarlandes
%C Saarbrücken
%D 1976
%B Report
Dynamic Binary Search
K. Mehlhorn
Technical Report, 1976b
K. Mehlhorn
Technical Report, 1976b
Export
BibTeX
@techreport{mehlhorn76x,
TITLE = {Dynamic Binary Search},
AUTHOR = {Mehlhorn, Kurt},
LANGUAGE = {eng},
NUMBER = {A 76/02},
LOCALID = {Local-ID: C1256428004B93B8-ACF3094A758009F6C12571EE00627387-mehlhorn76x},
INSTITUTION = {Fachbereich Informatik, Universit{\"a}t des Saarlandes},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1976},
DATE = {1976},
TYPE = {Report},
}
Endnote
%0 Report
%A Mehlhorn, Kurt
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Dynamic Binary Search :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-AFC4-8
%F EDOC: 344717
%F OTHER: Local-ID: C1256428004B93B8-ACF3094A758009F6C12571EE00627387-mehlhorn76x
%Y Fachbereich Informatik, Universität des Saarlandes
%C Saarbrücken
%D 1976
%B Report
An improved lower bound on the formula complexity of context-free recognition
K. Mehlhorn
Technical Report, 1976c
K. Mehlhorn
Technical Report, 1976c
Export
BibTeX
@techreport{mehlhorn76v,
TITLE = {An improved lower bound on the formula complexity of context-free recognition},
AUTHOR = {Mehlhorn, Kurt},
LANGUAGE = {eng},
LOCALID = {Local-ID: C1256428004B93B8-CD622EEA82E2D4C9C125721B00327D32-mehlhorn76v},
INSTITUTION = {Fachbereich Informatik, Universit{\"a}t des Saarlandes},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1976},
DATE = {1976},
TYPE = {Report},
}
Endnote
%0 Report
%A Mehlhorn, Kurt
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T An improved lower bound on the formula complexity of context-free recognition :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-AFCC-7
%F EDOC: 344756
%F OTHER: Local-ID: C1256428004B93B8-CD622EEA82E2D4C9C125721B00327D32-mehlhorn76v
%Y Fachbereich Informatik, Universität des Saarlandes
%C Saarbrücken
%D 1976
%B Report
1975
Untere Schranken für den Platzbedarf bei der kontext-freien Analyse
H. Alt and K. Mehlhorn
Technical Report, 1975
H. Alt and K. Mehlhorn
Technical Report, 1975
Export
BibTeX
@techreport{mehlhorn75,
TITLE = {{Untere Schranken f{\"u}r den Platzbedarf bei der kontext-freien Analyse}},
AUTHOR = {Alt, Helmut and Mehlhorn, Kurt},
LANGUAGE = {deu},
NUMBER = {A 75/13},
LOCALID = {Local-ID: C1256428004B93B8-769ABA454C51A8F2C12571EE00623B1C-mehlhorn75},
INSTITUTION = {Fachbereich Informatik, Universit{\"a}t des Saarlandes},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1975},
DATE = {1975},
TYPE = {Report},
}
Endnote
%0 Report
%A Alt, Helmut
%A Mehlhorn, Kurt
%+ Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Untere Schranken für den Platzbedarf bei der kontext-freien Analyse :
%G deu
%U http://hdl.handle.net/11858/00-001M-0000-0014-AFDC-3
%F EDOC: 344716
%F OTHER: Local-ID: C1256428004B93B8-769ABA454C51A8F2C12571EE00623B1C-mehlhorn75
%Y Fachbereich Informatik, Universität des Saarlandes
%C Saarbrücken
%D 1975
%B Report
Bracket-Languages are Recognizable in Logarithmic Space
K. Mehlhorn
Technical Report, 1975
K. Mehlhorn
Technical Report, 1975
Export
BibTeX
@techreport{mehlhorn75z,
TITLE = {Bracket-Languages are Recognizable in Logarithmic Space},
AUTHOR = {Mehlhorn, Kurt},
LANGUAGE = {eng},
NUMBER = {A 75/12},
LOCALID = {Local-ID: C1256428004B93B8-004CFA1819B81136C12571EE0062008F-mehlhorn75z},
INSTITUTION = {Fachbereich Informatik, Universit{\"a}t des Saarlandes},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {1975},
DATE = {1975},
TYPE = {Report},
}
Endnote
%0 Report
%A Mehlhorn, Kurt
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Bracket-Languages are Recognizable in Logarithmic Space :
%G eng
%U http://hdl.handle.net/11858/00-001M-0000-0014-AFDA-7
%F EDOC: 344715
%F OTHER: Local-ID: C1256428004B93B8-004CFA1819B81136C12571EE0062008F-mehlhorn75z
%Y Fachbereich Informatik, Universität des Saarlandes
%C Saarbrücken
%D 1975
%B Report