Result: Efficient GPU-algorithms for the combination of evidence in Dempster–Shafer theory.
Combination rules in the Dempster–Shafer theory aim to summarize multiple corpora of evidence coming from different sources. However, these summarizations are computationally demanding, as they usually require working with large amounts of information, which prevents their use in real-life problems. In this work, different algorithms are proposed and compared in order to determine the fastest techniques for combining information within the Dempster–Shafer theory framework. The algorithms cover Dempster's original combination rule as well as several modifications of this rule. In addition, functions for combining sources using averaging combination rules are provided. The algorithms proposed in this work are designed to be executed on a Graphical Processing Unit (GPU) and have been implemented using Python and CUDA. The use of a GPU, which can execute multiple tasks in parallel, makes the algorithms faster than classic algorithms developed to be executed on a CPU. Results show the feasibility of the proposed implementations, which, using Python and CUDA, are able to combine corpora of evidence over frames of discernment of up to 28 elements in seconds.
• Open-source algorithms for the aggregation of information using combination rules in the framework of Dempster–Shafer theory are provided.
• Combination-rule algorithms designed for the parallel computational capability of Graphical Processing Units (GPUs) are efficient and thus suitable for application to real problems.
• Algorithms applying combination rules to frames of discernment of up to 28 elements obtain results in seconds.
[ABSTRACT FROM AUTHOR]
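As a rough illustration only (not the authors' GPU code), the following is a minimal sketch of Dempster's rule of combination for two mass functions over a small frame of discernment, written in plain Python with subsets encoded as bitmasks. The combination is quadratic in the number of focal elements and the power set grows as 2^n, which is what motivates the parallel Python/CUDA implementations described in the abstract.

    # Minimal, illustrative sketch of Dempster's rule of combination
    # (assumed example; not the paper's implementation).
    # Subsets of the frame of discernment are encoded as integer bitmasks,
    # and a mass function is a dict {bitmask: mass}.

    from itertools import product


    def dempster_combine(m1, m2):
        """Combine two mass functions with Dempster's rule of combination."""
        combined = {}
        conflict = 0.0
        for (b, mb), (c, mc) in product(m1.items(), m2.items()):
            inter = b & c            # set intersection via bitwise AND
            if inter == 0:
                conflict += mb * mc  # mass falling on the empty set
            else:
                combined[inter] = combined.get(inter, 0.0) + mb * mc
        if conflict >= 1.0:
            raise ValueError("Total conflict: Dempster's rule is undefined")
        norm = 1.0 - conflict        # normalization by 1 - K
        return {subset: mass / norm for subset, mass in combined.items()}


    # Example on a 3-element frame {a, b, c}: bit 0 = a, bit 1 = b, bit 2 = c.
    m1 = {0b001: 0.6, 0b111: 0.4}    # 0.6 on {a}, 0.4 on the whole frame
    m2 = {0b011: 0.5, 0b111: 0.5}    # 0.5 on {a, b}, 0.5 on the whole frame
    print(dempster_combine(m1, m2))  # {a}: 0.6, {a, b}: 0.2, frame: 0.2

The inner double loop over pairs of focal elements is independent for each pair, which is the kind of structure that maps naturally onto the GPU parallelism the paper exploits.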