Cardiff University | Prifysgol Caerdydd ORCA
Online Research @ Cardiff 

Toward a standard for the evaluation of PET-Auto-Segmentation methods following the recommendations of AAPM task group No. 211: Requirements and implementation

Berthon, Beatrice, Spezi, Emiliano ORCID: https://orcid.org/0000-0002-1452-8813, Galavis, Paulina, Shepherd, Tony, Apte, Aditya, Hatt, Mathieu, Fayad, Hadi, De Bernardi, Elisabetta, Soffientini, Chiara D., Schmidtlein, C. Ross, El Naqa, Issam, Jeraj, Robert, Lu, Wei, Das, Shiva, Zaidi, Habib, Mawlawi, Osama R., Visvikis, Dimitris, Lee, John A. and Kirov, Assen S. 2017. Toward a standard for the evaluation of PET-Auto-Segmentation methods following the recommendations of AAPM task group No. 211: Requirements and implementation. Medical Physics 44(8), pp. 4098-4111. 10.1002/mp.12312

PDF - Published Version
Available under License Creative Commons Attribution.

Download (594kB)

Abstract

PURPOSE: The aim of this paper is to define the requirements and to describe the design and implementation of a standard benchmark tool for the evaluation and validation of PET auto-segmentation (PET-AS) algorithms. This work follows the recommendations of Task Group 211 (TG211), appointed by the American Association of Physicists in Medicine (AAPM).

METHODS: The recommendations published in the AAPM TG211 report were used to derive a set of required features and to guide the design and structure of a benchmarking software tool. These items included the selection of appropriate representative data, reference contours obtained from established approaches, and the description of available metrics. The benchmark was designed to be extendable through the inclusion of bespoke segmentation methods, while maintaining its main purpose of being a standard testing platform for newly developed PET-AS methods. An example implementation of the proposed framework, named PETASset, was built. In this work, a selection of PET-AS methods representing common approaches to PET image segmentation was evaluated within PETASset to test and demonstrate the capabilities of the software as a benchmark platform.

RESULTS: A selection of clinical, physical, and simulated phantom data, including 'best estimate' reference contours from macroscopic specimens, simulation templates, and CT scans, was built into the PETASset application database. Specific metrics, such as the Dice Similarity Coefficient (DSC), Positive Predictive Value (PPV), and Sensitivity (S), were included to allow the user to compare the results of any given PET-AS algorithm to the reference contours. In addition, a tool for generating structured reports on the performance of PET-AS algorithms against the reference contours was built. Across the PET-AS methods evaluated for demonstration, agreement with the reference contours ranged from 0.51 to 0.83 for DSC, from 0.44 to 0.86 for PPV, and from 0.61 to 1.00 for S. Examples of agreement limits were provided to show how the software could be used to evaluate a new algorithm against the existing state of the art.

CONCLUSIONS: PETASset provides a platform for standardizing the evaluation and comparison of different PET-AS methods on a wide range of PET datasets. The platform will be available to users who wish to evaluate their PET-AS methods and to contribute additional evaluation datasets.
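For readers unfamiliar with the three overlap metrics named above, the sketch below shows their standard voxel-wise definitions in terms of true-positive (TP), false-positive (FP), and false-negative (FN) voxel counts. This is an illustrative Python snippet, not code from PETASset; the function and argument names (overlap_metrics, auto, ref) are assumptions for this example.

    # Minimal sketch of the standard voxel-overlap metrics (not PETASset code).
    # Assumes two binary masks of equal shape: `auto` (the PET-AS output) and
    # `ref` (the reference contour).
    import numpy as np

    def overlap_metrics(auto: np.ndarray, ref: np.ndarray) -> dict:
        """Return DSC, PPV, and Sensitivity for two binary segmentation masks."""
        auto, ref = auto.astype(bool), ref.astype(bool)
        tp = np.count_nonzero(auto & ref)    # voxels in both contours
        fp = np.count_nonzero(auto & ~ref)   # voxels only in the PET-AS contour
        fn = np.count_nonzero(~auto & ref)   # voxels only in the reference
        return {
            "DSC": 2 * tp / (2 * tp + fp + fn),  # Dice Similarity Coefficient
            "PPV": tp / (tp + fp),               # Positive Predictive Value
            "S": tp / (tp + fn),                 # Sensitivity
        }

All three metrics range from 0 (no overlap) to 1 (perfect agreement); a practical implementation would also guard against empty masks, which would otherwise divide by zero.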

Item Type: Article
Date Type: Publication
Status: Published
Schools: Engineering
Subjects: T Technology > TJ Mechanical engineering and machinery
Uncontrolled Keywords: conformity index; outlining assessment; PET/CT; PET segmentation
Publisher: Wiley
ISSN: 0094-2405
Date of First Compliant Deposit: 21 April 2017
Date of Acceptance: 15 April 2017
Last Modified: 04 May 2023 16:58
URI: https://orca.cardiff.ac.uk/id/eprint/100032

Citation Data

Cited 30 times in Scopus.
