BACKGROUND: Agreement between planners and treating radiation oncologists (ROs) on plan quality criteria is essential for consistent planning. Differences between ROs and planning medical physicists (MPs) in perceived quality of head and neck cancer plans were assessed.
MATERIALS AND METHODS: Five ROs and four MPs scored 65 plans for 15 patients in total. For each patient, the clinical (CLIN) plan and two or four alternative plans, generated with automated multi-criteria optimization (MCO), were included. One MCO plan always aimed at maximally adhering to the clinical plan requirements, while the other MCO plans were generated with deliberately lower quality aims. Plans were scored on a 1-7 scale: 1-2, not acceptable; 3-5, acceptable only if further planning would not resolve the perceived weaknesses; 6-7, straightaway acceptable. One MP and one RO repeated the plan scoring to assess intra-observer variation.
RESULTS: For the 36 unique observer pairs, the median percentage of plans (out of 65) for which the two observers gave the same score was 27.7% [6.2, 40.0]. In the repeat scoring, agreements between first and second scoring were 52.3% and 40.0%, respectively. With a binary division between unacceptable (scores 1-2) and acceptable (scores 3-7) plans, the median inter-observer agreement percentage was 78.5% [63.1, 86.2], while intra-observer agreements were 96.9% and 86.2%. There were no differences in observed agreement between RO-RO, MP-MP, and RO-MP pairs. Agreements for the highest-quality, automatically generated MCO plans were higher than for the CLIN plans.
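The abstract does not specify how the analysis was implemented; as a minimal sketch under that caveat (observer names and scores below are invented for illustration), pairwise exact and binary agreement percentages of the kind reported above could be computed as:

```python
from itertools import combinations

def agreement(scores_a, scores_b, binary=False):
    """Percentage of plans on which two observers gave the same score.

    With binary=True, 1-7 scores are first collapsed to
    unacceptable (1-2) vs. acceptable (3-7), as in the study.
    """
    if binary:
        scores_a = [s >= 3 for s in scores_a]
        scores_b = [s >= 3 for s in scores_b]
    matches = sum(a == b for a, b in zip(scores_a, scores_b))
    return 100.0 * matches / len(scores_a)

def pairwise_agreements(scores, binary=False):
    """Agreement percentage for every unique observer pair.

    `scores` maps an observer label to a list of plan scores,
    one entry per plan, in the same plan order for all observers.
    """
    return {
        (o1, o2): agreement(scores[o1], scores[o2], binary)
        for o1, o2 in combinations(sorted(scores), 2)
    }

# Hypothetical toy data: 3 observers scoring the same 4 plans.
scores = {"RO1": [7, 3, 1, 5], "RO2": [7, 4, 1, 2], "MP1": [6, 3, 2, 5]}
exact = pairwise_agreements(scores)            # e.g. ("RO1", "RO2") -> 50.0
binary = pairwise_agreements(scores, binary=True)
```

With the study's nine observers, `pairwise_agreements` would yield the 36 unique pairs; taking the median of the resulting values gives a summary comparable to the percentages reported here.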
CONCLUSIONS: Inter-observer differences in plan quality scores were substantial and could result in inconsistencies in generated treatment plans. Agreement among ROs was no better than between ROs and MPs, despite large differences in training and clinical role. High-quality automatically generated plans showed the best score agreement.