Abstract
The growth of publicly available bioactivity data has led to the extensive development and use of in silico bioactivity prediction algorithms. A particularly popular approach for such analyses is the multiclass Naïve Bayes, whose output is commonly processed by applying empirically derived likelihood score thresholds. In this work, we describe a systematic method for deriving score cut-offs on a per-protein-target basis and compare their performance with global thresholds on a large scale, using both 5-fold cross-validation (ChEMBL 14; 189k ligand-protein pairs over 477 protein targets) and external validation (WOMBAT; 63k pairs, 421 targets). The individual protein target cut-offs derived were compared against global cut-offs ranging from -10 to 40 in score increments of 2.5. The results indicate that individual thresholds performed equally well as or better than global thresholds in all comparisons, for between 57.96% and 95% of protein targets. We show that local thresholds behave differently for particular families of targets (CYPs, GPCRs, kinases and TFs). Furthermore, we demonstrate the decline in performance as predictions move away from the chemical space of the training dataset, using Tanimoto similarity as the metric (from 0 to 1 in steps of 0.2). Finally, we release the individual protein score cut-offs derived for the in silico bioactivity application used in this work, as well as the reproducible and transferable KNIME workflows used to carry out the analysis.
Keywords: Cheminformatics, in silico bioactivity prediction, likelihood score thresholds.
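For readers unfamiliar with the Tanimoto similarity metric referenced above, a minimal sketch follows. It assumes molecular fingerprints are represented as sets of on-bit indices; this is an illustration of the metric itself, not the fingerprinting pipeline used in the paper.

```python
def tanimoto(fp_a: set, fp_b: set) -> float:
    """Tanimoto (Jaccard) similarity between two binary fingerprints,
    each given as the set of indices of its set bits."""
    intersection = len(fp_a & fp_b)
    union = len(fp_a) + len(fp_b) - intersection
    # Two empty fingerprints are conventionally assigned similarity 0.
    return intersection / union if union else 0.0


# Hypothetical example: two fingerprints sharing two of four total on-bits.
print(tanimoto({1, 2, 3}, {2, 3, 4}))  # 0.5
```

Binning compound pairs by this value (e.g. in steps of 0.2, as in the abstract) is a common way to stratify an external test set by its distance from the training set's chemical space.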