ExplaiNN Tool Combines Convolutional Neural Networks, Linear Models

Jun 29, 2023

By combining convolutional neural networks (CNNs) with linear models, researchers from the University of British Columbia and their colleagues report in Genome Biology that they have developed an interpretable deep learning model for genomic tasks. On their own, convolutional neural networks are not easily interpretable or transparent, the researchers note, while linear models are. With their tool ExplaiNN (short for explainable neural networks), the UBC team combined the two. "Inspired by the recently introduced [neural additive models], the architecture of ExplaiNN is based on multiple, simple, independent CNN units that recognize sequence patterns. The outputs of these units are subsequently combined in a similar manner to classical regression analysis techniques," the researchers say. They add that they benchmarked ExplaiNN against other tools, finding that it performed comparably on sequence-based prediction of transcription factor binding, chromatin accessibility analysis, de novo motif discovery, and more, and did so in less time.
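The additive design described above can be sketched in a few lines of numpy. This is a toy illustration under assumed parameters (filter width, unit count, random weights), not the published ExplaiNN implementation: each independent unit scans a one-hot-encoded DNA sequence with a single convolutional filter, applies a ReLU and global max-pooling to produce one scalar "motif activation," and a final linear layer combines the unit outputs, so each unit's contribution to the prediction is directly readable, as in classical regression.

```python
import numpy as np

rng = np.random.default_rng(0)

def one_hot(seq):
    """One-hot encode a DNA string into a (length, 4) array (A, C, G, T)."""
    idx = {"A": 0, "C": 1, "G": 2, "T": 3}
    x = np.zeros((len(seq), 4))
    for i, base in enumerate(seq):
        x[i, idx[base]] = 1.0
    return x

def cnn_unit(x, filt):
    """One independent unit: 1D convolution -> ReLU -> global max pool."""
    k = filt.shape[0]
    scores = np.array([np.sum(x[i:i + k] * filt) for i in range(len(x) - k + 1)])
    return np.maximum(scores, 0.0).max()  # one scalar activation per unit

# Hypothetical toy sizes: 3 units, filter width 4 (real models use many more).
n_units, k = 3, 4
filters = rng.normal(size=(n_units, k, 4))
weights = rng.normal(size=n_units)  # one linear weight per unit
bias = 0.0

def explainn_forward(seq):
    """Additive prediction: each unit contributes weights[i] * units[i]."""
    units = np.array([cnn_unit(one_hot(seq), f) for f in filters])
    return float(weights @ units + bias)

y = explainn_forward("ACGTACGTAC")
```

Because the final step is a plain weighted sum, the model's prediction decomposes exactly into per-unit contributions, which is what makes the architecture interpretable in the way linear models are.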