Enhancing high-content imaging for studying microtubule networks at large-scale

Hao Chih Lee, Sarah T. Cherng, Riccardo Miotto, Joel T. Dudley

Research output: Contribution to journal › Conference article › peer-review

Abstract

Given the crucial role of microtubules in cell survival, many researchers have found success using microtubule-targeting agents in the search for effective cancer therapeutics. Understanding how microtubules respond to targeted interventions requires that the microtubule network within cells be observed consistently across a large sample of images. However, fluorescence noise captured alongside biological signals in wide-field microscopy can obscure fine microtubule structures. This is particularly challenging for high-throughput imaging, where researchers must trade off imaging quality against speed. Here, we propose a computational framework that enhances the quality of high-throughput imaging data, achieving speed and quality simultaneously. Using CycleGAN, we learn an image model from low-throughput, high-resolution images and use it to enhance features, such as microtubule networks, in high-throughput, low-resolution images. We show that CycleGAN identifies microtubules with an AUC-ROC above 0.93 and that these results are robust to different kinds of image noise. We further apply CycleGAN to quantify changes in microtubule density following the application of drug compounds, and show that the quantified responses correspond well with known drug effects.
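This record does not include code, but the CycleGAN objective the abstract describes can be made concrete with a minimal sketch. The PyTorch code below trains two generators (low-resolution to high-resolution and back) with an adversarial loss plus a cycle-consistency loss on unpaired image batches; the names (G_lo2hi, G_hi2lo), the tiny network sizes, and the loss weight are illustrative assumptions for brevity, not the authors' implementation.

    # Minimal, illustrative CycleGAN sketch (PyTorch). Module names, sizes,
    # and loss weights are assumptions, not the paper's implementation.
    import torch
    import torch.nn as nn

    def conv_block(in_ch, out_ch):
        return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1),
                             nn.InstanceNorm2d(out_ch), nn.ReLU(inplace=True))

    class Generator(nn.Module):
        """Tiny fully convolutional generator; real models use deeper ResNets."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(conv_block(1, 32), conv_block(32, 32),
                                     nn.Conv2d(32, 1, 3, padding=1), nn.Tanh())
        def forward(self, x):
            return self.net(x)

    class Discriminator(nn.Module):
        """PatchGAN-style discriminator producing a map of real/fake scores."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(conv_block(1, 32), conv_block(32, 32),
                                     nn.Conv2d(32, 1, 3, padding=1))
        def forward(self, x):
            return self.net(x)

    G_lo2hi, G_hi2lo = Generator(), Generator()    # low->high, high->low mappings
    D_hi, D_lo = Discriminator(), Discriminator()  # one discriminator per domain
    mse, l1 = nn.MSELoss(), nn.L1Loss()            # LSGAN adversarial + L1 cycle losses
    opt_g = torch.optim.Adam(list(G_lo2hi.parameters()) +
                             list(G_hi2lo.parameters()), lr=2e-4)

    def generator_step(x_lo, x_hi, lambda_cyc=10.0):
        """One generator update on an unpaired batch of low-/high-res images.
        The symmetric discriminator update (score real as 1, fakes as 0) is
        omitted here for brevity."""
        opt_g.zero_grad()
        fake_hi, fake_lo = G_lo2hi(x_lo), G_hi2lo(x_hi)
        pred_hi, pred_lo = D_hi(fake_hi), D_lo(fake_lo)
        # Adversarial: fool each discriminator into scoring fakes as real (1).
        adv = mse(pred_hi, torch.ones_like(pred_hi)) + \
              mse(pred_lo, torch.ones_like(pred_lo))
        # Cycle consistency: translating there and back should recover the input.
        cyc = l1(G_hi2lo(fake_hi), x_lo) + l1(G_lo2hi(fake_lo), x_hi)
        loss = adv + lambda_cyc * cyc
        loss.backward()
        opt_g.step()
        return loss.item()

    # Usage on a dummy unpaired batch of 64x64 single-channel image crops:
    x_lo = torch.rand(4, 1, 64, 64)  # high-throughput, low-resolution crops
    x_hi = torch.rand(4, 1, 64, 64)  # low-throughput, high-resolution crops
    print(generator_step(x_lo, x_hi))

The cycle-consistency term is what lets training proceed without paired low/high-resolution images of the same field of view, which is the key property exploited here: high-throughput and high-resolution acquisitions need not be registered to one another.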

Original language: English
Pages (from-to): 592-613
Number of pages: 22
Journal: Proceedings of Machine Learning Research
Volume: 106
State: Published - 2019
Event: 4th Machine Learning for Healthcare Conference, MLHC 2019 - Ann Arbor, United States
Duration: 9 Aug 2019 - 10 Aug 2019
