Workshop on Clusters, Clouds and Grids for Life Sciences

In conjunction with CCGrid 2015 - 15th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing, May 4-7, 2015, Shenzhen, Guangdong, China

Scaling Machine Learning for Target Prediction in Drug Discovery using Apache Spark

Abstract

In the context of drug discovery, a key problem is the identification of candidate molecules that affect proteins associated with diseases. At Janssen Pharmaceutica, the Chemogenomics project aims to derive new candidates from existing experiments through a set of machine learning predictor programs implemented as single-node C++ applications. These programs take a long time to run and their workload is inherently parallel, yet they cannot exploit multiple nodes. We show how we reimplemented the pipeline using Apache Spark, which enabled us to lift the existing programs to a multi-node cluster without making changes to the predictors. We benchmarked our Spark pipeline against the original; the results show the expected linear speedup as nodes are added. In addition, our pipeline generates fewer intermediate files while allowing easier checkpointing and monitoring.
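To illustrate the general approach of driving unmodified single-node executables from Spark, the sketch below uses Spark's RDD.pipe to stream input records through an external process on each worker. This is not the authors' actual code; the binary name `predictor`, its `--stdin` flag, and the HDFS paths are hypothetical placeholders.

```scala
import org.apache.spark.{SparkConf, SparkContext}

object PredictorPipelineSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("ChemogenomicsPredictorSketch")
    val sc = new SparkContext(conf)

    // Each line describes one prediction task (hypothetical input format).
    val tasks = sc.textFile("hdfs:///chemogenomics/tasks.txt")

    // Stream each partition's records through the unmodified C++ predictor.
    // Spark launches one external process per partition and distributes
    // the partitions across the cluster's nodes.
    val predictions = tasks.pipe("./predictor --stdin")

    // Collect the predictor's stdout lines as the pipeline's output.
    predictions.saveAsTextFile("hdfs:///chemogenomics/predictions")
    sc.stop()
  }
}
```

Because the external program only sees lines on stdin and writes results to stdout, it requires no modification; Spark handles partitioning, scheduling, and fault tolerance around it.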