With the advances in deep learning, a question arises: for robust network training, does it matter whether the input is raw data or image-based data processed from it? In this work, we begin exploring this question through the following case study: classifying cancer cells from holographic images. The bottleneck in the classification pipeline is the time-consuming transformation from off-axis image holograms to optical path delay (OPD) maps. Although fast transformation methods have been proposed, they still take non-negligible time. We propose a novel approach for fast classification that skips the OPD-map pre-processing and operates directly on the raw holographic images. Our dataset contains two separate image acquisitions of primary (SW480) and metastatic (SW620) colorectal adenocarcinoma cancer cells imaged during flow. We extracted the OPD maps of these cells and used them to train and evaluate a ResNet model as a baseline. Our convolutional neural network (CNN) follows the Y-Net approach: during training, synthetic OPD images are generated from the input holograms, supervised by the real OPD maps, while the cells are simultaneously classified. At inference time, only the classification branch operates, further reducing the running time. This approach reduces computational time by over 90%.
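To make the dual-branch idea concrete, the following is a minimal sketch, not the paper's actual architecture: a shared encoder feeds both a reconstruction head (producing a synthetic OPD image, supervised by the real OPD map during training) and a classification head; at inference the reconstruction head is simply never evaluated. All layer shapes and names here are hypothetical placeholders for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for learned weights (hypothetical shapes, not the paper's model).
W_enc = rng.standard_normal((64, 32)) * 0.1   # shared encoder
W_dec = rng.standard_normal((32, 64)) * 0.1   # reconstruction head -> synthetic OPD
W_cls = rng.standard_normal((32, 2)) * 0.1    # classification head (SW480 vs. SW620)

def encode(hologram):
    # Shared feature extractor applied directly to the raw hologram.
    return np.maximum(hologram @ W_enc, 0.0)  # ReLU

def forward_train(hologram):
    # Training: both branches run, so a reconstruction loss against the
    # real OPD map can shape the encoder alongside the classification loss.
    z = encode(hologram)
    opd_synth = z @ W_dec   # synthetic OPD image
    logits = z @ W_cls      # class scores
    return opd_synth, logits

def forward_infer(hologram):
    # Inference: the reconstruction branch is skipped entirely,
    # which is where the runtime saving comes from.
    z = encode(hologram)
    return z @ W_cls

hologram = rng.standard_normal(64)        # flattened toy hologram
opd_synth, logits = forward_train(hologram)
pred_logits = forward_infer(hologram)     # identical logits, less compute
```

Because both forward passes share the same encoder weights, the inference path returns exactly the same class logits as the training path while omitting the decoder computation, mirroring how dropping the OPD branch at test time cuts the overall running time.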