Deep neural networks and machine learning are rapidly emerging concepts in the field of data science. Deep network models outperform classical machine learning techniques thanks to multilayer hierarchical feature extraction, in conjunction with control variables such as the number of hidden layers and the choice of activation function, and tunable parameters such as learning rates, initial weights, and decay schedules. While most of these parameters control the learning dynamics or the representational complexity a network can handle, it is the activation function alone that introduces non-linearity into the network. The current state of activation functions poses several challenges to both practitioners and researchers, including:

• Vanishing and exploding gradients during back-propagation
• Zero-centering and range of outputs
• Computational complexity of the function
• Predictive performance

For this reason, the objective of the current work is to explain, with reasoning and experiments, the landscape of available activation functions. According to a recent study, there are enough state-of-the-art activation functions to warrant revisiting the architecture of well-known deep network models. This study builds on the widely adopted ResNet-18 network architecture: we evaluate the effectiveness of ResNet-18 for image classification under various activation functions.
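The vanishing-gradient challenge listed above can be made concrete with a minimal sketch (an illustrative assumption, not part of the study's experiments): back-propagation multiplies the local derivative of the activation at each layer, so a sigmoid, whose derivative never exceeds 0.25, shrinks the gradient geometrically with depth, while ReLU passes a derivative of 1 in its active region.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_grad(x):
    # Derivative of the sigmoid: s(x) * (1 - s(x)), maximized at x = 0 with value 0.25.
    s = sigmoid(x)
    return s * (1.0 - s)

def relu_grad(x):
    # Derivative of ReLU: 1 in the active region (x > 0), 0 otherwise.
    return 1.0 if x > 0 else 0.0

# Back-propagating through `depth` layers multiplies the per-layer derivatives.
depth = 20
sig_product = 1.0
relu_product = 1.0
for _ in range(depth):
    sig_product *= sigmoid_grad(0.0)   # 0.25 per layer, the sigmoid's best case
    relu_product *= relu_grad(1.0)     # 1.0 per layer in the active region

print(sig_product)   # shrinks to roughly 9.1e-13 after 20 layers
print(relu_product)  # stays at 1.0
```

Even in the sigmoid's best case (pre-activations at zero), twenty layers attenuate the gradient by twelve orders of magnitude, which is why the choice of activation function dominates trainability in deep models such as ResNet-18.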