Abstract
Background: Deep neural networks have become the state-of-the-art technology for real-world classification tasks due to their ability to learn progressively better feature representations at each layer. However, the added accuracy associated with deeper layers comes at a substantial cost in computation, energy, and latency.
Objective: Implementing such architectures on resource-constrained IoT devices is prohibitive due to their computational and memory requirements, constraints that are particularly severe in the IoT domain. In this paper, we propose the Adaptive Deep Neural Network (ADNN), which is split across the compute hierarchy, i.e., edge, fog, and cloud, with each split having one or more exit locations.
Methods: At every exit location, a data sample adaptively chooses either to exit the network (based on a confidence criterion) or to be fed into the deeper layers housed at subsequent compute layers. We design ADNN, an adaptive deep neural network that enables fast and energy-efficient decision making (inference).
We jointly optimize all the exit points in ADNN such that the overall loss is minimized.
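The confidence criterion at each exit can be illustrated with a minimal sketch: a sample exits early when the entropy of its softmax output falls below a threshold, and is otherwise forwarded to the deeper layers at the next compute tier. This is an assumption-laden illustration, not the paper's implementation; the function names, the entropy-based criterion, and the threshold value are all illustrative.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over a 1-D array of logits."""
    e = np.exp(logits - np.max(logits))
    return e / e.sum()

def entropy(p):
    """Shannon entropy of a probability vector (natural log)."""
    return float(-np.sum(p * np.log(p + 1e-12)))

def early_exit_decision(logits, threshold):
    """Hypothetical exit rule: exit if the softmax entropy is below
    `threshold`; otherwise forward the sample to deeper layers.
    Returns (should_exit, predicted_class)."""
    p = softmax(np.asarray(logits, dtype=float))
    return entropy(p) < threshold, int(np.argmax(p))

# A confident sample (peaked logits): low entropy, exits at this tier.
exit_here, label = early_exit_decision([8.0, 0.5, 0.1], threshold=0.5)

# An ambiguous sample (near-uniform logits): high entropy, is
# forwarded to the fog/cloud layers instead of exiting.
exit_later, _ = early_exit_decision([1.0, 0.9, 1.1], threshold=0.5)
```

In a deployment, the edge, fog, and cloud tiers would each run their slice of the network and apply this check before deciding whether to transmit intermediate activations upstream.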
Results: Experiments on the MNIST dataset show that 41.9% of samples exit (correctly classified) at the edge location and 49.7% of samples exit at the fog layer. Similar results are obtained on the Fashion-MNIST dataset, with only 19.4% of the samples requiring the entire network. With this architecture, most data samples are processed and classified locally while maintaining classification accuracy and keeping the communication, energy, and latency requirements of time-sensitive IoT applications in check.
Conclusion: We investigated distributing the layers of a deep neural network across edge, fog, and cloud computing devices, wherein data samples adaptively choose exit points based on a confidence criterion (threshold). The results show that the majority of data samples are classified within the user's private network (edge, fog), while only a few samples require the entire ADNN for classification.
Keywords: Deep neural networks, fog computing, Internet of Things, ADNN, MNIST, fog layer.
Graphical Abstract