
Distributed Neural System

DiNeSys is a distributed system built for deep learning network profiling on cloud-edge systems, developed with TensorFlow (CNN computation) and Apache Thrift (client/server structure). The system supports a cluster of connected clients and coordinates their behavior in order to distribute image computation. The central master server, following a stateful approach, provides a frame of reference to all connected clients. The controller creates and monitors all the tests, while the Mobile Edges and Cloud Servers jointly compute the Convolutional Neural Network.

All the elements are connected to the master server through the log service and the controller service. These two services provide different functions depending on the element type: the controller element manages an experiment and downloads all the logs at the end, while the Mobile Edges and the Cloud Server, connected to each other, carry out the experiment.
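As a rough illustration of the client/server structure, the sketch below shows how an edge element might connect to the master's controller service using Apache Thrift's Python bindings. The service name (ControllerService), its register_element method, and the port are hypothetical placeholders for this sketch, not names taken from the repository's actual IDL; the real stubs would be generated by the Thrift compiler.

```python
# Hedged sketch: connecting an edge element to the master server via Thrift.
# ControllerService / register_element are assumed names for illustration;
# the real stubs would be generated with `thrift --gen py` from the repo's IDL.
from thrift.transport import TSocket, TTransport
from thrift.protocol import TBinaryProtocol
from controller_service import ControllerService  # hypothetical generated stub

def connect_to_master(host="localhost", port=9090):
    socket = TSocket.TSocket(host, port)
    transport = TTransport.TBufferedTransport(socket)
    protocol = TBinaryProtocol.TBinaryProtocol(transport)
    client = ControllerService.Client(protocol)
    transport.open()
    # The stateful master registers the element and returns its identity,
    # giving every connected client a common frame of reference.
    element_id = client.register_element("MOBILE_EDGE")
    return client, element_id
```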

Experimental Scenarios

Scenario A: Limited Edge Computing (Low Power Device)

Setup: We deploy a low-power single-board computer (Raspberry Pi) at the edge to collect sensor data and process a small part of the DNN efficiently.
Data Processing: Most of the data is sent unprocessed to the Cloud server.
Computing Power Edge: Because of the limited on-device processing, this scenario relies heavily on cloud-based resources.

Scenario B: Moderate Edge Computing (Mid-Range Device)

Setup: We deploy a device with moderate processing power at the edge (Odroid N2), which allows it to process a greater part of the DNN efficiently.
Data Processing: Pre-processed data is sent to the central server, reducing the amount of raw data transferred.
Computing Power Edge: A balance between on-device processing and cloud-based analysis.

Scenario C: High Edge Computing (High-Power Device)

Setup: We deploy a powerful device with a dedicated GPU at the edge (Nvidia TX2), which allows most of the DNN layers to run directly on the edge device.
Data Processing: Minimal data transfer to the central server, limited to data synchronization and remote monitoring.
Computing Power Edge: Relies primarily on on-device processing for the DNN computation.

Expected Outcomes:

We expect to see a correlation between increasing computing power at the edge and higher throughput (processed images per second). Scenario C might achieve faster response times due to on-device DNN processing, while Scenarios A and B might face limitations in DNN processing due to their reliance on the cloud server.

Results

To verify the expected outcomes, the VGG16 DNN has been cut at every possible layer boundary, dividing the computing effort between the Edge computer and the Cloud computer accordingly. The graphs below present the computing time measured for each cut. The expected outcomes are confirmed.
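For reference, a minimal sketch of how such a cut can be expressed with the Keras API is shown below. The split_vgg16 helper is written for illustration only; the repository's actual splitting code may differ.

```python
# Minimal sketch: splitting VGG16 at a chosen layer so the edge device
# computes layers up to the cut and the cloud computes the remainder.
# Illustrative only; the repo's actual splitting logic may differ.
import tensorflow as tf

def split_vgg16(cut):
    full = tf.keras.applications.VGG16(weights=None)  # topology only
    # Edge part: original input through the cut layer (cut >= 1 to
    # skip the InputLayer).
    edge = tf.keras.Model(inputs=full.input, outputs=full.layers[cut].output)
    # Cloud part: consumes the intermediate tensor produced at the cut.
    inter = tf.keras.Input(shape=edge.output_shape[1:])
    x = inter
    for layer in full.layers[cut + 1:]:  # VGG16 is linear, so chaining works
        x = layer(x)
    cloud = tf.keras.Model(inputs=inter, outputs=x)
    return edge, cloud
```

Timing the edge and cloud parts separately for every cut index then yields per-cut computing times of the kind shown in the graphs below.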

Scenario A: Limited Edge Computing (Low Power Device)

[Graph: computing time for each VGG16 cut, Scenario A]

Scenario B: Moderate Edge Computing (Mid-Range Device)

[Graph: computing time for each VGG16 cut, Scenario B]

Scenario C: High Edge Computing (High-Power Device)

[Graph: computing time for each VGG16 cut, Scenario C]
