On Nov. 26, I attended the third Big Data Symposium held at the University of Aizu. Here are some notes from the presentations.
- The presentations focused on methods for generating, processing, and delivering Big Data. This is much needed because huge amounts of data are being generated by Internet-connected devices.
- As for content generation, a method was presented for constructing 3D videos from 2D videos using motion parallax: among objects moving at the same physical speed, the more distant ones appear to move more slowly on screen. The same parallax cue is also applied to mean-shift detection and segmentation of images (a small sketch of the parallax cue follows at the end of these notes).
- As for content processing, in-memory computing, a new architecture that can significantly reduce processing time, was presented. The basic idea is to keep the data in memory (RAM) so that it can be accessed much more quickly than through the traditional approach of reading from an HDD, thereby reducing processing time. To support in-memory computing, a new kind of memory called storage class memory (SCM) has been developed. As the name suggests, SCM has a very large capacity (up to several terabytes), enough to hold all of an application's data (see the RAM-versus-disk sketch after these notes).
- As for content delivery, one of the key challenges is how to deliver the data of different applications with different requirements. For example, real-time applications such as search and video streaming usually have very stringent delay requirements. To address this challenge in the presence of Big Data, the Fog computing architecture has been proposed. The basic idea of Fog computing is to place additional processing nodes closer to the users so that 1) the delay is reduced and 2) the load on the central server is reduced as well. For example, the additional nodes can perform some pre-processing to eliminate redundancy in the raw data before sending it to the central server for processing (see the edge pre-processing sketch after these notes).
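
The talk did not go into implementation details, but the motion-parallax cue can be illustrated with a minimal sketch: assuming all tracked objects translate at roughly the same physical speed, the one with the smaller apparent (pixel) motion between frames is the more distant one. The `apparent_speed` helper and the toy tracks below are my own illustrative assumptions, not from the presentation.

```python
import numpy as np

def apparent_speed(track):
    """Mean pixel displacement per frame for one tracked point.

    `track` is an (N, 2) array of (x, y) positions over N frames.
    """
    diffs = np.diff(track, axis=0)           # per-frame displacement vectors
    return np.linalg.norm(diffs, axis=1).mean()

def relative_depth_order(tracks):
    """Order tracked objects from near to far using motion parallax.

    Assumes all objects move at (roughly) the same physical speed, so a
    smaller apparent speed implies a larger distance from the camera.
    Returns object indices sorted nearest-first.
    """
    speeds = np.array([apparent_speed(t) for t in tracks])
    return np.argsort(-speeds)               # faster on screen -> nearer

# Toy example: object 0 moves 5 px/frame, object 1 only 1 px/frame.
near = np.cumsum(np.full((10, 2), 5.0), axis=0)
far = np.cumsum(np.full((10, 2), 1.0), axis=0)
print(relative_depth_order([near, far]))     # -> [0 1]: object 0 is nearer
```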
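
The following is only a toy illustration of why in-memory computing helps, not a model of SCM itself: it compares repeated lookups served from an in-RAM dictionary with lookups that re-read the same records from a file on disk. The file name, record layout, and record count are assumptions made for the example.

```python
import json
import os
import tempfile
import time

# Write some toy records to disk (stand-in for data that would live on an HDD).
records = {str(i): {"value": i * i} for i in range(50_000)}
path = os.path.join(tempfile.gettempdir(), "bigdata_demo.json")
with open(path, "w") as f:
    json.dump(records, f)

def lookup_from_disk(key):
    """Traditional approach: go back to the file for every access."""
    with open(path) as f:
        return json.load(f)[key]

# In-memory approach: load the data once, then serve all lookups from RAM.
in_memory = json.loads(open(path).read())

def lookup_from_memory(key):
    return in_memory[key]

t0 = time.perf_counter()
for i in range(0, 50_000, 5_000):
    lookup_from_disk(str(i))
disk_time = time.perf_counter() - t0

t0 = time.perf_counter()
for i in range(0, 50_000, 5_000):
    lookup_from_memory(str(i))
mem_time = time.perf_counter() - t0

print(f"disk: {disk_time:.3f}s  memory: {mem_time:.6f}s")
```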
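
As a rough sketch of the "pre-process at the edge" idea behind Fog computing, the fog node below drops duplicate sensor readings and aggregates the rest into per-sensor summaries, so only a much smaller payload travels to the central server. The message format and the choice of per-sensor averages are my own assumptions for illustration, not details from the talk.

```python
from collections import defaultdict
from statistics import mean

def fog_preprocess(raw_readings):
    """Runs at a fog node close to the data sources.

    Removes duplicate readings and aggregates the rest into per-sensor
    averages, so only a small summary is forwarded to the central server.
    `raw_readings` is a list of (sensor_id, timestamp, value) tuples.
    """
    seen = set()
    per_sensor = defaultdict(list)
    for sensor_id, ts, value in raw_readings:
        if (sensor_id, ts) in seen:          # drop redundant duplicates
            continue
        seen.add((sensor_id, ts))
        per_sensor[sensor_id].append(value)
    # Summary sent upstream: one record per sensor instead of one per reading.
    return {sid: {"count": len(vs), "avg": mean(vs)}
            for sid, vs in per_sensor.items()}

# Toy example: duplicated and bursty readings from two sensors.
raw = [("s1", 0, 20.1), ("s1", 0, 20.1), ("s1", 1, 20.3),
       ("s2", 0, 18.7), ("s2", 1, 18.9), ("s2", 1, 18.9)]
print(fog_preprocess(raw))
# e.g. {'s1': {'count': 2, 'avg': ~20.2}, 's2': {'count': 2, 'avg': ~18.8}}
```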