Critically evaluate and discuss the challenges and opportunities of implementing big data that have arisen in view of the social networking revolution.
With the advent of Internet of Things (IoT) and Web 2.0 technologies, there has been tremendous growth in the amount of data generated. This chapter emphasizes the need for big data and discusses the technological advancements, tools, and techniques used to process it. Technological improvements and the limitations of existing storage techniques are also presented. Since traditional technologies such as Relational Database Management Systems (RDBMS) have limitations in handling big data, new technologies have been developed to manage these data and to derive useful insights from them.
With the digitization of most processes, the emergence of different social network platforms and blogs, the deployment of different kinds of sensors, the adoption of hand-held digital devices and wearable devices, and the explosion in Internet usage, huge amounts of data are being generated on a continuous basis. No one can deny that the Internet has changed the way businesses operate, how governments function, and the education and lifestyle of people around the world. Today, this trend is in a transformative stage: the rate of data generation is very high, and the types of data being generated surpass the capability of existing storage techniques. It cannot be denied that these data carry far more information than ever before, owing to the emergence and adoption of the Internet.
Over the past two decades, there has been tremendous growth in data, and the trend can be observed in almost every field. According to a report by the research firm International Data Corporation (IDC), the amount of information in the digital universe will grow by 35 trillion gigabytes between 2012 and 2020 (one gigabyte is equivalent to 40 four-drawer file cabinets of text, or two music CDs). That is on par with the number of stars in the physical universe (Forsyth, 2012).
In the mid-2000s, the emergence of social media, cloud computing, and greater processing power (through multi-core processors and GPUs) contributed to the rise of big data (Manovich, 2011; Agneeswaran, 2012). As of December 2015, Facebook had an average of 1.04 billion daily active users and 934 million mobile daily active users, was available in 70 languages, and hosted 125 billion friend connections; 205 billion photos had been uploaded, 30 billion pieces of content and 2.7 billion likes and comments were being posted every day, and the average user had 130 friends (Facebook, 2015). This has created new pathways to study social and cultural dynamics. Although big data has gained attention due to the emergence of the Internet, the two should not be equated. Big data goes beyond the Internet, even though the Web makes it easier to collect and share knowledge, as well as data in raw form. Big data is about how such data can be stored, processed, and comprehended so that it can be used to predict future courses of action with great precision and acceptable time delay.
The current and emerging focus of big data analytics is to apply traditional techniques such as rule-based systems, pattern mining, decision trees, and other data mining techniques to derive business rules efficiently even from large data sets. This can be achieved either by developing algorithms that use distributed data storage and in-memory computation, or by using cluster computing for parallel computation. Earlier, such processing was carried out using grid computing, which has since been overtaken by cloud computing.
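To make the cluster-computing idea concrete, here is a minimal sketch in R (using the base parallel package) that splits a computation across local worker processes and merges the partial results in a map-reduce style. The data partitions and the summarise_chunk helper are illustrative assumptions, not part of any particular big data platform.

```r
# Minimal sketch: parallel aggregation over data partitions in base R.
# The partitions and the helper function are hypothetical examples.
library(parallel)

# Simulate partitions of a large data set (in practice these could be
# file shards or database partitions read independently by each worker).
chunks <- split(rnorm(1e6), rep(1:8, length.out = 1e6))

# Per-partition computation: a partial aggregate (count and sum).
summarise_chunk <- function(x) c(n = length(x), s = sum(x))

# Distribute the work across a local cluster of worker processes.
cl <- makeCluster(max(1L, detectCores() - 1L))
partials <- parLapply(cl, chunks, summarise_chunk)
stopCluster(cl)

# Merge the partial results into a global statistic (map-reduce style).
totals <- Reduce(`+`, partials)
print(totals["s"] / totals["n"])   # global mean
```

The same split/compute/merge pattern underlies distributed frameworks; only the transport changes when the workers live on separate machines rather than local processes.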
The concept of big data dates back to 2001, when the challenges of ever-increasing data were addressed by Laney's (2001) 3Vs model. The 3Vs, also known as the dimensions of big data, represent the increasing Volume, Variety, and Velocity of data (Assunção et al., 2015). The model was not originally proposed as a definition of big data, but it was later adopted for that purpose by various enterprises, including Microsoft and IBM (Meijer, 2011).
In 2010, Apache Hadoop defined big data as "datasets, which could not be captured, managed, and processed by general computers within an acceptable scope" (Chen et al., 2014, p. 173). Following this, in 2011, the McKinsey Global Institute defined big data as "datasets whose size is beyond the ability of typical database software tools to capture, store, manage, and analyze" (Manyika et al., 2011, p. 1). International Data Corporation (IDC) defines "big data technologies as a new generation of technologies and architectures, designed to economically extract value from very large volumes of a wide variety of data, by enabling high-velocity capture, discovery, and/or analysis" (Gantz and Reinsel, 2011, p. 6).
Digitization of content by industries is another new source of data (Villars et al., 2011), and advancements in technology also lead to a high rate of data generation. For example, the Sloan Digital Sky Survey (SDSS), one of the biggest surveys in astronomy, recorded a total of 25 TB of data during its first (2000-2005) and second (2005-2008) surveys combined; with improvements in telescope resolution, the data collected by the end of its third survey (2008-2014) reached 100 TB. The use of "smart" instrumentation is another source of big data: smart meters in the energy sector record electricity usage every 15 minutes (96 readings per day), compared with a single monthly reading before. Data produced by the social media sector include blog posts, tweets, social networking updates, and log details, which are used to analyze customer behavior patterns.
Tools used to collect data encompass various digital devices (for example, mobile devices, cameras, wearable devices, and smart watches) and applications that generate enormous amounts of data in the form of logs, text, voice, images, and video. To process these data, researchers are developing new techniques for better representation of unstructured data in the big data context, so as to gain useful insights that may not have been envisioned earlier.
R: an open-source statistical computing language that provides a wide variety of statistical and graphical techniques for deriving insights from data. It has effective data handling and storage facilities and supports vector operations with a suite of operators for faster processing. It has all the features of a standard programming language, including conditionals, loops, and user-defined functions.
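A brief sketch of these language features, using only base R; the sales data here are simulated for illustration:

```r
# Sketch of core R features on simulated data (base R only).
set.seed(42)
sales <- rnorm(1000, mean = 250, sd = 60)   # hypothetical daily sales figures

# Vectorized operations: no explicit loop required.
sales_taxed <- sales * 1.08                 # scalar applied to every element
high_volume <- sales[sales > 300]           # logical indexing

# A user-defined function with a conditional.
describe <- function(x, robust = FALSE) {
  if (robust) c(center = median(x), spread = IQR(x))
  else        c(center = mean(x),   spread = sd(x))
}
describe(sales)
describe(sales, robust = TRUE)

# A loop: summarize successive blocks of 100 observations.
for (i in seq(1, length(sales), by = 100)) {
  cat("block", i, "mean =", round(mean(sales[i:(i + 99)]), 1), "\n")
}

# Built-in graphics for quick exploration.
hist(sales, main = "Simulated daily sales", xlab = "Sales")
```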
Despite the growth in these technologies and algorithms to handle big data, a few limitations remain, which are discussed in this section.
1. Scalability and Storage Issues: The rate of increase in data far outpaces existing processing systems, and current storage systems are not capable of holding these data (Chen et al., 2014; Li and Lu, 2014; Kaisler et al., 2013; Assunção et al., 2015). There is a need to develop processing systems that cater not only to today's needs but also to those of the future.
2. Timeliness of Analysis: The value of data decreases over time. Many applications, such as fraud detection in telecom, insurance, and banking, require real-time or near-real-time analysis of transactional data (Chen et al., 2014; Li and Lu, 2014); a toy sketch of such a check follows this list.
3. Representation of Heterogeneous Data: Data obtained from various sources are heterogeneous in nature. Unstructured data such as images, videos, and social media content cannot be stored and processed using traditional tools like SQL. Smartphones now record and share images, audio, and video at an ever-increasing rate, yet efficient techniques for representing, storing, and processing such media are still lacking (Chen et al., 2014; Li and Lu, 2014; Cuzzocrea et al., 2011).
4. Data Analytics System: Traditional RDBMSs are suitable only for structured data, and they lack scalability and expandability. Non-relational databases are used for processing unstructured data, but their performance remains problematic. There is a need to design a system that combines the benefits of both relational and non-relational database systems to ensure flexibility (Chen et al., 2014; Li and Lu, 2014).
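To make the timeliness concern (item 2) concrete, the following toy sketch in R flags transactions that deviate sharply from a rolling baseline as they arrive. The window size, cutoff, and simulated amounts are illustrative assumptions, not a production fraud detector.

```r
# Toy near-real-time check: flag a transaction when it deviates strongly
# from a rolling baseline of recent amounts (all parameters are assumed).
set.seed(1)
amounts <- c(rnorm(500, mean = 40, sd = 10),   # normal activity
             400,                              # one injected anomaly
             rnorm(50,  mean = 40, sd = 10))

window <- 100   # size of the rolling baseline
cutoff <- 6     # flag if more than 'cutoff' robust deviations from the median

for (t in (window + 1):length(amounts)) {      # simulate arrivals one by one
  recent <- amounts[(t - window):(t - 1)]
  score  <- abs(amounts[t] - median(recent)) / mad(recent)
  if (score > cutoff) {
    cat(sprintf("t=%d amount=%.2f flagged (score %.1f)\n", t, amounts[t], score))
  }
}
```

In a real deployment the same scoring step would run inside a streaming pipeline rather than a loop over a vector, which is precisely why timeliness is listed as an open limitation.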