Many companies have been analyzing external and internal data extensively for a long time. In many areas, however, additional technology is needed to meet the challenges posed by large, rapidly changing and often poly-structured data sets. This is because big data does not just mean another order of magnitude in data volume: big data stands for volume, variety and velocity (the three Vs).
Big Data – Volume
According to Wikipedia, there were 1.8 zettabytes (10²¹ bytes, i.e. 1.8 trillion gigabytes) of digital information on earth as of 2011, a volume that grew by 62 percent in 2009 alone. If you wanted to burn the entire amount onto DVDs, the resulting stack would stretch from the earth to the moon and back again.
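A quick back-of-the-envelope calculation illustrates the scale. The DVD capacity and disc thickness below are our own assumptions (single-layer DVDs), not figures from the comparison itself, so the exact stack height depends on them:

```python
# Back-of-the-envelope check of the DVD-stack comparison.
# Assumptions (ours): 4.7 GB per single-layer DVD, 1.2 mm disc thickness.
TOTAL_BYTES = 1.8e21        # 1.8 zettabytes of digital information
DVD_BYTES = 4.7e9           # capacity of one single-layer DVD
DISC_THICKNESS_M = 1.2e-3   # 1.2 mm per disc

discs = TOTAL_BYTES / DVD_BYTES
stack_km = discs * DISC_THICKNESS_M / 1000

print(f"{discs:.2e} DVDs, stack height ~ {stack_km:,.0f} km")
```

Under these assumptions the stack comes to several hundred thousand kilometres, i.e. well beyond the one-way earth–moon distance of roughly 384,400 km.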
Classic and much-cited examples of big data are information from social media, such as Facebook and Twitter. However relevant this data may be, there are usually a large number of internal and external data pools that are much more obvious candidates. Here, too, detailing, combining and historizing data often lead to volumes that are difficult to handle satisfactorily with traditional approaches. Segmenting/partitioning and prioritizing these data stocks is a decisive step towards solving this problem.
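The idea behind partitioning can be sketched in a few lines. The records and the month-based partition key below are purely illustrative; real systems apply the same principle inside the database engine:

```python
from collections import defaultdict
from datetime import date

# Illustrative sketch: range-partitioning event records by month, so that
# queries and archiving only touch the partitions they actually need.
events = [
    {"ts": date(2023, 1, 15), "amount": 120.0},
    {"ts": date(2023, 1, 28), "amount": 75.5},
    {"ts": date(2023, 2, 3),  "amount": 310.0},
]

partitions = defaultdict(list)
for event in events:
    key = (event["ts"].year, event["ts"].month)  # partition key: (year, month)
    partitions[key].append(event)

# A monthly report now only scans one partition instead of the full table.
january = partitions[(2023, 1)]
print(len(january))  # → 2
```

Prioritization then decides which partitions stay on fast storage and which are archived.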
Industry 4.0 and the IoT in particular are once again increasing data volumes significantly.
Big Data – Variety
The multitude of information technology solutions, globalization and the speed at which systems change lead to frequent changes in information sources. Relational database systems, the basis of many data warehouse systems, expect pre-structured data. Big data solutions, by contrast, allow structuring to be deferred until the time of query or analysis (ELT instead of ETL, often called schema-on-read).
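A minimal sketch of this ELT idea, using made-up records in three different shapes: the raw documents are loaded as-is, and structure is imposed only when the analysis runs:

```python
import json

# Schema-on-read sketch (our illustration, not a specific product):
# raw, differently structured records are loaded unchanged ("EL"),
# and structure is applied only at query time ("T").
raw_records = [
    '{"customer": "A", "revenue": 100}',
    '{"customer": "B", "sales": {"revenue": 250}}',  # nested differently
    '{"customer": "C"}',                             # field missing entirely
]

def revenue(doc: dict) -> float:
    """Interpret each document's structure at analysis time."""
    if "revenue" in doc:
        return doc["revenue"]
    return doc.get("sales", {}).get("revenue", 0.0)

total = sum(revenue(json.loads(r)) for r in raw_records)
print(total)  # → 350
```

When a source changes its format, only the query-time interpretation has to be adapted, not the loading pipeline.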
Established databases can also process documents of differing structure, such as JSON, via appropriate NoSQL modes; as a rule, no new technologies are required for this use case.
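As one example of such a mode, SQLite ships JSON functions that let you store heterogeneous documents in an ordinary table and structure them in the query. The documents below are made up, and the JSON functions are only available if your SQLite library was built with them (the default in modern builds):

```python
import sqlite3

# Sketch: JSON documents of differing structure in a relational table,
# queried with SQLite's JSON functions (availability depends on the build).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE docs (body TEXT)")
con.executemany(
    "INSERT INTO docs VALUES (?)",
    [
        ('{"type": "order", "total": 99.9}',),
        ('{"type": "order", "total": 10.1, "voucher": "XMAS"}',),
        ('{"type": "click", "page": "/home"}',),  # entirely different shape
    ],
)

# Structure is applied in the query, not on load.
row = con.execute(
    "SELECT SUM(json_extract(body, '$.total')) "
    "FROM docs WHERE json_extract(body, '$.type') = 'order'"
).fetchone()
print(row[0])  # → 110.0
```

Oracle, PostgreSQL and other established systems offer comparable JSON support.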
Big Data – Velocity
Within a few seconds you receive the results of extensive analyses that show you exactly where investments are worthwhile, which customer groups are relevant for a campaign, or how you can optimize processes. Classic data warehouse systems can deliver on this promise only up to a certain data volume.
The amount of data that now has to be handled within seconds often overwhelms these systems. Combining classic data warehouse solutions with in-memory and batch processing systems solves this problem as well. It is also important, however, to rethink which aspects of the information to be provided are actually relevant. Our team of specialists will be happy to assist you in implementing and establishing these changes in your company.
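The principle behind such a combination can be sketched as a two-layer design (a simplification of the "lambda architecture" pattern; all figures and function names below are our own illustration): a slow, exact batch run computes the base totals, while a fast in-memory layer keeps results current between batch runs:

```python
# Simplified batch + in-memory combination (illustrative, not a product):
# a periodic batch job computes exact totals, an incremental layer keeps
# the served figure up to date within seconds of each new event.
batch_total = 1_000.0   # result of the last (slow, exact) batch run
incremental = 0.0       # fast in-memory delta since that run

def on_event(amount: float) -> float:
    """Update the served value immediately when a new event arrives."""
    global incremental
    incremental += amount
    return batch_total + incremental

def on_batch_finished(new_total: float) -> None:
    """A fresh batch run replaces the base and resets the fast layer."""
    global batch_total, incremental
    batch_total, incremental = new_total, 0.0

print(on_event(50.0))  # → 1050.0
print(on_event(25.0))  # → 1075.0
```

The batch layer guarantees correctness, the in-memory layer guarantees seconds-level freshness; neither alone achieves both at large volumes.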
Big Data – Value
The expectations of big data, fuelled by the promises of the various providers, are high. We would be happy to conduct feasibility studies and free initial consultations with you on big data and related topics. This ensures that your planned solution actually delivers the expected benefits.
Big Data – Consulting
Open Logic Systems is your service provider for big data solutions. We support you in the selection of suitable tools and methods, as well as the technological implementation. Examples of the technologies we use are:
- Oracle Exadata
- Talend ETL
- Apache Drill
Presenting and visualizing the information in an understandable way also requires new approaches, given the diversity of the data. We have been working on these questions, which go hand in hand with the topic of big data, for some time under the heading of visual analytics. We would be happy to introduce you to solutions that we have already implemented successfully for well-known customers.
We are happy to support you in analyzing your data using artificial intelligence.
The best thing to do is call +49 2547 93998 0 or send an email to email@example.com.