What is Apache Hadoop technology and how can it be applied to Big Data solutions?
The underlying technology behind Apache Hadoop was originally invented by Google years ago so they could make sense of, and index, the textual and structural information they were collecting and then present it to users in a useful format. At the time, no other software was available to allow Google to achieve these goals, so they developed their own system. These innovations were then used in the development of Nutch, an open source project, and Apache Hadoop was a later spin-off of that.
Apache Hadoop was built to solve problems where you have large amounts of data that don't fit neatly into tables. It is aimed at users who want to run deep, complex analytics, such as clustering and targeting. By some estimates, around eighty percent of the world's data is unstructured, and most businesses don't use this information to their advantage.
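To make the processing model concrete, here is a minimal sketch of the MapReduce paradigm that Hadoop popularised, written in plain Python rather than the actual Hadoop Java API. The word-count task, input documents, and function names are illustrative assumptions, but the two-phase shape — a map step that emits key/value pairs from unstructured text, and a reduce step that aggregates them per key — is how Hadoop jobs are structured.

```python
from collections import defaultdict

def map_phase(documents):
    """Map step: emit (word, 1) pairs from raw, unstructured text."""
    for doc in documents:
        for word in doc.lower().split():
            yield word, 1

def reduce_phase(pairs):
    """Reduce step: aggregate the emitted pairs, summing counts per word."""
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

# Hypothetical input standing in for a large unstructured corpus.
docs = ["Hadoop stores big data", "big data needs Hadoop"]
print(reduce_phase(map_phase(docs)))
```

In a real Hadoop cluster, the map and reduce steps run in parallel across many machines, with the framework handling the shuffle of intermediate pairs between them; this sketch only shows the programming model, not the distribution.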
Apache Hadoop relies on internally redundant storage, replicating data across nodes, and can be deployed on industry-standard servers rather than expensive specialised storage systems. This makes it economical to store data that was not previously viable to keep. With Hadoop, no data set is too big, meaning that businesses and organisations can now find value in data that until recently was considered unusable.
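That internal redundancy is configurable in HDFS, Hadoop's distributed file system: every block of a file is replicated across multiple servers, so the loss of one commodity machine does not lose data. The replication factor is controlled by the `dfs.replication` property in `hdfs-site.xml` (the default is 3); a minimal fragment setting it explicitly might look like:

```xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
</configuration>
```

Raising the value increases durability at the cost of disk space; lowering it trades safety for capacity on small test clusters.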