Big Data analytics is the analysis of data sets that often contain many different types of data. It helps companies extract value from huge volumes of information. The goal: uncover hidden patterns, market trends and other useful business insights that enable more accurate and timely decisions.

Mainstream data handling methods used for relational databases cannot process unstructured data such as images, email, sound files and EDI messages, which are common elements of Big Data. And these traditional methods can’t handle the processing demands posed by very large data sets and the need for frequent or continuous updates.

So, many organizations are giving Hadoop and related tools a serious look. TIBCO™ has added TERR, its enterprise-grade analytics engine, to Hadoop. The combination pairs TERR’s analytics with Hadoop’s open source software framework for processing large and diverse data sets across clustered systems.

What Puts the Oomph into Big Data Analytics Solutions?

In general, the ability to work very fast on very large volumes of data. Specifically, effective solutions would:

    • Provide visibility into many forms of information.
    • Transform data into actionable information—quickly.
    • Operate at high speed and enable frequent updates.
    • Perform well on very large data sets.
    • Be compatible with many software applications.

Here’s how the Big Data-crunching combination of Hadoop and TERR delivers all of these capabilities and more.

Analyze many types of information
Apache™ Hadoop® is known for its ability to handle unstructured data such as weblogs, sensor information and text. In addition, a Spotfire®-Hadoop connector enables data professionals to combine and analyze information from Hadoop clusters with structured data from business applications such as Oracle® ERP.
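
To make that idea concrete, here is a minimal base-R sketch of combining cluster-side data with structured business data. The file names and the customer_id, page_views and revenue columns are hypothetical placeholders; in practice the Spotfire-Hadoop connector or an HDFS export would supply the Hadoop-side data.

    # Summarized weblog data exported from a Hadoop job (hypothetical file)
    weblog <- read.csv("weblog_summary.csv", stringsAsFactors = FALSE)
    # Structured order data from a business application (hypothetical file)
    orders <- read.csv("erp_orders.csv", stringsAsFactors = FALSE)

    # Join the two sources on a shared key and relate web activity to revenue
    combined <- merge(weblog, orders, by = "customer_id")
    summary(lm(revenue ~ page_views, data = combined))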

Analyze the largest data sets efficiently
Memory management is a major limitation of open source R. Many R functions quickly consume available memory when applied to bigger data sets, slowing performance out of proportion to data size. TERR’s more efficient memory management keeps processing time growing in proportion to data set size instead of degrading sharply.
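
If you want to see this effect on your own workloads, one rough way is to watch memory use and run time as the data grows. The sketch below uses only base R; the aggregation is an illustrative stand-in for a real analysis step.

    # Measure object size and run time at increasing data sizes
    for (n in c(1e5, 1e6, 1e7)) {
      df <- data.frame(group = sample(letters, n, replace = TRUE),
                       value = rnorm(n))
      cat(sprintf("rows: %.0e, object size: %s\n",
                  n, format(object.size(df), units = "MB")))
      print(system.time(aggregate(value ~ group, data = df, FUN = mean)))
      rm(df); gc()   # release memory before the next, larger round
    }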

Get faster results
TERR is faster, more scalable, and more robust than open source R. This superior performance enables you to process Hadoop data more quickly and more reliably. As performance gains are multiplied across the nodes of a Hadoop cluster, this approach produces analytic answers much more quickly and with fewer resources than open source R.
The results are impressive: for many common operations, TERR is 2 to 10 times faster than open source R on small to moderate-sized data sets, and 10 to 100 times faster on large data sets.
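
Because TERR runs R-language code, one straightforward way to check these claims against your own workloads is to run an identical script under both engines and compare elapsed times. The data sizes and the least-squares workload below are arbitrary placeholders; substitute an operation from your own analysis.

    # Run this same script under open source R and under TERR, then compare
    # the elapsed times reported by system.time()
    set.seed(42)
    n <- 5e5; p <- 20
    x <- matrix(rnorm(n * p), ncol = p)
    y <- drop(x %*% rnorm(p) + rnorm(n))

    print(system.time(fit <- lm.fit(x, y)))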

Expand analytics capabilities
TERR was built to be fully compatible with hundreds of R-language packages. Its broad coverage of core R functionality and CRAN packages gives analysts access to cutting-edge analytics in a production environment.
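
As a simple illustration of that reuse, the sketch below fits a model with the widely used randomForest CRAN package on a built-in data set. The specific package is only an example; whether a given package falls under TERR’s compatibility coverage should be verified against TIBCO’s documentation.

    # One-time install from CRAN, then load the package
    install.packages("randomForest")
    library(randomForest)

    # Fit a random forest classifier on the built-in iris data set
    model <- randomForest(Species ~ ., data = iris, ntree = 200)
    print(model)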

TERR-Hadoop Integration Benefits

With TERR-Hadoop deployments, you can gain valuable insights via high-volume, high-speed data analysis. Here are some of the business and technical benefits you’ll discover:

  • Extract value from previously inaccessible data. Big Data analytics enables you to process and analyze unstructured and transactional data, which used to be out of reach.
  • Get faster time to analysis. Make more timely decisions and react more quickly to changes in your business environment.
  • Gain sophisticated insights into your business. Smoking-hot TERR performance and more robust memory management enable you to analyze Big Data for a more detailed and complete picture of your organization.
  • Improve your analytics capabilities. Take advantage of the wide range of up-to-date analytics packages written in R.

Interested in learning more about the power of Hadoop and TERR?

Contact one of our passionate data scientists to get answers to any of your questions.