Yes, advanced Big Data analytics tools are powerful. But connecting and making them work together can challenge your patience and technical ability.

Not long ago, Syntelli data engineers helped a client set up TIBCO™ Spotfire® Server to authenticate database connections with the MIT Kerberos protocol. They used Kerberos to secure an HBase cluster so that users could visualize data securely. The problem: establishing a connection with the secured cluster was uncharted territory. No one had yet developed a reliable way to do it.

Why HBase and Apache Phoenix Work Well Together

Consultants wanted to connect the server running Spotfire Server software to an HBase cluster that also ran Apache Phoenix®. Why? Because HBase and Phoenix make a very effective combination.

Fast and flexible. HBase is built for flexible, high-volume, high-speed data processing. It lets you store data in HDFS either directly or through HBase. Part of the Hadoop ecosystem, HBase:

  • Works with Hortonworks Hadoop clusters.
  • Is a NoSQL data store.
  • Is a distributed column-oriented database built on top of HDFS.
  • Uses horizontal scalability to provide fast lookups in even the largest tables.

  • Provides quick random access to huge amounts of structured data.
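To make the column-oriented model behind those points concrete, here is a minimal sketch in plain Java. It is not the HBase client API, and the table, row keys, and column names are made up for illustration; it only models the idea that HBase keeps rows sorted by key, with cells addressed by family:qualifier, which is what makes point lookups and row-key range scans fast:

```java
import java.util.Map;
import java.util.NavigableMap;
import java.util.TreeMap;

// Toy model of HBase's logical layout: rows sorted by key, each row
// holding cells keyed by "family:qualifier". NOT the HBase client API.
public class HBaseModelSketch {
    // TreeMap keeps row keys sorted, mirroring how HBase stores rows.
    static final NavigableMap<String, Map<String, String>> table = new TreeMap<>();

    static void put(String rowKey, String column, String value) {
        table.computeIfAbsent(rowKey, k -> new TreeMap<>()).put(column, value);
    }

    // Point lookup by row key, analogous to an HBase Get.
    static String get(String rowKey, String column) {
        Map<String, String> row = table.get(rowKey);
        return row == null ? null : row.get(column);
    }

    // Range scan between a start row (inclusive) and stop row (exclusive),
    // analogous to an HBase Scan with start/stop rows.
    static NavigableMap<String, Map<String, String>> scan(String startRow, String stopRow) {
        return table.subMap(startRow, true, stopRow, false);
    }

    public static void main(String[] args) {
        put("user#001", "info:name", "alice");
        put("user#002", "info:name", "bob");
        put("order#100", "detail:total", "42.50");

        System.out.println(get("user#001", "info:name"));   // alice
        System.out.println(scan("user#", "user#~").size()); // 2 rows share the user# prefix
    }
}
```

Because rows are sorted, the scan never touches the `order#` row at all; choosing row keys so that related data sorts together is what turns "fast lookups in even the largest tables" into practice.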

Easy queries and management. Apache Phoenix is the open-source SQL query engine for Apache HBase, and it is designed to be compatible with Hortonworks software. Phoenix:

  • Is accessed as a JDBC driver.
  • Queries and manages HBase tables by using SQL.
  • Takes the SQL query, compiles it into a series of HBase scans, and orchestrates those scans to produce regular JDBC result sets.
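As a sketch of what "accessed as a JDBC driver" looks like in practice, the snippet below builds a Phoenix JDBC URL and shows the shape of a query. The host names, znode, principal, and keytab path are placeholders, and the Kerberos-secured URL form is an assumption based on Phoenix's documented thick-driver syntax; the actual connection lines are commented out because they require a live cluster and the Phoenix client jar:

```java
import java.util.List;

// Sketch of JDBC access to a Phoenix-enabled HBase cluster.
// All hosts, znodes, principals, and keytab paths are placeholders.
public class PhoenixUrlSketch {
    // Phoenix JDBC URL: jdbc:phoenix:<zk quorum>:<port>:<znode>
    // For a Kerberos-secured cluster, a principal and keytab path
    // can be appended to the URL.
    static String buildUrl(List<String> zkQuorum, int port, String znode,
                           String principal, String keytab) {
        String url = "jdbc:phoenix:" + String.join(",", zkQuorum)
                + ":" + port + ":" + znode;
        if (principal != null && keytab != null) {
            url += ":" + principal + ":" + keytab;
        }
        return url;
    }

    public static void main(String[] args) {
        String url = buildUrl(List.of("zk1.example.com", "zk2.example.com"),
                2181, "/hbase-secure",
                "spotfire@EXAMPLE.COM", "/etc/security/keytabs/spotfire.keytab");
        System.out.println(url);
        // With the Phoenix client jar on the classpath, the connection
        // itself would look like this (needs a live, kinit-ready cluster):
        //   Connection conn = DriverManager.getConnection(url);
        //   ResultSet rs = conn.createStatement()
        //       .executeQuery("SELECT host, usage FROM metrics LIMIT 10");
    }
}
```

Verify the exact znode (for example `/hbase` vs. `/hbase-secure`) and the service principal against your own cluster's configuration before using a URL like this from Spotfire Server.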

Access the full-length white paper on connecting TIBCO Spotfire to a Kerberos-secured cluster:



Hrishikesh Mukund
Analytics Associate
About Hrishikesh: Hrishikesh provides solutions to big-data problems involving Hadoop ecosystem components such as MapReduce, YARN, HDFS, HBase, Oozie, Sqoop, Hive, Pig, and Flume, and provides architectural guidance for configuring the Hadoop ecosystem from scratch. Currently, he is focused on integrating BI tools like Spotfire into the Hadoop ecosystem. He received his M.S. in Computer Science from the University of North Carolina at Charlotte, where he was a Teaching Assistant for Algorithms and Data Structures and for Mobile Application Development. His goals and interests include integrating Big Data with mobile devices, visual analytics, and learning different Hadoop architectures.

Connect with Hrishikesh on LinkedIn: [social_list linkedin_url="https://www.linkedin.com/in/hrishimukund"]