Azure - HDInsight HBase Data Insertion Failed


Problem:

We are currently migrating our IoT platform to a PaaS service and are using HDInsight HBase for all IoT data insertion. From my Java application I am able to create and delete tables in HBase, but I am not able to insert into or select from any HDInsight HBase table. Please suggest what might be missing at the code level.

HBase Insert Java Code:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

...

Configuration config = HBaseConfiguration.create();

// Example of setting ZooKeeper values for HDInsight in code
// instead of an hbase-site.xml file.
config.set("hbase.zookeeper.quorum", "zk1:2181,zk2:2181,zk3:2181");
config.set("hbase.zookeeper.property.clientPort", "2181");
config.set("hbase.cluster.distributed", "true");

// NOTE: The actual ZooKeeper host names can be found using Ambari:
// curl -u admin:PASSWORD -G "https://CLUSTERNAME.azurehdinsight.net/api/v1/clusters/CLUSTERNAME/hosts"

// Linux-based HDInsight clusters use /hbase-unsecure as the znode parent.
config.set("zookeeper.znode.parent", "/hbase-unsecure");
System.out.println("1 - " + config);

// Define some people to insert.
String[][] people = {
    { "1", "Marcel", "Haddad", "[email protected]" },
    { "2", "Franklin", "Holtz", "[email protected]" },
    { "3", "Dwayne", "McKee", "[email protected]" },
    { "4", "Rae", "Schroeder", "[email protected]" },
    { "5", "Rosalie", "burton", "[email protected]" },
    { "6", "Gabriela", "Ingram", "[email protected]" } };

HTable table = new HTable(config, "people");
System.out.println("2 - " + table);

// Add each person to the table:
//   the `name` column family holds the name,
//   the `contactinfo` column family holds the email.
for (int i = 0; i < people.length; i++) {
    Put person = new Put(Bytes.toBytes(people[i][0]));
    person.add(Bytes.toBytes("name"), Bytes.toBytes("first"), Bytes.toBytes(people[i][1]));
    person.add(Bytes.toBytes("name"), Bytes.toBytes("last"), Bytes.toBytes(people[i][2]));
    person.add(Bytes.toBytes("contactinfo"), Bytes.toBytes("email"), Bytes.toBytes(people[i][3]));
    System.out.println("3 - " + person);
    table.put(person);
    System.out.println("4 - " + table);
}

// Flush commits and close the table.
System.out.println("5 - " + table);
table.flushCommits();
table.close();
System.out.println("6 - " + table);

Error:

2083 [main] INFO  o.a.h.h.z.RecoverableZooKeeper - Process identifier=hconnection-0x524d6d96 connecting to ZooKeeper ensemble=zk0-bdtrin.un52sso10ikejkjuhwkxemgbfa.rx.internal.cloudapp.net:2181,zk4-bdtrin.un52sso10ikejkjuhwkxemgbfa.rx.internal.cloudapp.net:2181,zk1-bdtrin.un52sso10ikejkjuhwkxemgbfa.rx.internal.cloudapp.net:2181
7616 [main] WARN  o.a.h.c.Configuration - hbase-site.xml:an attempt to override final parameter: dfs.support.append;  Ignoring.
7616 [main] WARN  o.a.h.h.u.DynamicClassLoader - Failed to identify the fs of dir /hbase/lib, ignored
java.io.IOException: No FileSystem for scheme: wasb
      at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2584) ~[hadoop-common-2.6.1.jar:?]
      at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2591) ~[hadoop-common-2.6.1.jar:?]
      at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:91) ~[hadoop-common-2.6.1.jar:?]
      at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2630) ~[hadoop-common-2.6.1.jar:?]
      at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2612) ~[hadoop-common-2.6.1.jar:?]
      at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370) ~[hadoop-common-2.6.1.jar:?]
      at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:169) ~[hadoop-common-2.6.1.jar:?]
      at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:354) ~[hadoop-common-2.6.1.jar:?]
      at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296) ~[hadoop-common-2.6.1.jar:?]
      at org.apache.hadoop.hbase.util.DynamicClassLoader.<init>(DynamicClassLoader.java:104) [hbase-common-1.1.0.jar:1.1.0]
      at org.apache.hadoop.hbase.protobuf.ProtobufUtil.<clinit>(ProtobufUtil.java:238) [hbase-client-1.1.0.jar:1.1.0]
      at org.apache.hadoop.hbase.ClusterId.parseFrom(ClusterId.java:64) [hbase-client-1.1.0.jar:1.1.0]
      at org.apache.hadoop.hbase.zookeeper.ZKClusterId.readClusterIdZNode(ZKClusterId.java:75) [hbase-client-1.1.0.jar:1.1.0]
      at org.apache.hadoop.hbase.client.ZooKeeperRegistry.getClusterId(ZooKeeperRegistry.java:105) [hbase-client-1.1.0.jar:1.1.0]
      at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.retrieveClusterId(ConnectionManager.java:879) [hbase-client-1.1.0.jar:1.1.0]
      at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.<init>(ConnectionManager.java:635) [hbase-client-1.1.0.jar:1.1.0]
      at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[?:1.8.0_151]
      at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) [?:1.8.0_151]
      at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) [?:1.8.0_151]
      at java.lang.reflect.Constructor.newInstance(Constructor.java:423) [?:1.8.0_151]
      at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:238) [hbase-client-1.1.0.jar:1.1.0]
      at org.apache.hadoop.hbase.client.ConnectionManager.createConnection(ConnectionManager.java:420) [hbase-client-1.1.0.jar:1.1.0]
      at org.apache.hadoop.hbase.client.ConnectionManager.createConnectionInternal(ConnectionManager.java:329) [hbase-client-1.1.0.jar:1.1.0]
      at org.apache.hadoop.hbase.client.HConnectionManager.createConnection(HConnectionManager.java:144) [hbase-client-1.1.0.jar:1.1.0]
      at com.trinity.iot.storm.topology.HbaseTest.main(HbaseTest.java:34) [classes/:?]
7697 [main] WARN  o.a.h.c.Configuration - hbase-site.xml:an attempt to override final parameter: dfs.support.append;  Ignoring.
7750 [main] INFO  o.a.h.h.z.RecoverableZooKeeper - Process identifier=hconnection-0x2d0399f4 connecting to ZooKeeper ensemble=zk1:2181,zk2:2181,zk3:2181

hbase-site.xml

 <configuration>

    <property>
      <name>dfs.domain.socket.path</name>
      <value>/var/lib/hadoop-hdfs/dn_socket</value>
    </property>

    <property>
      <name>dfs.support.append</name>
      <value>false</value>
    </property>

    <property>
      <name>hbase.bucketcache.combinedcache.enabled</name>
      <value>true</value>
    </property>

    <property>
      <name>hbase.bucketcache.ioengine</name>
      <value>file:/mnt/hbase/cache.data</value>
    </property>

    <property>
      <name>hbase.bucketcache.percentage.in.combinedcache</name>
      <value></value>
    </property>

    <property>
      <name>hbase.bucketcache.size</name>
      <value>81920</value>
    </property>

    <property>
      <name>hbase.bulkload.staging.dir</name>
      <value>/apps/hbase/staging</value>
    </property>

    <property>
      <name>hbase.client.keyvalue.maxsize</name>
      <value>1048576</value>
    </property>

    <property>
      <name>hbase.client.retries.number</name>
      <value>35</value>
    </property>

    <property>
      <name>hbase.client.scanner.caching</name>
      <value>100</value>
    </property>

    <property>
      <name>hbase.cluster.distributed</name>
      <value>true</value>
    </property>

    <property>
      <name>hbase.coprocessor.master.classes</name>
      <value></value>
    </property>

    <property>
      <name>hbase.coprocessor.region.classes</name>
      <value>org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint</value>
    </property>

    <property>
      <name>hbase.custom-extensions.root</name>
      <value>/hdp/ext/2.6/hbase</value>
    </property>

    <property>
      <name>hbase.defaults.for.version.skip</name>
      <value>true</value>
    </property>

    <property>
      <name>hbase.fs.shutdown.hook.wait</name>
      <value>600000</value>
    </property>

    <property>
      <name>hbase.hregion.majorcompaction</name>
      <value>0</value>
    </property>

    <property>
      <name>hbase.hregion.majorcompaction.jitter</name>
      <value>0.50</value>
    </property>

    <property>
      <name>hbase.hregion.max.filesize</name>
      <value>10737418240</value>
    </property>

    <property>
      <name>hbase.hregion.memstore.block.multiplier</name>
      <value>4</value>
    </property>

    <property>
      <name>hbase.hregion.memstore.flush.size</name>
      <value>134217728</value>
    </property>

    <property>
      <name>hbase.hregion.memstore.mslab.enabled</name>
      <value>true</value>
    </property>

    <property>
      <name>hbase.hstore.blockingStoreFiles</name>
      <value>100</value>
    </property>

    <property>
      <name>hbase.hstore.compaction.max</name>
      <value>10</value>
    </property>

    <property>
      <name>hbase.hstore.compaction.max.size</name>
      <value>32212254720</value>
    </property>

    <property>
      <name>hbase.hstore.compactionThreshold</name>
      <value>3</value>
    </property>

    <property>
      <name>hbase.local.dir</name>
      <value>${hbase.tmp.dir}/local</value>
    </property>

    <property>
      <name>hbase.master.distributed.log.splitting</name>
      <value>true</value>
    </property>

    <property>
      <name>hbase.master.info.bindAddress</name>
      <value>0.0.0.0</value>
    </property>

    <property>
      <name>hbase.master.info.port</name>
      <value>16010</value>
    </property>

    <property>
      <name>hbase.master.namespace.init.timeout</name>
      <value>2400000</value>
    </property>

    <property>
      <name>hbase.master.port</name>
      <value>16000</value>
    </property>

    <property>
      <name>hbase.master.ui.readonly</name>
      <value>false</value>
    </property>

    <property>
      <name>hbase.master.wait.on.regionservers.timeout</name>
      <value>30000</value>
    </property>

    <property>
      <name>hbase.region.server.rpc.scheduler.factory.class</name>
      <value>org.apache.hadoop.hbase.ipc.PhoenixRpcSchedulerFactory</value>
    </property>

    <property>
      <name>hbase.regionserver.executor.openregion.threads</name>
      <value>20</value>
    </property>

    <property>
      <name>hbase.regionserver.global.memstore.size</name>
      <value>0.4</value>
    </property>

    <property>
      <name>hbase.regionserver.handler.count</name>
      <value>100</value>
    </property>

    <property>
      <name>hbase.regionserver.hlog.blocksize</name>
      <value>134217728</value>
    </property>

    <property>
      <name>hbase.regionserver.info.port</name>
      <value>16030</value>
    </property>

    <property>
      <name>hbase.regionserver.optionalcacheflushinterval</name>
      <value>7200000</value>
    </property>

    <property>
      <name>hbase.regionserver.port</name>
      <value>16020</value>
    </property>

    <property>
      <name>hbase.regionserver.wal.codec</name>
      <value>org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec</value>
    </property>

    <property>
      <name>hbase.rest.port</name>
      <value>8090</value>
    </property>

    <property>
      <name>hbase.rootdir</name>
      <value>/hbase</value>
    </property>

    <property>
      <name>hbase.rpc.protection</name>
      <value>authentication</value>
    </property>

    <property>
      <name>hbase.rpc.timeout</name>
      <value>90000</value>
    </property>

    <property>
      <name>hbase.rs.cacheblocksonwrite</name>
      <value>true</value>
    </property>

    <property>
      <name>hbase.security.authentication</name>
      <value>simple</value>
    </property>

    <property>
      <name>hbase.security.authorization</name>
      <value>false</value>
    </property>

    <property>
      <name>hbase.shutdown.hook</name>
      <value>true</value>
    </property>

    <property>
      <name>hbase.superuser</name>
      <value>hbase</value>
    </property>

    <property>
      <name>hbase.tmp.dir</name>
      <value>/tmp/hbase-${user.name}</value>
    </property>

    <property>
      <name>hbase.zookeeper.property.clientPort</name>
      <value>2181</value>
    </property>

    <property>
      <name>hbase.zookeeper.quorum</name>
      <value>zk01,zk2,zk3</value>
    </property>

    <property>
      <name>hbase.zookeeper.useMulti</name>
      <value>true</value>
    </property>

    <property>
      <name>hfile.block.cache.size</name>
      <value>0.40</value>
    </property>

    <property>
      <name>hfile.index.block.max.size</name>
      <value>131072</value>
    </property>

    <property>
      <name>io.storefile.bloom.block.size</name>
      <value>131072</value>
    </property>

    <property>
      <name>phoenix.functions.allowUserDefinedFunctions</name>
      <value>true</value>
    </property>

    <property>
      <name>phoenix.query.timeoutMs</name>
      <value>60000</value>
    </property>

    <property>
      <name>zookeeper.recovery.retry</name>
      <value>6</value>
    </property>

    <property>
      <name>zookeeper.session.timeout</name>
      <value>120000</value>
    </property>

    <property>
      <name>zookeeper.znode.parent</name>
      <value>/hbase-unsecure</value>
    </property>

  </configuration>

pom.xml:

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>HDInsight-HbaseTest</groupId>
  <artifactId>HDInsight-HbaseTest</artifactId>
  <version>1</version>
  <name>HDInsight-HbaseTest</name>
  <description>HDInsight-HbaseTest</description>

  <dependencies>
    <dependency>
      <groupId>org.apache.hbase</groupId>
      <artifactId>hbase-client</artifactId>
      <version>1.1.2</version>
    </dependency>
    <dependency>
      <groupId>org.apache.phoenix</groupId>
      <artifactId>phoenix-core</artifactId>
      <version>4.4.0-HBase-1.1</version>
    </dependency>
    <!-- https://mvnrepository.com/artifact/jdk.tools/jdk.tools -->
    <dependency>
      <groupId>jdk.tools</groupId>
      <artifactId>jdk.tools</artifactId>
      <version>1.8.0_151</version>
      <scope>system</scope>
      <systemPath>${JAVA_HOME}/lib/tools.jar</systemPath>
    </dependency>
  </dependencies>

  <build>
    <!-- <sourceDirectory>src</sourceDirectory> -->
    <resources>
      <resource>
        <directory>src/main/resources</directory>
        <filtering>false</filtering>
        <includes>
          <include>hbase-site.xml</include>
        </includes>
      </resource>
    </resources>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-compiler-plugin</artifactId>
        <version>3.3</version>
        <configuration>
          <source>1.8</source>
          <target>1.8</target>
        </configuration>
      </plugin>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-shade-plugin</artifactId>
        <version>2.3</version>
        <configuration>
          <transformers>
            <transformer implementation="org.apache.maven.plugins.shade.resource.ApacheLicenseResourceTransformer">
            </transformer>
          </transformers>
        </configuration>
        <executions>
          <execution>
            <phase>package</phase>
            <goals>
              <goal>shade</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</project>
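
The stack trace also shows `java.io.IOException: No FileSystem for scheme: wasb`, raised while the client tries to resolve the `/hbase/lib` directory on the cluster's default filesystem. Assuming the cluster stores its data on Azure Blob Storage (the usual HDInsight default), one possible cause is that the client classpath lacks the `wasb://` filesystem implementation. A hedged sketch of the Maven dependencies that provide it — the version numbers below are assumptions and should be aligned with the cluster's Hadoop version:

```xml
<!-- Hypothetical additions: provide the wasb:// FileSystem implementation
     (org.apache.hadoop.fs.azure.NativeAzureFileSystem) on the client classpath.
     Versions are assumptions; match them to the cluster's Hadoop stack. -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-azure</artifactId>
  <version>2.7.3</version>
</dependency>
<dependency>
  <groupId>com.microsoft.azure</groupId>
  <artifactId>azure-storage</artifactId>
  <version>4.2.0</version>
</dependency>
```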

Solution:

Can you try giving the fully qualified domain names (FQDNs) for the ZooKeeper quorum? The short names `zk1`, `zk2`, `zk3` in your code do not match the hosts your log actually resolved; using the FQDNs from the log, for example:

config.set("hbase.zookeeper.quorum",
    "zk0-bdtrin.un52sso10ikejkjuhwkxemgbfa.rx.internal.cloudapp.net:2181,"
  + "zk4-bdtrin.un52sso10ikejkjuhwkxemgbfa.rx.internal.cloudapp.net:2181,"
  + "zk1-bdtrin.un52sso10ikejkjuhwkxemgbfa.rx.internal.cloudapp.net:2181");
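
If the client reads its quorum from a bundled hbase-site.xml instead of code, the same fully qualified host names would go there. A sketch of the client-side property, assuming the hosts copied from the connection log above are still current (verify them via the Ambari hosts endpoint):

```xml
<!-- Client-side override: ZooKeeper quorum with fully qualified host names.
     Host names are taken from the connection log; confirm them in Ambari. -->
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>zk0-bdtrin.un52sso10ikejkjuhwkxemgbfa.rx.internal.cloudapp.net,zk4-bdtrin.un52sso10ikejkjuhwkxemgbfa.rx.internal.cloudapp.net,zk1-bdtrin.un52sso10ikejkjuhwkxemgbfa.rx.internal.cloudapp.net</value>
</property>
```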
