HBase in Action (4): Using Java to Operate a Distributed HBase Cluster
The HBase test program is developed in IDEA on Windows 10, while Hadoop, HBase, and the rest of the cluster are deployed in VMware virtual machines running Linux. The local IDEA program on Windows connects to the HBase cluster inside the virtual machines and operates on it.
1. Edit the HOSTS file under C:\Windows\System32\drivers\etc so the cluster hostnames resolve from Windows:
192.168.189.1 master
192.168.189.2 worker1
192.168.189.3 worker2
192.168.189.4 worker3
Verify that the names resolve by pinging each host:

Microsoft Windows [Version 10.0.16299.371]
(c) 2017 Microsoft Corporation. All rights reserved.

C:\Users\lenovo>ping master

Pinging master [192.168.189.1] with 32 bytes of data:
Reply from 192.168.189.1: bytes=32 time<1ms TTL=64
Reply from 192.168.189.1: bytes=32 time<1ms TTL=64

Ping statistics for 192.168.189.1:
    Packets: Sent = 2, Received = 2, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 0ms, Maximum = 0ms, Average = 0ms
Control-C
^C
C:\Users\lenovo>ping worker1

Pinging worker1 [192.168.189.2] with 32 bytes of data:
Reply from 192.168.189.2: bytes=32 time<1ms TTL=64
Reply from 192.168.189.2: bytes=32 time<1ms TTL=64
Reply from 192.168.189.2: bytes=32 time<1ms TTL=64
Reply from 192.168.189.2: bytes=32 time<1ms TTL=64

Ping statistics for 192.168.189.2:
    Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 0ms, Maximum = 0ms, Average = 0ms

C:\Users\lenovo>ping worker3

Pinging worker3 [192.168.189.4] with 32 bytes of data:
Reply from 192.168.189.4: bytes=32 time<1ms TTL=64
Reply from 192.168.189.4: bytes=32 time<1ms TTL=64
Reply from 192.168.189.4: bytes=32 time<1ms TTL=64
Reply from 192.168.189.4: bytes=32 time<1ms TTL=64

Ping statistics for 192.168.189.4:
    Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 0ms, Maximum = 0ms, Average = 0ms

C:\Users\lenovo>
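Resolution can also be checked from Java before wiring up the HBase client. The following is a minimal sketch (the ResolveCheck class is illustrative, not part of the project) that resolves the same four hostnames the way the client will:

import java.net.InetAddress;
import java.net.UnknownHostException;

public class ResolveCheck {
    public static void main(String[] args) {
        // Hostnames added to the Windows HOSTS file above
        String[] hosts = {"master", "worker1", "worker2", "worker3"};
        for (String host : hosts) {
            try {
                // Resolve each cluster hostname, as the HBase client must do
                InetAddress addr = InetAddress.getByName(host);
                System.out.println(host + " -> " + addr.getHostAddress());
            } catch (UnknownHostException e) {
                System.out.println(host + " could not be resolved - check the HOSTS file");
            }
        }
    }
}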
2. Create a new Maven project and write the pom.xml so that Maven downloads the HBase dependencies.
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>noc_hbase_test</groupId>
    <artifactId>noc_hbase_test</artifactId>
    <version>1.0-SNAPSHOT</version>

    <properties>
        <scala.version>2.11.8</scala.version>
        <spark.version>2.2.1</spark.version>
        <jedis.version>2.8.2</jedis.version>
        <fastjson.version>1.2.14</fastjson.version>
        <jetty.version>9.2.5.v20141112</jetty.version>
        <container.version>2.17</container.version>
        <java.version>1.8</java.version>
        <hbase.version>1.2.0</hbase.version>
    </properties>

    <repositories>
        <repository>
            <id>scala-tools.org</id>
            <name>Scala-Tools Maven2 Repository</name>
            <url>http://scala-tools.org/repo-releases</url>
        </repository>
    </repositories>

    <pluginRepositories>
        <pluginRepository>
            <id>scala-tools.org</id>
            <name>Scala-Tools Maven2 Repository</name>
            <url>http://scala-tools.org/repo-releases</url>
        </pluginRepository>
    </pluginRepositories>

    <dependencies>
        <!-- https://mvnrepository.com/artifact/org.apache.hbase/hbase -->
        <!-- HBase dependencies; slf4j-log4j12 is excluded to avoid conflicting log bindings -->
        <dependency>
            <groupId>org.apache.hbase</groupId>
            <artifactId>hbase-client</artifactId>
            <version>${hbase.version}</version>
            <exclusions>
                <exclusion>
                    <groupId>org.slf4j</groupId>
                    <artifactId>slf4j-log4j12</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
        <dependency>
            <groupId>org.apache.hbase</groupId>
            <artifactId>hbase-common</artifactId>
            <version>${hbase.version}</version>
            <exclusions>
                <exclusion>
                    <groupId>org.slf4j</groupId>
                    <artifactId>slf4j-log4j12</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
        <dependency>
            <groupId>org.apache.hbase</groupId>
            <artifactId>hbase-server</artifactId>
            <version>${hbase.version}</version>
            <exclusions>
                <exclusion>
                    <groupId>org.slf4j</groupId>
                    <artifactId>slf4j-log4j12</artifactId>
                </exclusion>
            </exclusions>
        </dependency>

        <!-- Hadoop dependencies, matching the 2.6.0 cluster -->
        <!-- https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-common -->
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-common</artifactId>
            <version>2.6.0</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-client</artifactId>
            <version>2.6.0</version>
        </dependency>
        <!-- https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-hdfs -->
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-hdfs</artifactId>
            <version>2.6.0</version>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <artifactId>maven-assembly-plugin</artifactId>
                <configuration>
                    <appendAssemblyId>true</appendAssemblyId>
                    <descriptorRefs>
                        <descriptorRef>jar-with-dependencies</descriptorRef>
                    </descriptorRefs>
                </configuration>
                <executions>
                    <execution>
                        <id>make-assembly</id>
                        <phase>package</phase>
                        <goals>
                            <goal>single</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>

            <plugin>
                <artifactId>maven-compiler-plugin</artifactId>
                <configuration>
                    <source>${java.version}</source>
                    <target>${java.version}</target>
                </configuration>
            </plugin>

            <plugin>
                <groupId>net.alchim31.maven</groupId>
                <artifactId>scala-maven-plugin</artifactId>
                <version>3.2.2</version>
                <executions>
                    <execution>
                        <id>scala-compile-first</id>
                        <phase>process-resources</phase>
                        <goals>
                            <goal>compile</goal>
                        </goals>
                    </execution>
                </executions>
                <configuration>
                    <scalaVersion>${scala.version}</scalaVersion>
                    <recompileMode>incremental</recompileMode>
                    <useZincServer>true</useZincServer>
                    <args>
                        <arg>-unchecked</arg>
                        <arg>-deprecation</arg>
                        <arg>-feature</arg>
                    </args>
                    <jvmArgs>
                        <jvmArg>-Xms1024m</jvmArg>
                        <jvmArg>-Xmx1024m</jvmArg>
                    </jvmArgs>
                    <javacArgs>
                        <javacArg>-source</javacArg>
                        <javacArg>${java.version}</javacArg>
                        <javacArg>-target</javacArg>
                        <javacArg>${java.version}</javacArg>
                        <javacArg>-Xlint:all,-serial,-path</javacArg>
                    </javacArgs>
                </configuration>
            </plugin>

            <plugin>
                <groupId>org.antlr</groupId>
                <artifactId>antlr4-maven-plugin</artifactId>
                <version>4.3</version>
                <executions>
                    <execution>
                        <id>antlr</id>
                        <goals>
                            <goal>antlr4</goal>
                        </goals>
                        <phase>none</phase>
                    </execution>
                </executions>
                <configuration>
                    <outputDirectory>src/test/java</outputDirectory>
                    <listener>true</listener>
                    <treatWarningsAsErrors>true</treatWarningsAsErrors>
                </configuration>
            </plugin>
        </plugins>
    </build>

</project>
3. Place the Hadoop and HBase configuration files from the Linux cluster, hbase-site.xml and hdfs-site.xml, into the IDEA project so the client picks up the cluster settings.
hbase-site.xml
<configuration>

    <property>
        <name>hbase.rootdir</name>
        <value>hdfs://master:9000/hbase</value>
    </property>

    <property>
        <name>hbase.cluster.distributed</name>
        <value>true</value>
    </property>

    <property>
        <name>hbase.zookeeper.quorum</name>
        <value>192.168.189.1:2181,192.168.189.2:2181,192.168.189.3:2181</value>
    </property>

    <property>
        <name>hbase.master.info.port</name>
        <value>60010</value>
    </property>

</configuration>
hdfs-site.xml
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/usr/local/hadoop-2.6.0/tmp/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/usr/local/hadoop-2.6.0/tmp/dfs/data</value>
    </property>
</configuration>
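For reference, HBaseConfiguration.create() automatically reads hbase-site.xml when it is on the classpath; the files can also be loaded explicitly with addResource(). A minimal sketch, assuming the files sit under src/main/resources (the path and the ConfigLoad class are assumptions for illustration):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class ConfigLoad {
    public static Configuration load() {
        // create() already picks up hbase-site.xml from the classpath;
        // addResource() makes the loading explicit (paths are examples)
        Configuration conf = HBaseConfiguration.create();
        conf.addResource(new Path("src/main/resources/hbase-site.xml"));
        conf.addResource(new Path("src/main/resources/hdfs-site.xml"));
        System.out.println("hbase.rootdir = " + conf.get("hbase.rootdir"));
        return conf;
    }
}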
4. Write the Java test code. HbaseMyTest connects to the cluster and lists its tables:
package HbaseTest;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;

import java.io.IOException;

public class HbaseMyTest {
    // Shared client state, initialized by HbaseUtils.init() and released by HbaseUtils.close()
    public static Configuration configuration;
    public static Connection connection;
    public static Admin admin;

    public static void main(String[] args) throws IOException {
        listTables();
    }

    // Lists all tables in the cluster through the Admin API
    public static void listTables() throws IOException {
        HbaseUtils.init();
        HTableDescriptor[] hTableDescriptors = admin.listTables();
        for (HTableDescriptor hTableDescriptor : hTableDescriptors) {
            System.out.println("HBase table name queried from local IDEA: " + hTableDescriptor.getNameAsString());
        }
        HbaseUtils.close();
    }
}
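If only the names are needed, the Admin API's listTableNames() avoids fetching the full descriptors; the loop inside listTables() could equally be written as:

// Lighter alternative: fetch only the table names, not the descriptors
for (org.apache.hadoop.hbase.TableName name : admin.listTableNames()) {
    System.out.println(name.getNameAsString());
}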
The HbaseUtils helper class initializes and closes the connection to the cluster:
package HbaseTest;

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.ConnectionFactory;

import java.io.IOException;

public class HbaseUtils {
    public static void init() {
        // Point the client at the ZooKeeper quorum and HMaster of the virtual-machine cluster
        HbaseMyTest.configuration = HBaseConfiguration.create();
        HbaseMyTest.configuration.set("hbase.zookeeper.property.clientPort", "2181");
        HbaseMyTest.configuration.set("hbase.zookeeper.quorum", "192.168.189.1,192.168.189.2,192.168.189.3");
        HbaseMyTest.configuration.set("hbase.master", "192.168.189.1:60000");

        try {
            HbaseMyTest.connection = ConnectionFactory.createConnection(HbaseMyTest.configuration);
            HbaseMyTest.admin = HbaseMyTest.connection.getAdmin();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    public static void close() {
        // Release the Admin handle first, then the underlying connection
        try {
            if (null != HbaseMyTest.admin)
                HbaseMyTest.admin.close();
            if (null != HbaseMyTest.connection)
                HbaseMyTest.connection.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
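With init() and close() in place, the same pattern extends naturally to creating a table and reading and writing cells. The following is a minimal sketch against the HBase 1.2 client API; the class HbaseCrudSketch, table name test_table, and column family cf are made-up examples, not part of the original project:

package HbaseTest;

import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

import java.io.IOException;

public class HbaseCrudSketch {
    public static void main(String[] args) throws IOException {
        HbaseUtils.init();
        TableName tableName = TableName.valueOf("test_table"); // hypothetical table

        // Create the table with one column family if it does not exist yet
        if (!HbaseMyTest.admin.tableExists(tableName)) {
            HTableDescriptor desc = new HTableDescriptor(tableName);
            desc.addFamily(new HColumnDescriptor("cf"));
            HbaseMyTest.admin.createTable(desc);
        }

        // Write one cell, then read it back
        try (Table table = HbaseMyTest.connection.getTable(tableName)) {
            Put put = new Put(Bytes.toBytes("row1"));
            put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("col1"), Bytes.toBytes("value1"));
            table.put(put);

            Result result = table.get(new Get(Bytes.toBytes("row1")));
            byte[] value = result.getValue(Bytes.toBytes("cf"), Bytes.toBytes("col1"));
            System.out.println("row1/cf:col1 = " + Bytes.toString(value));
        }
        HbaseUtils.close();
    }
}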
The output of the run is: