Reposted from the WeChat public account Java極客技術(shù); author: 鴨血粉絲.
This article introduces Consul, an open-source component for service registration, service discovery, and configuration management. Let's take a look at what it can do.
Background
Distributed architectures are now the norm: most projects are built on them, and the old single-machine model no longer fits how the internet industry operates. As distributed projects spread and the number of service instances grows, service registration and discovery becomes an indispensable part of the architecture. There are many open-source options for it, including the early ZooKeeper, Baidu's disconf, Alibaba's Diamond, the Go-based etcd, Eureka from the Spring ecosystem, the previously covered Nacos, and this article's subject, Consul. This article does not compare them; it only introduces Consul. Detailed comparisons are easy to find online, for example "Service Discovery Comparison: Consul vs Zookeeper vs Etcd vs Eureka". Service registration and discovery boils down to two main capabilities:
- Service registration and discovery
- A configuration center, i.e., unified configuration management for a distributed project
Consul Server Setup and Usage
1. Download the appropriate release, unpack it, and copy the executable into the /usr/local/consul directory
2. Create a service definition file
silence$ sudo mkdir /etc/consul.d
silence$ echo '{"service":{"name": "web", "tags": ["rails"], "port": 80}}' | sudo tee /etc/consul.d/web.json
3. Start the agent
silence$ /usr/local/consul/consul agent -dev -node consul_01 -config-dir=/etc/consul.d/ -ui
Flag notes: -dev runs the agent in a local development mode; -node sets a custom node name; -config-dir points at the directory of service definition files, i.e., the directory created above; -ui enables the built-in web UI management page.
4. Query the cluster members
silence-pro:~ silence$ /usr/local/consul/consul members
5. Query data over the HTTP API (a programmatic version is sketched after the output below)
silence-pro:~ silence$ curl http://127.0.0.1:8500/v1/catalog/service/web
[
{
"ID": "ab1e3577-1b24-d254-f55e-9e8437956009",
"Node": "consul_01",
"Address": "127.0.0.1",
"Datacenter": "dc1",
"TaggedAddresses": {
"lan": "127.0.0.1",
"wan": "127.0.0.1"
},
"NodeMeta": {
"consul-network-segment": ""
},
"ServiceID": "web",
"ServiceName": "web",
"ServiceTags": [
"rails"
],
"ServiceAddress": "",
"ServicePort": 80,
"ServiceEnableTagOverride": false,
"CreateIndex": 6,
"ModifyIndex": 6
}
]
silence-pro:~ silence$
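The same lookup can be done programmatically. Below is a minimal sketch using the com.orbitz.consul consul-client library introduced later in this article; the class name CatalogQueryDemo is just an illustration:

import com.google.common.net.HostAndPort;
import com.orbitz.consul.Consul;
import com.orbitz.consul.model.catalog.CatalogService;
import java.util.List;

public class CatalogQueryDemo {
    public static void main(String[] args) {
        // Connect to the local dev agent started above.
        Consul consul = Consul.builder().withHostAndPort(HostAndPort.fromString("127.0.0.1:8500")).build();
        // Programmatic equivalent of GET /v1/catalog/service/web.
        List<CatalogService> services = consul.catalogClient().getService("web").getResponse();
        services.forEach(s -> System.out.println(s.getNode() + " -> " + s.getServicePort()));
    }
}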
6. Web UI management (Consul Web UI)
Consul's web UI can be used to check service status, inspect cluster nodes, control access lists, and manage the KV store. Compared with Eureka and etcd, Consul's web UI is much more pleasant to use. (Eureka and etcd will be briefly introduced in the next article.)
7. KV data import and export
silence-pro:consul silence$ ./consul kv import @temp.json
silence-pro:consul silence$ ./consul kv export redis/
The format of temp.json is shown below. Typically you configure the data in the management UI, export it to a file for safekeeping, and import that file again when needed:
[
{
"key": "redis/config/password",
"flags": 0,
"value": "MTIzNDU2"
},
{
"key": "redis/config/username",
"flags": 0,
"value": "U2lsZW5jZQ=="
},
{
"key": "redis/zk/",
"flags": 0,
"value": ""
},
{
"key": "redis/zk/password",
"flags": 0,
"value": "NDU0NjU="
},
{
"key": "redis/zk/username",
"flags": 0,
"value": "ZGZhZHNm"
}
]
Consul's KV store is a tree of nodes, similar to ZooKeeper's, that holds key/value pairs. We can use it to build the configuration center mentioned above: keep the shared configuration in the KV store so that every instance fetches and uses the same settings. When the configuration changes, each service can pull the latest values automatically, with no restart required.
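One detail worth knowing: the value fields in a consul kv export dump, like temp.json above, are Base64-encoded. A small decoding sketch (KvValueDecodeDemo is a hypothetical name; the sample value is taken from temp.json):

import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class KvValueDecodeDemo {
    public static void main(String[] args) {
        // Values in a consul kv export dump are Base64-encoded; decode before use.
        String encoded = "U2lsZW5jZQ=="; // redis/config/username from temp.json
        String decoded = new String(Base64.getDecoder().decode(encoded), StandardCharsets.UTF_8);
        System.out.println(decoded); // prints "Silence"
    }
}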
Consul Java Client Usage
1. Add the Maven POM dependencies; the versions can be changed as needed
<dependency>
<groupId>com.orbitz.consul</groupId>
<artifactId>consul-client</artifactId>
<version>0.12.3</version>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
2. A basic Consul utility class; extend it as needed
package com.coocaa.consul.consul.demo;

import com.google.common.base.Optional;
import com.google.common.net.HostAndPort;
import com.orbitz.consul.*;
import com.orbitz.consul.model.agent.ImmutableRegCheck;
import com.orbitz.consul.model.agent.ImmutableRegistration;
import com.orbitz.consul.model.health.ServiceHealth;

import java.net.MalformedURLException;
import java.net.URI;
import java.util.List;

public class ConsulUtil {

    private static Consul consul = Consul.builder().withHostAndPort(HostAndPort.fromString("127.0.0.1:8500")).build();

    /**
     * Service registration
     */
    public static void serviceRegister() {
        AgentClient agent = consul.agentClient();
        try {
            /*
             * Note: this registration call needs a health-check URL for the
             * service, plus the interval at which the agent polls it (3s here).
             */
            agent.register(8080, URI.create("http://localhost:8080/health").toURL(), 3, "tomcat", "tomcatID", "dev");
        } catch (MalformedURLException e) {
            e.printStackTrace();
        }
    }

    /**
     * Service lookup: print every healthy instance of the given service
     *
     * @param serviceName name of the service to look up
     */
    public static void findHealthyService(String serviceName) {
        HealthClient healthClient = consul.healthClient();
        List<ServiceHealth> serviceHealthList = healthClient.getHealthyServiceInstances(serviceName).getResponse();
        serviceHealthList.forEach((response) -> {
            System.out.println(response);
        });
    }

    /**
     * Store a KV pair
     */
    public static void storeKV(String key, String value) {
        KeyValueClient kvClient = consul.keyValueClient();
        kvClient.putValue(key, value);
    }

    /**
     * Get the value for a key
     */
    public static String getKV(String key) {
        KeyValueClient kvClient = consul.keyValueClient();
        Optional<String> value = kvClient.getValueAsString(key);
        if (value.isPresent()) {
            return value.get();
        }
        return "";
    }

    /**
     * List the Raft peers (should be all Server-mode nodes in the same datacenter)
     */
    public static List<String> findRaftPeers() {
        StatusClient statusClient = consul.statusClient();
        return statusClient.getPeers();
    }

    /**
     * Get the Raft leader
     */
    public static String findRaftLeader() {
        StatusClient statusClient = consul.statusClient();
        return statusClient.getLeader();
    }

    public static void main(String[] args) {
        // Example: deregister the "tomcatID" service registered in serviceRegister()
        AgentClient agentClient = consul.agentClient();
        agentClient.deregister("tomcatID");
    }
}
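The serviceRegister() method above has the agent poll http://localhost:8080/health every 3 seconds, so something has to answer on that URL or the check will fail. Here is a minimal sketch of such an endpoint using the JDK's built-in com.sun.net.httpserver (an assumption chosen to keep the example dependency-free; in a real Spring Boot service, the actuator dependency from the POM above exposes a comparable health endpoint):

import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class HealthEndpoint {
    public static void main(String[] args) throws Exception {
        // Answer 200 OK on /health so the Consul check stays passing.
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/health", exchange -> {
            byte[] body = "OK".getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
    }
}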
The temp.json and ConsulUtil.java files have been uploaded to the GitHub repository; reply 【源碼倉(cāng)庫(kù)】 to the public account to get the address.
3. With the utility class above you can register services and store and retrieve KV data
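As a quick usage sketch (assuming a local agent on 127.0.0.1:8500, as hard-coded in the utility class, and a service answering the /health check):

public class ConsulUtilDemo {
    public static void main(String[] args) {
        // Register the local "tomcat" service with its health check.
        ConsulUtil.serviceRegister();

        // Store a config value in the KV store and read it back.
        ConsulUtil.storeKV("redis/config/username", "Silence");
        System.out.println(ConsulUtil.getKV("redis/config/username"));

        // Inspect the cluster: server peers and the current Raft leader.
        System.out.println(ConsulUtil.findRaftPeers());
        System.out.println(ConsulUtil.findRaftLeader());
    }
}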
Building a Consul Cluster
1. Download and install Consul on three hosts. I have no physical machines here, so I use three VMs with IPs 192.168.231.145, 192.168.231.146, and 192.168.231.147
2. Start 145 and 146 in Server mode and 147 in Client mode. Server and Client here describe roles within the Consul cluster only; they have nothing to do with your application services!
3. Start 145 in Server mode, with node name n1 and datacenter dc1 throughout
[root@centos145 consul]# ./consul agent -server -bootstrap-expect 2 -data-dir /tmp/consul -node=n1 -bind=192.168.231.145 -datacenter=dc1
bootstrap_expect = 2: A cluster with 2 servers will provide no failure tolerance. See https://www.consul.io/docs/internals/consensus.html#deployment-table
bootstrap_expect > 0: expecting 2 servers
==> Starting Consul agent...
==> Consul agent running!
Version: 'v1.0.1'
Node ID: '6cc74ff7-7026-cbaa-5451-61f02114cd25'
Node name: 'n1'
Datacenter: 'dc1' (Segment: '<all>')
Server: true (Bootstrap: false)
Client Addr: [127.0.0.1] (HTTP: 8500, HTTPS: -1, DNS: 8600)
Cluster Addr: 192.168.231.145 (LAN: 8301, WAN: 8302)
Encrypt: Gossip: false, TLS-Outgoing: false, TLS-Incoming: false
==> Log data will now stream in as it occurs:
2017/12/06 23:26:21 [INFO] raft: Initial configuration (index=0): []
2017/12/06 23:26:21 [INFO] serf: EventMemberJoin: n1.dc1 192.168.231.145
2017/12/06 23:26:21 [INFO] serf: EventMemberJoin: n1 192.168.231.145
2017/12/06 23:26:21 [INFO] agent: Started DNS server 127.0.0.1:8600 (udp)
2017/12/06 23:26:21 [INFO] raft: Node at 192.168.231.145:8300 [Follower] entering Follower state (Leader: "")
2017/12/06 23:26:21 [INFO] consul: Adding LAN server n1 (Addr: tcp/192.168.231.145:8300) (DC: dc1)
2017/12/06 23:26:21 [INFO] consul: Handled member-join event for server "n1.dc1" in area "wan"
2017/12/06 23:26:21 [INFO] agent: Started DNS server 127.0.0.1:8600 (tcp)
2017/12/06 23:26:21 [INFO] agent: Started HTTP server on 127.0.0.1:8500 (tcp)
2017/12/06 23:26:21 [INFO] agent: started state syncer
2017/12/06 23:26:28 [ERR] agent: failed to sync remote state: No cluster leader
2017/12/06 23:26:30 [WARN] raft: no known peers, aborting election
2017/12/06 23:26:49 [ERR] agent: Coordinate update error: No cluster leader
2017/12/06 23:26:54 [ERR] agent: failed to sync remote state: No cluster leader
2017/12/06 23:27:24 [ERR] agent: Coordinate update error: No cluster leader
2017/12/06 23:27:27 [ERR] agent: failed to sync remote state: No cluster leader
2017/12/06 23:27:56 [ERR] agent: Coordinate update error: No cluster leader
2017/12/06 23:28:02 [ERR] agent: failed to sync remote state: No cluster leader
2017/12/06 23:28:27 [ERR] agent: failed to sync remote state: No cluster leader
2017/12/06 23:28:33 [ERR] agent: Coordinate update error: No cluster leader
Only 145 is up so far, so there is no cluster yet.
4. Start 146 in Server mode with node name n2, enabling the web UI on n2
[root@centos146 consul]# ./consul agent -server -bootstrap-expect 2 -data-dir /tmp/consul -node=n2 -bind=192.168.231.146 -datacenter=dc1 -ui
bootstrap_expect = 2: A cluster with 2 servers will provide no failure tolerance. See https://www.consul.io/docs/internals/consensus.html#deployment-table
bootstrap_expect > 0: expecting 2 servers
==> Starting Consul agent...
==> Consul agent running!
Version: 'v1.0.1'
Node ID: 'eb083280-c403-668f-e193-60805c7c856a'
Node name: 'n2'
Datacenter: 'dc1' (Segment: '<all>')
Server: true (Bootstrap: false)
Client Addr: [127.0.0.1] (HTTP: 8500, HTTPS: -1, DNS: 8600)
Cluster Addr: 192.168.231.146 (LAN: 8301, WAN: 8302)
Encrypt: Gossip: false, TLS-Outgoing: false, TLS-Incoming: false
==> Log data will now stream in as it occurs:
2017/12/06 23:28:30 [INFO] raft: Initial configuration (index=0): []
2017/12/06 23:28:30 [INFO] serf: EventMemberJoin: n2.dc1 192.168.231.146
2017/12/06 23:28:31 [INFO] serf: EventMemberJoin: n2 192.168.231.146
2017/12/06 23:28:31 [INFO] raft: Node at 192.168.231.146:8300 [Follower] entering Follower state (Leader: "")
2017/12/06 23:28:31 [INFO] consul: Adding LAN server n2 (Addr: tcp/192.168.231.146:8300) (DC: dc1)
2017/12/06 23:28:31 [INFO] consul: Handled member-join event for server "n2.dc1" in area "wan"
2017/12/06 23:28:31 [INFO] agent: Started DNS server 127.0.0.1:8600 (tcp)
2017/12/06 23:28:31 [INFO] agent: Started DNS server 127.0.0.1:8600 (udp)
2017/12/06 23:28:31 [INFO] agent: Started HTTP server on 127.0.0.1:8500 (tcp)
2017/12/06 23:28:31 [INFO] agent: started state syncer
2017/12/06 23:28:38 [ERR] agent: failed to sync remote state: No cluster leader
2017/12/06 23:28:39 [WARN] raft: no known peers, aborting election
2017/12/06 23:28:57 [ERR] agent: Coordinate update error: No cluster leader
2017/12/06 23:29:11 [ERR] agent: failed to sync remote state: No cluster leader
2017/12/06 23:29:30 [ERR] agent: Coordinate update error: No cluster leader
2017/12/06 23:29:38 [ERR] agent: failed to sync remote state: No cluster leader
2017/12/06 23:29:57 [ERR] agent: Coordinate update error: No cluster leader
Still no cluster: n1 and n2 are both running, but neither knows the other exists!
5. Join n1 to n2
[silence@centos145 consul]$ ./consul join 192.168.231.146
Both n1 and n2 now log that the cluster has been discovered.
6. At this point n1 and n2 are Server-mode nodes of the same cluster
7. Start 147 in Client mode
[root@centos147 consul]# ./consul agent -data-dir /tmp/consul -node=n3 -bind=192.168.231.147 -datacenter=dc1
==> Starting Consul agent...
==> Consul agent running!
Version: 'v1.0.1'
Node ID: 'be7132c3-643e-e5a2-9c34-cad99063a30e'
Node name: 'n3'
Datacenter: 'dc1' (Segment: '')
Server: false (Bootstrap: false)
Client Addr: [127.0.0.1] (HTTP: 8500, HTTPS: -1, DNS: 8600)
Cluster Addr: 192.168.231.147 (LAN: 8301, WAN: 8302)
Encrypt: Gossip: false, TLS-Outgoing: false, TLS-Incoming: false
==> Log data will now stream in as it occurs:
2017/12/06 23:36:46 [INFO] serf: EventMemberJoin: n3 192.168.231.147
2017/12/06 23:36:46 [INFO] agent: Started DNS server 127.0.0.1:8600 (udp)
2017/12/06 23:36:46 [INFO] agent: Started DNS server 127.0.0.1:8600 (tcp)
2017/12/06 23:36:46 [INFO] agent: Started HTTP server on 127.0.0.1:8500 (tcp)
2017/12/06 23:36:46 [INFO] agent: started state syncer
2017/12/06 23:36:46 [WARN] manager: No servers available
2017/12/06 23:36:46 [ERR] agent: failed to sync remote state: No known Consul servers
2017/12/06 23:37:08 [WARN] manager: No servers available
2017/12/06 23:37:08 [ERR] agent: failed to sync remote state: No known Consul servers
2017/12/06 23:37:36 [WARN] manager: No servers available
2017/12/06 23:37:36 [ERR] agent: failed to sync remote state: No known Consul servers
2017/12/06 23:38:02 [WARN] manager: No servers available
2017/12/06 23:38:02 [ERR] agent: failed to sync remote state: No known Consul servers
2017/12/06 23:38:22 [WARN] manager: No servers available
2017/12/06 23:38:22 [ERR] agent: failed to sync remote state: No known Consul servers
8. On n3, join node n3 to the cluster
[silence@centos147 consul]$ ./consul join 192.168.231.145
9. Check the cluster member list again with ./consul members
10. The three-node Consul cluster is now up: n1 and n2 were started in Server mode, n3 in Client mode.
11. The main difference between Consul's Server and Client modes is this: the -bootstrap-expect startup parameter controls how many Server nodes the cluster expects. Server-mode nodes maintain the cluster state, and if a Server node leaves the cluster, a new Leader election is triggered among the remaining Server-mode nodes (see the sketch below); Client-mode nodes, by contrast, can join and leave freely.
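To observe that re-election from a client, one rough approach is to poll the leader endpoint via the ConsulUtil class from earlier; note that ConsulUtil is hard-wired to 127.0.0.1:8500, so this sketch assumes it runs on one of the cluster's own nodes:

public class LeaderWatchDemo {
    public static void main(String[] args) throws InterruptedException {
        String lastLeader = "";
        while (true) {
            // /v1/status/leader returns the current leader's "ip:port".
            String leader = ConsulUtil.findRaftLeader();
            if (!leader.equals(lastLeader)) {
                System.out.println("leader changed: " + lastLeader + " -> " + leader);
                lastLeader = leader;
            }
            Thread.sleep(3000); // stop a Server node while this runs to watch a re-election
        }
    }
}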
12. Open the web UI on n2
That concludes this introduction to Consul for distributed service registration, discovery, and unified configuration management. I hope it helps.