Goal

Set up a single-replica, two-shard cluster on a single CentOS 7 server, for developing and testing against a sharded cluster.

1. Install ClickHouse

Install the latest LTS release. All LTS versions are listed at: https://repo.yandex.ru/clickhouse/rpm/lts/

yum install -y yum-utils
rpm --import https://repo.clickhouse.tech/CLICKHOUSE-KEY.GPG
yum-config-manager --add-repo https://repo.clickhouse.tech/rpm/stable/x86_64
yum install -y https://repo.yandex.ru/clickhouse/rpm/lts/clickhouse-common-static-22.8.5.29.x86_64.rpm
yum install -y https://repo.yandex.ru/clickhouse/rpm/lts/clickhouse-client-22.8.5.29.x86_64.rpm
yum install -y https://repo.yandex.ru/clickhouse/rpm/lts/clickhouse-server-22.8.5.29.x86_64.rpm
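
A quick sanity check that the packages installed and the binaries run:

clickhouse-server --version
clickhouse-client --version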

2. Create the required directories

# Create the data directories for the two shards
mkdir /work/clickhouse
mkdir /work/clickhouse2
chmod 700 /work/clickhouse
chmod 700 /work/clickhouse2
chown clickhouse:clickhouse /work/clickhouse
chown clickhouse:clickhouse /work/clickhouse2

# Create the PID directories for both servers, plus shard 2's log and config directories
mkdir /run/clickhouse-server
mkdir /run/clickhouse-server2
chown clickhouse:clickhouse /run/clickhouse-server
chown clickhouse:clickhouse /run/clickhouse-server2

mkdir /var/log/clickhouse-server2
chown clickhouse:clickhouse /var/log/clickhouse-server2

mkdir /etc/clickhouse-server2/
chmod 700 /etc/clickhouse-server2/
chown clickhouse:clickhouse /etc/clickhouse-server2/
rsync -avhP /etc/clickhouse-server/* /etc/clickhouse-server2/

# Create the data directory for Keeper 3 (Keepers 1 and 2 store their
# coordination data under the shard directories created above)
mkdir /work/clickhouse3
chmod 700 /work/clickhouse3
chown clickhouse:clickhouse /work/clickhouse3

3. Edit shard 1's configuration

vim /etc/clickhouse-server/config.xml

<!-- Listen on all IPs -->
<listen_host>0.0.0.0</listen_host>

<!-- Change all data-related paths -->
<path>/work/clickhouse/</path>

<tmp_path>/work/clickhouse/tmp/</tmp_path>

<user_files_path>/work/clickhouse/user_files/</user_files_path>

<local_directory>
    <!-- Path to folder where users created by SQL commands are stored. -->
    <path>/work/clickhouse/access/</path>
</local_directory>

<format_schema_path>/work/clickhouse/format_schemas/</format_schema_path>

<!-- Edit remote_servers -->
    <remote_servers>
        <!-- Test only shard config for testing distributed storage -->
        <two_shards>
             <shard>
                 <replica>
                     <host>localhost</host>
                     <port>9000</port>
                     <password>YOUR_PASSWORD</password>
                 </replica>
             </shard>
             <shard>
                 <replica>
                     <host>localhost</host>
                     <port>9010</port>
                     <password>YOUR_PASSWORD</password>
                 </replica>
             </shard>
        </two_shards>
    </remote_servers>

<!-- Add the Keeper configuration -->
<zookeeper>
    <node>
        <host>localhost</host>
        <port>2171</port>
    </node>
    <node>
        <host>localhost</host>
        <port>2181</port>
    </node>
    <node>
        <host>localhost</host>
        <port>2191</port>
    </node>
</zookeeper>

<!-- Add the macros configuration -->
<macros>
    <shard>shard1</shard>
    <replica>replica1</replica>
</macros>
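
Before starting the server, you can read keys back from the merged config with the extract-from-config tool that ships with the server package; the dotted key names below assume the edits above took effect:

clickhouse extract-from-config --config-file /etc/clickhouse-server/config.xml --key path
clickhouse extract-from-config --config-file /etc/clickhouse-server/config.xml --key macros.shard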

4. Edit shard 2's configuration

vim /etc/clickhouse-server2/config.xml

<!-- Listen on all IPs -->
<listen_host>0.0.0.0</listen_host>

<!-- Add 10 to every port to avoid clashing with shard 1 -->
    <http_port>8133</http_port>
    <tcp_port>9010</tcp_port>
    <mysql_port>9014</mysql_port>
    <postgresql_port>9015</postgresql_port>
    <interserver_http_port>9019</interserver_http_port>

<!-- Change all data-related paths -->
<path>/work/clickhouse2/</path>

<tmp_path>/work/clickhouse2/tmp/</tmp_path>

<user_files_path>/work/clickhouse2/user_files/</user_files_path>

<local_directory>
    <!-- Path to folder where users created by SQL commands are stored. -->
    <path>/work/clickhouse2/access/</path>
</local_directory>

<format_schema_path>/work/clickhouse2/format_schemas/</format_schema_path>

<!-- Edit remote_servers -->
    <remote_servers>
        <!-- Test only shard config for testing distributed storage -->
        <two_shards>
             <shard>
                 <replica>
                     <host>localhost</host>
                     <port>9000</port>
                     <password>YOUR_PASSWORD</password>
                 </replica>
             </shard>
             <shard>
                 <replica>
                     <host>localhost</host>
                     <port>9010</port>
                     <password>YOUR_PASSWORD</password>
                 </replica>
             </shard>
        </two_shards>
    </remote_servers>

<!-- Add the Keeper configuration -->
<zookeeper>
    <node>
        <host>localhost</host>
        <port>2171</port>
    </node>
    <node>
        <host>localhost</host>
        <port>2181</port>
    </node>
    <node>
        <host>localhost</host>
        <port>2191</port>
    </node>
</zookeeper>

<!-- Add the macros configuration -->
<macros>
    <shard>shard2</shard>   <!-- Note: this differs from shard 1's config -->
    <replica>replica1</replica>
</macros>

Note: the password in remote_servers must match the password in your users.xml. If it does not, distributed queries fail with: DB::Exception: default: Authentication failed: password is incorrect or there is no user with such name. (AUTHENTICATION_FAILED).

If your user is not default, you can also add a <user></user> tag inside each <replica> to specify the username, as shown below.
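
For example, a replica entry with an explicit user looks like this (YOUR_USER and YOUR_PASSWORD are placeholders):

<replica>
    <host>localhost</host>
    <port>9000</port>
    <user>YOUR_USER</user>
    <password>YOUR_PASSWORD</password>
</replica>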

5. Add the ClickHouse Keeper configuration

ClickHouse Keeper needs at least three nodes to hold a quorum, so we cannot simply enable the Keeper embedded in our two ClickHouse server configs. Instead, we write three standalone ClickHouse Keeper config files and manage the Keeper processes with supervisor.

mkdir /etc/clickhouse-keeper/
cd /etc/clickhouse-keeper/
touch keeper1.xml
touch keeper2.xml
touch keeper3.xml
cd .. && chown -R clickhouse:clickhouse clickhouse-keeper

keeper1.xml

<?xml version="1.0"?>
<clickhouse>
<keeper_server>
    <tcp_port>2171</tcp_port>
    <server_id>1</server_id>
    <log_storage_path>/work/clickhouse/coordination/log</log_storage_path>
    <snapshot_storage_path>/work/clickhouse/coordination/snapshots</snapshot_storage_path>
    <coordination_settings>
        <operation_timeout_ms>10000</operation_timeout_ms>
        <session_timeout_ms>30000</session_timeout_ms>
        <raft_logs_level>trace</raft_logs_level>
    </coordination_settings>

    <raft_configuration>
        <server>
            <id>1</id>
            <hostname>localhost</hostname>
            <port>9444</port>
        </server>
        <server>
            <id>2</id>
            <hostname>localhost</hostname>
            <port>9454</port>
        </server>
        <server>
            <id>3</id>
            <hostname>localhost</hostname>
            <port>9464</port>
        </server>
    </raft_configuration>
</keeper_server>
</clickhouse>

keeper2.xml

<?xml version="1.0"?>
<clickhouse>
<keeper_server>
    <tcp_port>2181</tcp_port>
    <server_id>2</server_id>
    <log_storage_path>/work/clickhouse2/coordination/log</log_storage_path>
    <snapshot_storage_path>/work/clickhouse2/coordination/snapshots</snapshot_storage_path>
    <coordination_settings>
        <operation_timeout_ms>10000</operation_timeout_ms>
        <session_timeout_ms>30000</session_timeout_ms>
        <raft_logs_level>trace</raft_logs_level>
    </coordination_settings>

    <raft_configuration>
        <server>
            <id>1</id>
            <hostname>localhost</hostname>
            <port>9444</port>
        </server>
        <server>
            <id>2</id>
            <hostname>localhost</hostname>
            <port>9454</port>
        </server>
        <server>
            <id>3</id>
            <hostname>localhost</hostname>
            <port>9464</port>
        </server>
    </raft_configuration>
</keeper_server>
</clickhouse>

keeper3.xml

<?xml version="1.0"?>
<clickhouse>
<keeper_server>
    <tcp_port>2191</tcp_port>
    <server_id>3</server_id>
    <log_storage_path>/work/clickhouse3/coordination/log</log_storage_path>
    <snapshot_storage_path>/work/clickhouse3/coordination/snapshots</snapshot_storage_path>
    <coordination_settings>
        <operation_timeout_ms>10000</operation_timeout_ms>
        <session_timeout_ms>30000</session_timeout_ms>
        <raft_logs_level>trace</raft_logs_level>
    </coordination_settings>

    <raft_configuration>
        <server>
            <id>1</id>
            <hostname>localhost</hostname>
            <port>9444</port>
        </server>
        <server>
            <id>2</id>
            <hostname>localhost</hostname>
            <port>9454</port>
        </server>
        <server>
            <id>3</id>
            <hostname>localhost</hostname>
            <port>9464</port>
        </server>
    </raft_configuration>
</keeper_server>
</clickhouse>

6. Manage clickhouse-server and clickhouse-keeper with supervisor

Create the files clickhouse1.ini, clickhouse2.ini, keeper1.ini, keeper2.ini, and keeper3.ini under /etc/supervisord.d/.

Shard 1: clickhouse1.ini

[program:clickhouse1]
command=/usr/bin/clickhouse-server --config=/etc/clickhouse-server/config.xml --pid-file=/run/clickhouse-server/clickhouse-server.pid
autostart=true
user=clickhouse

Shard 2: clickhouse2.ini

[program:clickhouse2]
command=/usr/bin/clickhouse-server --config=/etc/clickhouse-server2/config.xml --pid-file=/run/clickhouse-server2/clickhouse-server.pid
autostart=true
user=clickhouse

keeper1.ini

[program:keeper1]
command=/usr/bin/clickhouse-keeper --config /etc/clickhouse-keeper/keeper1.xml
autostart=true
user=clickhouse

keeper2.ini

[program:keeper2]
command=/usr/bin/clickhouse-keeper --config /etc/clickhouse-keeper/keeper2.xml
autostart=true
user=clickhouse

keeper3.ini

[program:keeper3]
command=/usr/bin/clickhouse-keeper --config /etc/clickhouse-keeper/keeper3.xml
autostart=true
user=clickhouse

Start everything: supervisorctl update
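
Once loaded, confirm that all five programs show as RUNNING:

supervisorctl status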

7. Check the cluster status

Check listening ports and server processes:

netstat -nlpt | grep clickhouse
ps aux | grep clickhouse
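
To verify the three Keepers have formed a quorum, you can query them with ZooKeeper-style four-letter-word commands on their client ports (ruok and mntr are in Keeper's default whitelist):

echo ruok | nc localhost 2171
echo mntr | nc localhost 2181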

Check the logs:

tail -f /var/log/clickhouse-server/clickhouse-server.err.log
tail -f /var/log/clickhouse-server2/clickhouse-server.err.log

Check the status from clickhouse-client:

select * from system.clusters;
select * from system.macros;
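
As an end-to-end smoke test, you can create a replicated table on both shards plus a Distributed table over it. This is a minimal sketch; the test database and the hits_local/hits table names are only illustrative:

-- Run the DDL once; ON CLUSTER propagates it to both shards via Keeper
CREATE DATABASE IF NOT EXISTS test ON CLUSTER two_shards;

CREATE TABLE test.hits_local ON CLUSTER two_shards
(
    id UInt64,
    ts DateTime
)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{shard}/hits_local', '{replica}')
ORDER BY id;

-- Distributed table routing reads/writes across the two_shards cluster
CREATE TABLE test.hits ON CLUSTER two_shards AS test.hits_local
ENGINE = Distributed(two_shards, test, hits_local, rand());

INSERT INTO test.hits VALUES (1, now()), (2, now());
SELECT * FROM test.hits;

The {shard} and {replica} placeholders are filled from the macros configured in steps 3 and 4, so each server registers its own ZooKeeper path, and rand() spreads inserted rows across the two shards.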
