
J.J. Huang   2023-09-15   Docker Docker Compose

Docker Compose - ClickHouse (single)

Preface

  • ClickHouse is a column-oriented database management system (DBMS), used for online analytical processing (OLAP) queries.
  • ClickHouse was developed by the Russian IT company Yandex for the Yandex.Metrica web analytics service. It allows analysis of data that is updated in real time, and the system is designed for high performance.

For a deeper explanation of column-oriented versus row-oriented storage, see the official documentation: What is ClickHouse?

This setup also uses an open-source, lightweight business-intelligence application and SQL editor for ClickHouse: spoonest/clickhouse-tabix-web-client.

Docker Image

The image clickhouse/clickhouse-server:23.5.3.24 from Docker Hub is used directly.
In addition, spoonest/clickhouse-tabix-web-client from Docker Hub is attached as a companion container.

Docker Compose

  • docker-compose.yml
    version: "3.3"
    services:
      clickhouse:
        image: clickhouse/clickhouse-server:23.5.3.24
        container_name: clickhouse
        ulimits:
          nproc: 65535
          nofile:
            soft: 262144
            hard: 262144
        ports:
          - "8123:8123"
          - "9000:9000"
          - "9004:9004" # mysql
          - "9005:9005" # postgres
          - "9009:9009"
      tabix-web-client:
        image: spoonest/clickhouse-tabix-web-client
        container_name: tabix-web-client
        ports:
          - "8080:80"
    Notes:
    • The Compose file format version is 3.3.
    • Defines a service named clickhouse.
    • Uses the clickhouse/clickhouse-server:23.5.3.24 image.
    • Sets the container name to clickhouse.
    • Sets resource limits (ulimits) for the container:
      • nproc: maximum number of processes, 65535.
      • nofile: number of open files:
        • soft limit 262144 (the application may adjust it at any time, but it cannot exceed the hard limit).
        • hard limit 262144 (the system-wide ceiling; only root can raise it).
    • Exposes port 8123:8123 (HOST:CONTAINER format).
    • Exposes port 9000:9000 (HOST:CONTAINER format).
    • Exposes port 9004:9004 (HOST:CONTAINER format, MySQL protocol).
    • Exposes port 9005:9005 (HOST:CONTAINER format, PostgreSQL protocol).
    • Exposes port 9009:9009 (HOST:CONTAINER format).
    • Defines a service named tabix-web-client.
    • Uses the spoonest/clickhouse-tabix-web-client image.
    • Sets the container name to tabix-web-client.
    • Exposes port 8080:80 (HOST:CONTAINER format).
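
To bring the stack up and verify it, a minimal smoke test could look like the following (a sketch using standard docker-compose and curl commands; "Ok." is the default reply of ClickHouse's HTTP interface):

    # start both containers in the background
    docker-compose up -d

    # check the ClickHouse HTTP interface on the published port
    curl http://localhost:8123/
    # -> Ok.

    # run a trivial query over HTTP
    curl 'http://localhost:8123/?query=SELECT%201'
    # -> 1

    # the Tabix web UI is then reachable at http://localhost:8080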

Important ClickHouse file/path locations

  • clickhouse-server main configuration file:
    • /etc/clickhouse-server/config.xml
  • clickhouse-server user configuration file:
    • /etc/clickhouse-server/users.xml
  • clickhouse-server data storage path:
    • /var/lib/clickhouse
  • clickhouse-server log file path:
    • /var/log/clickhouse-server/clickhouse-server.log
  • clickhouse-server error log file path:
    • /var/log/clickhouse-server/clickhouse-server.err.log
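
Since nothing is mounted from the host, the quickest way to look at these files is to exec into the running container (plain docker CLI; the container name clickhouse comes from the compose file above):

    # open an interactive shell inside the container
    docker exec -it clickhouse bash

    # or read the main config / follow the log without entering the container
    docker exec clickhouse head -n 40 /etc/clickhouse-server/config.xml
    docker exec clickhouse tail -f /var/log/clickhouse-server/clickhouse-server.log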

Brief notes on the ClickHouse configuration files:

The configuration files below are the built-in defaults; no volumes are mounted to override or modify them.

  • /etc/clickhouse-server/config.xml

• This is the main configuration file. It covers things such as ClickHouse's externally exposed ports, user settings, included configuration files, and so on. The built-in defaults are used here as well.

      Note: see the official documentation on Configuration Files and Global Server Settings.
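
If you do want to override individual settings later without replacing config.xml itself, ClickHouse also reads and merges XML files placed under /etc/clickhouse-server/config.d/. A minimal sketch (the file name custom.xml and the timezone value here are only illustrative):

    <!-- ./config.d/custom.xml -->
    <clickhouse>
        <timezone>UTC</timezone>
    </clickhouse>

with the corresponding bind mount added to the clickhouse service in docker-compose.yml:

    services:
      clickhouse:
        volumes:
          - ./config.d/custom.xml:/etc/clickhouse-server/config.d/custom.xml:ro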

  • 【config.xml】
    <!--
    NOTE: User and query level settings are set up in "users.xml" file.
    If you have accidentally specified user-level settings here, server won't start.
    You can either move the settings to the right place inside "users.xml" file
    or add <skip_check_for_incorrect_settings>1</skip_check_for_incorrect_settings> here.
    -->
    <clickhouse>
    <logger>
    <!-- Possible levels [1]:

    - none (turns off logging)
    - fatal
    - critical
    - error
    - warning
    - notice
    - information
    - debug
    - trace
    - test (not for production usage)

    [1]: https://github.com/pocoproject/poco/blob/poco-1.9.4-release/Foundation/include/Poco/Logger.h#L105-L114
    -->
    <level>trace</level>
    <log>/var/log/clickhouse-server/clickhouse-server.log</log>
    <errorlog>/var/log/clickhouse-server/clickhouse-server.err.log</errorlog>
    <!-- Rotation policy
    See https://github.com/pocoproject/poco/blob/poco-1.9.4-release/Foundation/include/Poco/FileChannel.h#L54-L85
    -->
    <size>1000M</size>
    <count>10</count>
    <!-- <console>1</console> --> <!-- Default behavior is autodetection (log to console if not daemon mode and is tty) -->

    <!-- Per level overrides (legacy):

    For example to suppress logging of the ConfigReloader you can use:
    NOTE: levels.logger is reserved, see below.
    -->
    <!--
    <levels>
    <ConfigReloader>none</ConfigReloader>
    </levels>
    -->

    <!-- Per level overrides:

    For example to suppress logging of the RBAC for default user you can use:
    (But please note that the logger name maybe changed from version to version, even after minor upgrade)
    -->
    <!--
    <levels>
    <logger>
    <name>ContextAccess (default)</name>
    <level>none</level>
    </logger>
    <logger>
    <name>DatabaseOrdinary (test)</name>
    <level>none</level>
    </logger>
    </levels>
    -->
    <!-- Structured log formatting:
    You can specify log format(for now, JSON only). In that case, the console log will be printed
    in specified format like JSON.
    For example, as below:
    {"date_time":"1650918987.180175","thread_name":"#1","thread_id":"254545","level":"Trace","query_id":"","logger_name":"BaseDaemon","message":"Received signal 2","source_file":"../base/daemon/BaseDaemon.cpp; virtual void SignalListener::run()","source_line":"192"}
    To enable JSON logging support, please uncomment the entire <formatting> tag below.

    a) You can modify key names by changing values under tag values inside <names> tag.
    For example, to change DATE_TIME to MY_DATE_TIME, you can do like:
    <date_time>MY_DATE_TIME</date_time>
    b) You can stop unwanted log properties to appear in logs. To do so, you can simply comment out (recommended)
    that property from this file.
    For example, if you do not want your log to print query_id, you can comment out only <query_id> tag.
    However, if you comment out all the tags under <names>, the program will print default values for as
    below.
    -->
    <!-- <formatting>
    <type>json</type>
    <names>
    <date_time>date_time</date_time>
    <thread_name>thread_name</thread_name>
    <thread_id>thread_id</thread_id>
    <level>level</level>
    <query_id>query_id</query_id>
    <logger_name>logger_name</logger_name>
    <message>message</message>
    <source_file>source_file</source_file>
    <source_line>source_line</source_line>
    </names>
    </formatting> -->
    </logger>

    <!-- Add headers to response in options request. OPTIONS method is used in CORS preflight requests. -->
    <!-- It is off by default. The following headers are required for CORS.-->
    <!-- http_options_response>
    <header>
    <name>Access-Control-Allow-Origin</name>
    <value>*</value>
    </header>
    <header>
    <name>Access-Control-Allow-Headers</name>
    <value>origin, x-requested-with</value>
    </header>
    <header>
    <name>Access-Control-Allow-Methods</name>
    <value>POST, GET, OPTIONS</value>
    </header>
    <header>
    <name>Access-Control-Max-Age</name>
    <value>86400</value>
    </header>
    </http_options_response -->

    <!-- It is the name that will be shown in the clickhouse-client.
    By default, anything with "production" will be highlighted in red in query prompt.
    -->
    <!--display_name>production</display_name-->

    <!-- Port for HTTP API. See also 'https_port' for secure connections.
    This interface is also used by ODBC and JDBC drivers (DataGrip, Dbeaver, ...)
    and by most of web interfaces (embedded UI, Grafana, Redash, ...).
    -->
    <http_port>8123</http_port>
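    <!-- Example (illustrative, not part of the default file): with the 8123:8123 mapping
         from docker-compose.yml, the HTTP interface can be exercised from the host:
           curl 'http://localhost:8123/'                  -> Ok.
           curl 'http://localhost:8123/?query=SELECT%201' -> 1
    -->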

    <!-- Port for interaction by native protocol with:
    - clickhouse-client and other native ClickHouse tools (clickhouse-benchmark, clickhouse-copier);
    - clickhouse-server with other clickhouse-servers for distributed query processing;
    - ClickHouse drivers and applications supporting native protocol
    (this protocol is also informally called as "the TCP protocol");
    See also 'tcp_port_secure' for secure connections.
    -->
    <tcp_port>9000</tcp_port>
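    <!-- Example (illustrative, not part of the default file): clickhouse-client speaks this
         native protocol; the simplest check is to run it inside the container:
           docker exec clickhouse clickhouse-client --query "SELECT version()"
    -->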

    <!-- Compatibility with MySQL protocol.
    ClickHouse will pretend to be MySQL for applications connecting to this port.
    -->
    <mysql_port>9004</mysql_port>

    <!-- Compatibility with PostgreSQL protocol.
    ClickHouse will pretend to be PostgreSQL for applications connecting to this port.
    -->
    <postgresql_port>9005</postgresql_port>
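    <!-- Example (illustrative, not part of the default file): with the 9004/9005 mappings
         from docker-compose.yml, stock MySQL and PostgreSQL clients can connect:
           mysql --protocol tcp -h 127.0.0.1 -P 9004 -u default
           psql -h 127.0.0.1 -p 9005 -U default
    -->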

    <!-- HTTP API with TLS (HTTPS).
    You have to configure certificate to enable this interface.
    See the openSSL section below.
    -->
    <!-- <https_port>8443</https_port> -->

    <!-- Native interface with TLS.
    You have to configure certificate to enable this interface.
    See the openSSL section below.
    -->
    <!-- <tcp_port_secure>9440</tcp_port_secure> -->

    <!-- Native interface wrapped with PROXYv1 protocol
    PROXYv1 header sent for every connection.
    ClickHouse will extract information about proxy-forwarded client address from the header.
    -->
    <!-- <tcp_with_proxy_port>9011</tcp_with_proxy_port> -->

    <!-- Port for communication between replicas. Used for data exchange.
    It provides low-level data access between servers.
    This port should not be accessible from untrusted networks.
    See also 'interserver_http_credentials'.
    Data transferred over connections to this port should not go through untrusted networks.
    See also 'interserver_https_port'.
    -->
    <interserver_http_port>9009</interserver_http_port>

    <!-- Port for communication between replicas with TLS.
    You have to configure certificate to enable this interface.
    See the openSSL section below.
    See also 'interserver_http_credentials'.
    -->
    <!-- <interserver_https_port>9010</interserver_https_port> -->

    <!-- Hostname that is used by other replicas to request this server.
    If not specified, then it is determined analogous to 'hostname -f' command.
    This setting could be used to switch replication to another network interface
    (the server may be connected to multiple networks via multiple addresses)
    -->

    <!--
    <interserver_http_host>example.clickhouse.com</interserver_http_host>
    -->

    <!-- You can specify credentials for authentication between replicas.
    This is required when interserver_https_port is accessible from untrusted networks,
    and also recommended to avoid SSRF attacks from possibly compromised services in your network.
    -->
    <!--<interserver_http_credentials>
    <user>interserver</user>
    <password></password>
    </interserver_http_credentials>-->

    <!-- Listen specified address.
    Use :: (wildcard IPv6 address), if you want to accept connections both with IPv4 and IPv6 from everywhere.
    Notes:
    If you open connections from wildcard address, make sure that at least one of the following measures applied:
    - server is protected by firewall and not accessible from untrusted networks;
    - all users are restricted to subset of network addresses (see users.xml);
    - all users have strong passwords, only secure (TLS) interfaces are accessible, or connections are only made via TLS interfaces.
    - users without password have readonly access.
    See also: https://www.shodan.io/search?query=clickhouse
    -->
    <!-- <listen_host>::</listen_host> -->


    <!-- Same for hosts without support for IPv6: -->
    <!-- <listen_host>0.0.0.0</listen_host> -->

    <!-- Default values - try listen localhost on IPv4 and IPv6. -->
    <!--
    <listen_host>::1</listen_host>
    <listen_host>127.0.0.1</listen_host>
    -->

    <!-- <interserver_listen_host>::</interserver_listen_host> -->
    <!-- Listen host for communication between replicas. Used for data exchange -->
    <!-- Default values - equal to listen_host -->

    <!-- Don't exit if IPv6 or IPv4 networks are unavailable while trying to listen. -->
    <!-- <listen_try>0</listen_try> -->

    <!-- Allow multiple servers to listen on the same address:port. This is not recommended.
    -->
    <!-- <listen_reuse_port>0</listen_reuse_port> -->

    <!-- <listen_backlog>4096</listen_backlog> -->

    <max_connections>4096</max_connections>

    <!-- For 'Connection: keep-alive' in HTTP 1.1 -->
    <keep_alive_timeout>3</keep_alive_timeout>

    <!-- gRPC protocol (see src/Server/grpc_protos/clickhouse_grpc.proto for the API) -->
    <!-- <grpc_port>9100</grpc_port> -->
    <grpc>
    <enable_ssl>false</enable_ssl>

    <!-- The following two files are used only if enable_ssl=1 -->
    <ssl_cert_file>/path/to/ssl_cert_file</ssl_cert_file>
    <ssl_key_file>/path/to/ssl_key_file</ssl_key_file>

    <!-- Whether server will request client for a certificate -->
    <ssl_require_client_auth>false</ssl_require_client_auth>

    <!-- The following file is used only if ssl_require_client_auth=1 -->
    <ssl_ca_cert_file>/path/to/ssl_ca_cert_file</ssl_ca_cert_file>

    <!-- Default transport compression type (can be overridden by client, see the transport_compression_type field in QueryInfo).
    Supported algorithms: none, deflate, gzip, stream_gzip -->
    <transport_compression_type>none</transport_compression_type>

    <!-- Default transport compression level. Supported levels: 0..3 -->
    <transport_compression_level>0</transport_compression_level>

    <!-- Send/receive message size limits in bytes. -1 means unlimited -->
    <max_send_message_size>-1</max_send_message_size>
    <max_receive_message_size>-1</max_receive_message_size>

    <!-- Enable if you want very detailed logs -->
    <verbose_logs>false</verbose_logs>
    </grpc>

    <!-- Used with https_port and tcp_port_secure. Full ssl options list: https://github.com/ClickHouse-Extras/poco/blob/master/NetSSL_OpenSSL/include/Poco/Net/SSLManager.h#L71 -->
    <openSSL>
    <server> <!-- Used for https server AND secure tcp port -->
    <!-- openssl req -subj "/CN=localhost" -new -newkey rsa:2048 -days 365 -nodes -x509 -keyout /etc/clickhouse-server/server.key -out /etc/clickhouse-server/server.crt -->
    <!-- <certificateFile>/etc/clickhouse-server/server.crt</certificateFile>
    <privateKeyFile>/etc/clickhouse-server/server.key</privateKeyFile> -->
    <!-- dhparams are optional. You can delete the <dhParamsFile> element.
    To generate dhparams, use the following command:
    openssl dhparam -out /etc/clickhouse-server/dhparam.pem 4096
    Only file format with BEGIN DH PARAMETERS is supported.
    -->
    <!-- <dhParamsFile>/etc/clickhouse-server/dhparam.pem</dhParamsFile>-->
    <verificationMode>none</verificationMode>
    <loadDefaultCAFile>true</loadDefaultCAFile>
    <cacheSessions>true</cacheSessions>
    <disableProtocols>sslv2,sslv3</disableProtocols>
    <preferServerCiphers>true</preferServerCiphers>
    </server>

    <client> <!-- Used for connecting to https dictionary source and secured Zookeeper communication -->
    <loadDefaultCAFile>true</loadDefaultCAFile>
    <cacheSessions>true</cacheSessions>
    <disableProtocols>sslv2,sslv3</disableProtocols>
    <preferServerCiphers>true</preferServerCiphers>
    <!-- Use for self-signed: <verificationMode>none</verificationMode> -->
    <invalidCertificateHandler>
    <!-- Use for self-signed: <name>AcceptCertificateHandler</name> -->
    <name>RejectCertificateHandler</name>
    </invalidCertificateHandler>
    </client>
    </openSSL>

    <!-- Default root page on http[s] server. For example load UI from https://tabix.io/ when opening http://localhost:8123 -->
    <!--
    <http_server_default_response><![CDATA[<html ng-app="SMI2"><head><base href="http://ui.tabix.io/"></head><body><div ui-view="" class="content-ui"></div><script src="http://loader.tabix.io/master.js"></script></body></html>]]></http_server_default_response>
    -->

    <!-- The maximum number of query processing threads, excluding threads for retrieving data from remote servers, allowed to run all queries.
    This is not a hard limit. In case if the limit is reached the query will still get at least one thread to run.
    Query can upscale to desired number of threads during execution if more threads become available.
    -->
    <concurrent_threads_soft_limit_num>0</concurrent_threads_soft_limit_num>
    <concurrent_threads_soft_limit_ratio_to_cores>0</concurrent_threads_soft_limit_ratio_to_cores>

    <!-- Maximum number of concurrent queries. -->
    <max_concurrent_queries>100</max_concurrent_queries>

    <!-- Maximum memory usage (resident set size) for server process.
    Zero value or unset means default. Default is "max_server_memory_usage_to_ram_ratio" of available physical RAM.
    If the value is larger than "max_server_memory_usage_to_ram_ratio" of available physical RAM, it will be cut down.

    The constraint is checked on query execution time.
    If a query tries to allocate memory and the current memory usage plus allocation is greater
    than specified threshold, exception will be thrown.

    It is not practical to set this constraint to small values like just a few gigabytes,
    because memory allocator will keep this amount of memory in caches and the server will deny service of queries.
    -->
    <max_server_memory_usage>0</max_server_memory_usage>

    <!-- Maximum number of threads in the Global thread pool.
    This will default to a maximum of 10000 threads if not specified.
    This setting will be useful in scenarios where there are a large number
    of distributed queries that are running concurrently but are idling most
    of the time, in which case a higher number of threads might be required.
    -->

    <max_thread_pool_size>10000</max_thread_pool_size>

    <!-- Configure other thread pools: -->
    <!--
    <background_buffer_flush_schedule_pool_size>16</background_buffer_flush_schedule_pool_size>
    <background_pool_size>16</background_pool_size>
    <background_merges_mutations_concurrency_ratio>2</background_merges_mutations_concurrency_ratio>
    <background_merges_mutations_scheduling_policy>round_robin</background_merges_mutations_scheduling_policy>
    <background_move_pool_size>8</background_move_pool_size>
    <background_fetches_pool_size>8</background_fetches_pool_size>
    <background_common_pool_size>8</background_common_pool_size>
    <background_schedule_pool_size>128</background_schedule_pool_size>
    <background_message_broker_schedule_pool_size>16</background_message_broker_schedule_pool_size>
    <background_distributed_schedule_pool_size>16</background_distributed_schedule_pool_size>
    -->

    <!-- On memory constrained environments you may have to set this to value larger than 1.
    -->
    <max_server_memory_usage_to_ram_ratio>0.9</max_server_memory_usage_to_ram_ratio>

    <!-- Simple server-wide memory profiler. Collect a stack trace at every peak allocation step (in bytes).
    Data will be stored in system.trace_log table with query_id = empty string.
    Zero means disabled.
    -->
    <total_memory_profiler_step>4194304</total_memory_profiler_step>

    <!-- Collect random allocations and deallocations and write them into system.trace_log with 'MemorySample' trace_type.
    The probability is for every alloc/free regardless of the size of the allocation.
    Note that sampling happens only when the amount of untracked memory exceeds the untracked memory limit,
    which is 4 MiB by default but can be lowered if 'total_memory_profiler_step' is lowered.
    You may want to set 'total_memory_profiler_step' to 1 for extra fine grained sampling.
    -->
    <total_memory_tracker_sample_probability>0</total_memory_tracker_sample_probability>

    <!-- Set limit on number of open files (default: maximum). This setting makes sense on Mac OS X because getrlimit() fails to retrieve
    correct maximum value. -->
    <!-- <max_open_files>262144</max_open_files> -->

    <!-- Size of cache of uncompressed blocks of data, used in tables of MergeTree family.
    In bytes. Cache is single for server. Memory is allocated only on demand.
    Cache is used when 'use_uncompressed_cache' user setting turned on (off by default).
    Uncompressed cache is advantageous only for very short queries and in rare cases.

    Note: uncompressed cache can be pointless for lz4, because memory bandwidth
    is slower than multi-core decompression on some server configurations.
    Enabling it can sometimes paradoxically make queries slower.
    -->
    <uncompressed_cache_size>8589934592</uncompressed_cache_size>

    <!-- Approximate size of mark cache, used in tables of MergeTree family.
    In bytes. Cache is single for server. Memory is allocated only on demand.
    You should not lower this value.
    -->
    <mark_cache_size>5368709120</mark_cache_size>


    <!-- If you enable the `min_bytes_to_use_mmap_io` setting,
    the data in MergeTree tables can be read with mmap to avoid copying from kernel to userspace.
    It makes sense only for large files and helps only if data reside in page cache.
    To avoid frequent open/mmap/munmap/close calls (which are very expensive due to consequent page faults)
    and to reuse mappings from several threads and queries,
    the cache of mapped files is maintained. Its size is the number of mapped regions (usually equal to the number of mapped files).
    The amount of data in mapped files can be monitored
    in system.metrics, system.metric_log by the MMappedFiles, MMappedFileBytes metrics
    and in system.asynchronous_metrics, system.asynchronous_metrics_log by the MMapCacheCells metric,
    and also in system.events, system.processes, system.query_log, system.query_thread_log, system.query_views_log by the
    CreatedReadBufferMMap, CreatedReadBufferMMapFailed, MMappedFileCacheHits, MMappedFileCacheMisses events.
    Note that the amount of data in mapped files does not consume memory directly and is not accounted
    in query or server memory usage - because this memory can be discarded similar to OS page cache.
    The cache is dropped (the files are closed) automatically on removal of old parts in MergeTree,
    also it can be dropped manually by the SYSTEM DROP MMAP CACHE query.
    -->
    <mmap_cache_size>1000</mmap_cache_size>

    <!-- Cache size in bytes for compiled expressions.-->
    <compiled_expression_cache_size>134217728</compiled_expression_cache_size>

    <!-- Cache size in elements for compiled expressions.-->
    <compiled_expression_cache_elements_size>10000</compiled_expression_cache_elements_size>

    <!-- Path to data directory, with trailing slash. -->
    <path>/var/lib/clickhouse/</path>

    <!-- Multi-disk configuration example: -->
    <!--
    <storage_configuration>
    <disks>
    <default>
    <keep_free_space_bytes>0</keep_free_space_bytes>
    </default>
    <data>
    <path>/data/</path>
    <keep_free_space_bytes>0</keep_free_space_bytes>
    </data>
    <s3>
    <type>s3</type>
    <endpoint>http://path/to/endpoint</endpoint>
    <access_key_id>your_access_key_id</access_key_id>
    <secret_access_key>your_secret_access_key</secret_access_key>
    </s3>
    <blob_storage_disk>
    <type>azure_blob_storage</type>
    <storage_account_url>http://account.blob.core.windows.net</storage_account_url>
    <container_name>container</container_name>
    <account_name>account</account_name>
    <account_key>pass123</account_key>
    <metadata_path>/var/lib/clickhouse/disks/blob_storage_disk/</metadata_path>
    <cache_enabled>true</cache_enabled>
    <cache_path>/var/lib/clickhouse/disks/blob_storage_disk/cache/</cache_path>
    <skip_access_check>false</skip_access_check>
    </blob_storage_disk>
    </disks>

    <policies>
    <all>
    <volumes>
    <main>
    <disk>default</disk>
    <disk>data</disk>
    <disk>s3</disk>
    <disk>blob_storage_disk</disk>

    <max_data_part_size_bytes></max_data_part_size_bytes>
    <max_data_part_size_ratio></max_data_part_size_ratio>
    <perform_ttl_move_on_insert>true</perform_ttl_move_on_insert>
    <prefer_not_to_merge>false</prefer_not_to_merge>
    <load_balancing>round_robin</load_balancing>
    </main>
    </volumes>
    <move_factor>0.2</move_factor>
    </all>
    </policies>
    </storage_configuration>
    -->


    <!-- Path to temporary data for processing hard queries. -->
    <tmp_path>/var/lib/clickhouse/tmp/</tmp_path>

    <!-- Disable AuthType plaintext_password and no_password for ACL. -->
    <allow_plaintext_password>1</allow_plaintext_password>
    <allow_no_password>1</allow_no_password>
    <allow_implicit_no_password>1</allow_implicit_no_password>

    <!-- When a user does not specify a password type in the CREATE USER query, the default password type is used.
    Accepted values are: 'plaintext_password', 'sha256_password', 'double_sha1_password', 'bcrypt_password'.
    -->
    <default_password_type>sha256_password</default_password_type>

    <!-- Work factor for bcrypt_password authentication type-->
    <bcrypt_workfactor>12</bcrypt_workfactor>
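    <!-- Example (illustrative, not part of the default file): with the default password type
         set to sha256_password, a plain
           CREATE USER alice IDENTIFIED BY 'secret'
         stores a SHA-256 hash, while a type can still be chosen explicitly:
           CREATE USER bob IDENTIFIED WITH bcrypt_password BY 'secret'
         (user names and passwords here are placeholders)
    -->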

    <!-- Complexity requirements for user passwords. -->
    <!-- <password_complexity>
    <rule>
    <pattern>.{12}</pattern>
    <message>be at least 12 characters long</message>
    </rule>
    <rule>
    <pattern>\p{N}</pattern>
    <message>contain at least 1 numeric character</message>
    </rule>
    <rule>
    <pattern>\p{Ll}</pattern>
    <message>contain at least 1 lowercase character</message>
    </rule>
    <rule>
    <pattern>\p{Lu}</pattern>
    <message>contain at least 1 uppercase character</message>
    </rule>
    <rule>
    <pattern>[^\p{L}\p{N}]</pattern>
    <message>contain at least 1 special character</message>
    </rule>
    </password_complexity> -->

    <!-- Policy from the <storage_configuration> for the temporary files.
    If not set <tmp_path> is used, otherwise <tmp_path> is ignored.

    Notes:
    - move_factor is ignored
    - keep_free_space_bytes is ignored
    - max_data_part_size_bytes is ignored
    - you must have exactly one volume in that policy
    -->
    <!-- <tmp_policy>tmp</tmp_policy> -->

    <!-- Directory with user provided files that are accessible by 'file' table function. -->
    <user_files_path>/var/lib/clickhouse/user_files/</user_files_path>
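    <!-- Example (illustrative, not part of the default file): a file placed under
         /var/lib/clickhouse/user_files/, say data.csv, can then be queried with the
         'file' table function:
           SELECT * FROM file('data.csv', 'CSV', 'a UInt32, b String')
    -->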

    <!-- LDAP server definitions. -->
    <ldap_servers>
    <!-- List LDAP servers with their connection parameters here to later 1) use them as authenticators for dedicated local users,
    who have 'ldap' authentication mechanism specified instead of 'password', or to 2) use them as remote user directories.
    Parameters:
    host - LDAP server hostname or IP, this parameter is mandatory and cannot be empty.
    port - LDAP server port, default is 636 if enable_tls is set to true, 389 otherwise.
    bind_dn - template used to construct the DN to bind to.
    The resulting DN will be constructed by replacing all '{user_name}' substrings of the template with the actual
    user name during each authentication attempt.
    user_dn_detection - section with LDAP search parameters for detecting the actual user DN of the bound user.
    This is mainly used in search filters for further role mapping when the server is Active Directory. The
    resulting user DN will be used when replacing '{user_dn}' substrings wherever they are allowed. By default,
    user DN is set equal to bind DN, but once the search is performed, it will be updated to the actual detected
    user DN value.
    base_dn - template used to construct the base DN for the LDAP search.
    The resulting DN will be constructed by replacing all '{user_name}' and '{bind_dn}' substrings
    of the template with the actual user name and bind DN during the LDAP search.
    scope - scope of the LDAP search.
    Accepted values are: 'base', 'one_level', 'children', 'subtree' (the default).
    search_filter - template used to construct the search filter for the LDAP search.
    The resulting filter will be constructed by replacing all '{user_name}', '{bind_dn}', and '{base_dn}'
    substrings of the template with the actual user name, bind DN, and base DN during the LDAP search.
    Note, that the special characters must be escaped properly in XML.
    verification_cooldown - a period of time, in seconds, after a successful bind attempt, during which a user will be assumed
    to be successfully authenticated for all consecutive requests without contacting the LDAP server.
    Specify 0 (the default) to disable caching and force contacting the LDAP server for each authentication request.
    enable_tls - flag to trigger use of secure connection to the LDAP server.
    Specify 'no' for plain text (ldap://) protocol (not recommended).
    Specify 'yes' for LDAP over SSL/TLS (ldaps://) protocol (recommended, the default).
    Specify 'starttls' for legacy StartTLS protocol (plain text (ldap://) protocol, upgraded to TLS).
    tls_minimum_protocol_version - the minimum protocol version of SSL/TLS.
    Accepted values are: 'ssl2', 'ssl3', 'tls1.0', 'tls1.1', 'tls1.2' (the default).
    tls_require_cert - SSL/TLS peer certificate verification behavior.
    Accepted values are: 'never', 'allow', 'try', 'demand' (the default).
    tls_cert_file - path to certificate file.
    tls_key_file - path to certificate key file.
    tls_ca_cert_file - path to CA certificate file.
    tls_ca_cert_dir - path to the directory containing CA certificates.
    tls_cipher_suite - allowed cipher suite (in OpenSSL notation).
    Example:
    <my_ldap_server>
    <host>localhost</host>
    <port>636</port>
    <bind_dn>uid={user_name},ou=users,dc=example,dc=com</bind_dn>
    <verification_cooldown>300</verification_cooldown>
    <enable_tls>yes</enable_tls>
    <tls_minimum_protocol_version>tls1.2</tls_minimum_protocol_version>
    <tls_require_cert>demand</tls_require_cert>
    <tls_cert_file>/path/to/tls_cert_file</tls_cert_file>
    <tls_key_file>/path/to/tls_key_file</tls_key_file>
    <tls_ca_cert_file>/path/to/tls_ca_cert_file</tls_ca_cert_file>
    <tls_ca_cert_dir>/path/to/tls_ca_cert_dir</tls_ca_cert_dir>
    <tls_cipher_suite>ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:AES256-GCM-SHA384</tls_cipher_suite>
    </my_ldap_server>
    Example (typical Active Directory with configured user DN detection for further role mapping):
    <my_ad_server>
    <host>localhost</host>
    <port>389</port>
    <bind_dn>EXAMPLE\{user_name}</bind_dn>
    <user_dn_detection>
    <base_dn>CN=Users,DC=example,DC=com</base_dn>
    <search_filter>(&amp;(objectClass=user)(sAMAccountName={user_name}))</search_filter>
    </user_dn_detection>
    <enable_tls>no</enable_tls>
    </my_ad_server>
    -->
    </ldap_servers>

    <!-- To enable Kerberos authentication support for HTTP requests (GSS-SPNEGO), for those users who are explicitly configured
    to authenticate via Kerberos, define a single 'kerberos' section here.
    Parameters:
    principal - canonical service principal name, that will be acquired and used when accepting security contexts.
    This parameter is optional, if omitted, the default principal will be used.
    This parameter cannot be specified together with 'realm' parameter.
    realm - a realm, that will be used to restrict authentication to only those requests whose initiator's realm matches it.
    This parameter is optional, if omitted, no additional filtering by realm will be applied.
    This parameter cannot be specified together with 'principal' parameter.
    Example:
    <kerberos />
    Example:
    <kerberos>
    <principal>HTTP/clickhouse.example.com@EXAMPLE.COM</principal>
    </kerberos>
    Example:
    <kerberos>
    <realm>EXAMPLE.COM</realm>
    </kerberos>
    -->

    <!-- Sources to read users, roles, access rights, profiles of settings, quotas. -->
    <user_directories>
    <users_xml>
    <!-- Path to configuration file with predefined users. -->
    <path>users.xml</path>
    </users_xml>
    <local_directory>
    <!-- Path to folder where users created by SQL commands are stored. -->
    <path>/var/lib/clickhouse/access/</path>
    </local_directory>

    <!-- To add an LDAP server as a remote user directory of users that are not defined locally, define a single 'ldap' section
    with the following parameters:
    server - one of LDAP server names defined in 'ldap_servers' config section above.
    This parameter is mandatory and cannot be empty.
    roles - section with a list of locally defined roles that will be assigned to each user retrieved from the LDAP server.
    If no roles are specified here or assigned during role mapping (below), user will not be able to perform any
    actions after authentication.
    role_mapping - section with LDAP search parameters and mapping rules.
    When a user authenticates, while still bound to LDAP, an LDAP search is performed using search_filter and the
    name of the logged in user. For each entry found during that search, the value of the specified attribute is
    extracted. For each attribute value that has the specified prefix, the prefix is removed, and the rest of the
    value becomes the name of a local role defined in ClickHouse, which is expected to be created beforehand by
    CREATE ROLE command.
    There can be multiple 'role_mapping' sections defined inside the same 'ldap' section. All of them will be
    applied.
    base_dn - template used to construct the base DN for the LDAP search.
    The resulting DN will be constructed by replacing all '{user_name}', '{bind_dn}', and '{user_dn}'
    substrings of the template with the actual user name, bind DN, and user DN during each LDAP search.
    scope - scope of the LDAP search.
    Accepted values are: 'base', 'one_level', 'children', 'subtree' (the default).
    search_filter - template used to construct the search filter for the LDAP search.
    The resulting filter will be constructed by replacing all '{user_name}', '{bind_dn}', '{user_dn}', and
    '{base_dn}' substrings of the template with the actual user name, bind DN, user DN, and base DN during
    each LDAP search.
    Note, that the special characters must be escaped properly in XML.
    attribute - attribute name whose values will be returned by the LDAP search. 'cn', by default.
    prefix - prefix, that will be expected to be in front of each string in the original list of strings returned by
    the LDAP search. Prefix will be removed from the original strings and resulting strings will be treated
    as local role names. Empty, by default.
    Example:
    <ldap>
    <server>my_ldap_server</server>
    <roles>
    <my_local_role1 />
    <my_local_role2 />
    </roles>
    <role_mapping>
    <base_dn>ou=groups,dc=example,dc=com</base_dn>
    <scope>subtree</scope>
    <search_filter>(&amp;(objectClass=groupOfNames)(member={bind_dn}))</search_filter>
    <attribute>cn</attribute>
    <prefix>clickhouse_</prefix>
    </role_mapping>
    </ldap>
    Example (typical Active Directory with role mapping that relies on the detected user DN):
    <ldap>
    <server>my_ad_server</server>
    <role_mapping>
    <base_dn>CN=Users,DC=example,DC=com</base_dn>
    <attribute>CN</attribute>
    <scope>subtree</scope>
    <search_filter>(&amp;(objectClass=group)(member={user_dn}))</search_filter>
    <prefix>clickhouse_</prefix>
    </role_mapping>
    </ldap>
    -->
    </user_directories>

    <access_control_improvements>
    <!-- Enables logic that users without permissive row policies can still read rows using a SELECT query.
    For example, if there are two users A and B, and a row policy is defined only for A, then
    if this setting is true the user B will see all rows, and if this setting is false the user B will see no rows.
    By default this setting is false for compatibility with earlier access configurations. -->
    <users_without_row_policies_can_read_rows>false</users_without_row_policies_can_read_rows>

    <!-- By default, for backward compatibility ON CLUSTER queries ignore CLUSTER grant,
    however you can change this behaviour by setting this to true -->
    <on_cluster_queries_require_cluster_grant>false</on_cluster_queries_require_cluster_grant>

    <!-- By default, for backward compatibility "SELECT * FROM system.<table>" doesn't require any grants and can be executed
    by any user. You can change this behaviour by setting this to true.
    If it's set to true then this query requires "GRANT SELECT ON system.<table>" just like as for non-system tables.
    Exceptions: a few system tables ("tables", "columns", "databases", and some constant tables like "one", "contributors")
    are still accessible for everyone; and if there is a SHOW privilege (e.g. "SHOW USERS") granted the corresponding system
    table (i.e. "system.users") will be accessible. -->
    <select_from_system_db_requires_grant>false</select_from_system_db_requires_grant>

    <!-- By default, for backward compatibility "SELECT * FROM information_schema.<table>" doesn't require any grants and can be
    executed by any user. You can change this behaviour by setting this to true.
    If it's set to true then this query requires "GRANT SELECT ON information_schema.<table>" just like as for ordinary tables. -->
    <select_from_information_schema_requires_grant>false</select_from_information_schema_requires_grant>

    <!-- By default, for backward compatibility a settings profile constraint for a specific setting inherit every not set field from
    previous profile. You can change this behaviour by setting this to true.
    If it's set to true then if settings profile has a constraint for a specific setting, then this constraint completely cancels all
    actions of previous constraint (defined in other profiles) for the same specific setting, including fields that are not set by new constraint.
    It also enables 'changeable_in_readonly' constraint type -->
    <settings_constraints_replace_previous>false</settings_constraints_replace_previous>

    <!-- Number of seconds since last access a role is stored in the Role Cache -->
    <role_cache_expiration_time_seconds>600</role_cache_expiration_time_seconds>
    </access_control_improvements>

    <!-- Default profile of settings. -->
    <default_profile>default</default_profile>

    <!-- Comma-separated list of prefixes for user-defined settings.
    The server will allow to set these settings, and retrieve them with the getSetting function.
    They are also logged in the query_log, similarly to other settings, but have no special effect.
    The "SQL_" prefix is introduced for compatibility with MySQL - these settings are being set be Tableau.
    -->
    <custom_settings_prefixes>SQL_</custom_settings_prefixes>

    <!-- System profile of settings. This settings are used by internal processes (Distributed DDL worker and so on). -->
    <!-- <system_profile>default</system_profile> -->

    <!-- Buffer profile of settings.
    This settings are used by Buffer storage to flush data to the underlying table.
    Default: used from system_profile directive.
    -->
    <!-- <buffer_profile>default</buffer_profile> -->

    <!-- Default database. -->
    <default_database>default</default_database>

    <!-- Server time zone could be set here.

    Time zone is used when converting between String and DateTime types,
    when printing DateTime in text formats and parsing DateTime from text,
    it is used in date and time related functions, if specific time zone was not passed as an argument.

    Time zone is specified as identifier from IANA time zone database, like UTC or Africa/Abidjan.
    If not specified, system time zone at server startup is used.

    Please note, that server could display time zone alias instead of specified name.
    Example: Zulu is an alias for UTC.
    -->
    <!-- <timezone>UTC</timezone> -->

    <!-- You can specify umask here (see "man umask"). Server will apply it on startup.
    Number is always parsed as octal. Default umask is 027 (other users cannot read logs, data files, etc; group can only read).
    -->
    <!-- <umask>022</umask> -->

    <!-- Perform mlockall after startup to lower first queries latency
    and to prevent clickhouse executable from being paged out under high IO load.
    Enabling this option is recommended but will lead to increased startup time for up to a few seconds.
    -->
    <mlock_executable>true</mlock_executable>

    <!-- Reallocate memory for machine code ("text") using huge pages. Highly experimental. -->
    <remap_executable>false</remap_executable>

    <![CDATA[
    Uncomment below in order to use JDBC table engine and function.

    To install and run JDBC bridge in background:
    * [Debian/Ubuntu]
    export MVN_URL=https://repo1.maven.org/maven2/com/clickhouse/clickhouse-jdbc-bridge/
    export PKG_VER=$(curl -sL $MVN_URL/maven-metadata.xml | grep '<release>' | sed -e 's|.*>\(.*\)<.*|\1|')
    wget https://github.com/ClickHouse/clickhouse-jdbc-bridge/releases/download/v$PKG_VER/clickhouse-jdbc-bridge_$PKG_VER-1_all.deb
    apt install --no-install-recommends -f ./clickhouse-jdbc-bridge_$PKG_VER-1_all.deb
    clickhouse-jdbc-bridge &

    * [CentOS/RHEL]
    export MVN_URL=https://repo1.maven.org/maven2/com/clickhouse/clickhouse-jdbc-bridge/
    export PKG_VER=$(curl -sL $MVN_URL/maven-metadata.xml | grep '<release>' | sed -e 's|.*>\(.*\)<.*|\1|')
    wget https://github.com/ClickHouse/clickhouse-jdbc-bridge/releases/download/v$PKG_VER/clickhouse-jdbc-bridge-$PKG_VER-1.noarch.rpm
    yum localinstall -y clickhouse-jdbc-bridge-$PKG_VER-1.noarch.rpm
    clickhouse-jdbc-bridge &

    Please refer to https://github.com/ClickHouse/clickhouse-jdbc-bridge#usage for more information.
    ]]>
    <!--
    <jdbc_bridge>
    <host>127.0.0.1</host>
    <port>9019</port>
    </jdbc_bridge>
    -->

    <!-- Configuration of clusters that could be used in Distributed tables.
    https://clickhouse.com/docs/en/operations/table_engines/distributed/
    -->
    <remote_servers>
    <!-- Test only shard config for testing distributed storage -->
    <test_shard_localhost>
    <!-- Inter-server per-cluster secret for Distributed queries
    default: no secret (no authentication will be performed)

    If set, then Distributed queries will be validated on shards, so at least:
    - such cluster should exist on the shard,
    - such cluster should have the same secret.

    And also (and which is more important), the initial_user will
    be used as current user for the query.

    Right now the protocol is pretty simple and it only takes into account:
    - cluster name
    - query

    Also it will be nice if the following will be implemented:
    - source hostname (see interserver_http_host), but then it will depends from DNS,
    it can use the IP address instead, but then you need to get it correct on the initiator node.
    - target hostname / ip address (same notes as for source hostname)
    - time-based security tokens
    -->
    <!-- <secret></secret> -->

    <shard>
    <!-- Optional. Whether to write data to just one of the replicas. Default: false (write data to all replicas). -->
    <!-- <internal_replication>false</internal_replication> -->
    <!-- Optional. Shard weight when writing data. Default: 1. -->
    <!-- <weight>1</weight> -->
    <replica>
    <host>localhost</host>
    <port>9000</port>
    <!-- Optional. Priority of the replica for load_balancing. Default: 1 (less value has more priority). -->
    <!-- <priority>1</priority> -->
    </replica>
    </shard>
    </test_shard_localhost>
    <test_cluster_one_shard_three_replicas_localhost>
    <shard>
    <internal_replication>false</internal_replication>
    <replica>
    <host>127.0.0.1</host>
    <port>9000</port>
    </replica>
    <replica>
    <host>127.0.0.2</host>
    <port>9000</port>
    </replica>
    <replica>
    <host>127.0.0.3</host>
    <port>9000</port>
    </replica>
    </shard>
    <!--shard>
    <internal_replication>false</internal_replication>
    <replica>
    <host>127.0.0.1</host>
    <port>9000</port>
    </replica>
    <replica>
    <host>127.0.0.2</host>
    <port>9000</port>
    </replica>
    <replica>
    <host>127.0.0.3</host>
    <port>9000</port>
    </replica>
    </shard-->
    </test_cluster_one_shard_three_replicas_localhost>
    <parallel_replicas>
    <shard>
    <internal_replication>false</internal_replication>
    <replica>
    <host>127.0.0.1</host>
    <port>9000</port>
    </replica>
    <replica>
    <host>127.0.0.2</host>
    <port>9000</port>
    </replica>
    <replica>
    <host>127.0.0.3</host>
    <port>9000</port>
    </replica>
    <replica>
    <host>127.0.0.4</host>
    <port>9000</port>
    </replica>
    <replica>
    <host>127.0.0.5</host>
    <port>9000</port>
    </replica>
    <replica>
    <host>127.0.0.6</host>
    <port>9000</port>
    </replica>
    <replica>
    <host>127.0.0.7</host>
    <port>9000</port>
    </replica>
    <replica>
    <host>127.0.0.8</host>
    <port>9000</port>
    </replica>
    <replica>
    <host>127.0.0.9</host>
    <port>9000</port>
    </replica>
    <replica>
    <host>127.0.0.10</host>
    <port>9000</port>
    </replica>
    <!-- Unavailable replica -->
    <replica>
    <host>127.0.0.11</host>
    <port>1234</port>
    </replica>
    </shard>
    </parallel_replicas>
    <test_cluster_two_shards_localhost>
    <shard>
    <replica>
    <host>localhost</host>
    <port>9000</port>
    </replica>
    </shard>
    <shard>
    <replica>
    <host>localhost</host>
    <port>9000</port>
    </replica>
    </shard>
    </test_cluster_two_shards_localhost>
    <test_cluster_two_shards>
    <shard>
    <replica>
    <host>127.0.0.1</host>
    <port>9000</port>
    </replica>
    </shard>
    <shard>
    <replica>
    <host>127.0.0.2</host>
    <port>9000</port>
    </replica>
    </shard>
    </test_cluster_two_shards>
    <test_cluster_two_shards_internal_replication>
    <shard>
    <internal_replication>true</internal_replication>
    <replica>
    <host>127.0.0.1</host>
    <port>9000</port>
    </replica>
    </shard>
    <shard>
    <internal_replication>true</internal_replication>
    <replica>
    <host>127.0.0.2</host>
    <port>9000</port>
    </replica>
    </shard>
    </test_cluster_two_shards_internal_replication>
    <test_shard_localhost_secure>
    <shard>
    <replica>
    <host>localhost</host>
    <port>9440</port>
    <secure>1</secure>
    </replica>
    </shard>
    </test_shard_localhost_secure>
    <test_unavailable_shard>
    <shard>
    <replica>
    <host>localhost</host>
    <port>9000</port>
    </replica>
    </shard>
    <shard>
    <replica>
    <host>localhost</host>
    <port>1</port>
    </replica>
    </shard>
    </test_unavailable_shard>
    </remote_servers>

    <!-- The list of hosts allowed for use in URL-related storage engines and table functions.
    If this section is not present in the configuration, all hosts are allowed.
    -->
    <!--<remote_url_allow_hosts>-->
    <!-- Host should be specified exactly as in URL. The name is checked before DNS resolution.
    Example: "clickhouse.com", "clickhouse.com." and "www.clickhouse.com" are different hosts.
    If a port is explicitly specified in the URL, the host:port pair is checked as a whole.
    If a host is specified here without a port, any port for that host is allowed:
    "clickhouse.com" allows "clickhouse.com:443", "clickhouse.com:80" etc., but "clickhouse.com:80" allows only "clickhouse.com:80".
    If the host is specified as IP address, it is checked as specified in URL. Example: "[2a02:6b8:a::a]".
    If there are redirects and support for redirects is enabled, every redirect (the Location field) is checked.
    Host should be specified using the host xml tag:
    <host>clickhouse.com</host>
    -->

    <!-- Regular expression can be specified. RE2 engine is used for regexps.
    Regexps are not anchored automatically: don't forget to add ^ and $. Also don't forget to escape the dot (.) metacharacter
    (forgetting to do so is a common source of errors).
    -->
    <!--</remote_url_allow_hosts>-->

    <!-- If an element has an 'incl' attribute, the corresponding substitution from another file will be used as its value.
    By default, the path to the file with substitutions is /etc/metrika.xml. It can be changed via the 'include_from' element in the config.
    Values for substitutions are specified in /clickhouse/name_of_substitution elements in that file.
    -->

    <!-- ZooKeeper is used to store metadata about replicas, when using Replicated tables.
    Optional. If you don't use replicated tables, you could omit that.

    See https://clickhouse.com/docs/en/engines/table-engines/mergetree-family/replication/
    -->

    <!--
    <zookeeper>
    <node>
    <host>example1</host>
    <port>2181</port>
    </node>
    <node>
    <host>example2</host>
    <port>2181</port>
    </node>
    <node>
    <host>example3</host>
    <port>2181</port>
    </node>
    </zookeeper>
    -->

    <!-- Substitutions for parameters of replicated tables.
    Optional. If you don't use replicated tables, you could omit that.

    See https://clickhouse.com/docs/en/engines/table-engines/mergetree-family/replication/#creating-replicated-tables
    -->
    <!--
    <macros>
    <shard>01</shard>
    <replica>example01-01-1</replica>
    </macros>
    -->


    <!-- Reloading interval for embedded dictionaries, in seconds. Default: 3600. -->
    <builtin_dictionaries_reload_interval>3600</builtin_dictionaries_reload_interval>


    <!-- Maximum session timeout, in seconds. Default: 3600. -->
    <max_session_timeout>3600</max_session_timeout>

    <!-- Default session timeout, in seconds. Default: 60. -->
    <default_session_timeout>60</default_session_timeout>

    <!-- Sending data to Graphite for monitoring. Several sections can be defined. -->
    <!--
    interval - send every X seconds
    root_path - prefix for keys
    hostname_in_path - append hostname to root_path (default = true)
    metrics - send data from table system.metrics
    events - send data from table system.events
    asynchronous_metrics - send data from table system.asynchronous_metrics
    -->
    <!--
    <graphite>
    <host>localhost</host>
    <port>42000</port>
    <timeout>0.1</timeout>
    <interval>60</interval>
    <root_path>one_min</root_path>
    <hostname_in_path>true</hostname_in_path>

    <metrics>true</metrics>
    <events>true</events>
    <events_cumulative>false</events_cumulative>
    <asynchronous_metrics>true</asynchronous_metrics>
    </graphite>
    <graphite>
    <host>localhost</host>
    <port>42000</port>
    <timeout>0.1</timeout>
    <interval>1</interval>
    <root_path>one_sec</root_path>

    <metrics>true</metrics>
    <events>true</events>
    <events_cumulative>false</events_cumulative>
    <asynchronous_metrics>false</asynchronous_metrics>
    </graphite>
    -->

    <!-- Serve endpoint for Prometheus monitoring. -->
    <!--
    endpoint - metrics path (relative to root, starting with "/")
    port - port to set up the server on. If not defined or 0, http_port is used
    metrics - send data from table system.metrics
    events - send data from table system.events
    asynchronous_metrics - send data from table system.asynchronous_metrics
    status_info - send data from various ClickHouse components, e.g. dictionary status
    -->
    <!--
    <prometheus>
    <endpoint>/metrics</endpoint>
    <port>9363</port>

    <metrics>true</metrics>
    <events>true</events>
    <asynchronous_metrics>true</asynchronous_metrics>
    <status_info>true</status_info>
    </prometheus>
    -->

    <!-- Query log. Used only for queries with setting log_queries = 1. -->
    <query_log>
    <!-- Which table to insert data into. If the table does not exist, it will be created.
    When the query log structure changes after a system update,
    the old table will be renamed and a new table created automatically.
    -->
    <database>system</database>
    <table>query_log</table>
    <!--
    PARTITION BY expr: https://clickhouse.com/docs/en/table_engines/mergetree-family/custom_partitioning_key/
    Example:
    event_date
    toMonday(event_date)
    toYYYYMM(event_date)
    toStartOfHour(event_time)
    -->
    <partition_by>toYYYYMM(event_date)</partition_by>
    <!--
    Table TTL specification: https://clickhouse.com/docs/en/engines/table-engines/mergetree-family/mergetree/#mergetree-table-ttl
    Example:
    event_date + INTERVAL 1 WEEK
    event_date + INTERVAL 7 DAY DELETE
    event_date + INTERVAL 2 WEEK TO DISK 'bbb'

    <ttl>event_date + INTERVAL 30 DAY DELETE</ttl>
    -->

    <!--
    ORDER BY expr: https://clickhouse.com/docs/en/engines/table-engines/mergetree-family/mergetree#order_by
    Example:
    event_date, event_time
    event_date, type, query_id
    event_date, event_time, initial_query_id

    <order_by>event_date, event_time, initial_query_id</order_by>
    -->

    <!-- Instead of partition_by, you can provide full engine expression (starting with ENGINE = ) with parameters,
    Example: <engine>ENGINE = MergeTree PARTITION BY toYYYYMM(event_date) ORDER BY (event_date, event_time) SETTINGS index_granularity = 1024</engine>
    -->

    <!-- Interval of flushing data. -->
    <flush_interval_milliseconds>7500</flush_interval_milliseconds>

    <!-- example of using a different storage policy for a system table -->
    <!-- storage_policy>local_ssd</storage_policy -->
    </query_log>

    <!-- Trace log. Stores stack traces collected by query profilers.
    See query_profiler_real_time_period_ns and query_profiler_cpu_time_period_ns settings. -->
    <trace_log>
    <database>system</database>
    <table>trace_log</table>

    <partition_by>toYYYYMM(event_date)</partition_by>
    <flush_interval_milliseconds>7500</flush_interval_milliseconds>
    </trace_log>

    <!-- Query thread log. Has information about all threads that participated in query execution.
    Used only for queries with setting log_query_threads = 1. -->
    <query_thread_log>
    <database>system</database>
    <table>query_thread_log</table>
    <partition_by>toYYYYMM(event_date)</partition_by>
    <flush_interval_milliseconds>7500</flush_interval_milliseconds>
    </query_thread_log>

    <!-- Query views log. Has information about all dependent views associated with a query.
    Used only for queries with setting log_query_views = 1. -->
    <query_views_log>
    <database>system</database>
    <table>query_views_log</table>
    <partition_by>toYYYYMM(event_date)</partition_by>
    <flush_interval_milliseconds>7500</flush_interval_milliseconds>
    </query_views_log>

    <!-- Uncomment if use part log.
    Part log contains information about all actions with parts in MergeTree tables (creation, deletion, merges, downloads).-->
    <part_log>
    <database>system</database>
    <table>part_log</table>
    <partition_by>toYYYYMM(event_date)</partition_by>
    <flush_interval_milliseconds>7500</flush_interval_milliseconds>
    </part_log>

    <!-- Uncomment to write text log into table.
    Text log contains all the information from the usual server log but stores it in a structured and efficient way.
    The level of messages that go to the table can be limited (<level>); if not specified, all messages will go to the table.
    <text_log>
    <database>system</database>
    <table>text_log</table>
    <flush_interval_milliseconds>7500</flush_interval_milliseconds>
    <level></level>
    </text_log>
    -->

    <!-- Metric log contains rows with current values of ProfileEvents, CurrentMetrics collected with "collect_interval_milliseconds" interval. -->
    <metric_log>
    <database>system</database>
    <table>metric_log</table>
    <flush_interval_milliseconds>7500</flush_interval_milliseconds>
    <collect_interval_milliseconds>1000</collect_interval_milliseconds>
    </metric_log>

    <!--
    Asynchronous metric log contains values of metrics from
    system.asynchronous_metrics.
    -->
    <asynchronous_metric_log>
    <database>system</database>
    <table>asynchronous_metric_log</table>
    <flush_interval_milliseconds>7000</flush_interval_milliseconds>
    </asynchronous_metric_log>

    <!--
    OpenTelemetry log contains OpenTelemetry trace spans.
    -->
    <opentelemetry_span_log>
    <!--
    The default table creation code is insufficient, this <engine> spec
    is a workaround. There is no 'event_time' for this log, but two times,
    start and finish. It is sorted by finish time, to avoid inserting
    data too far away in the past (probably we can sometimes insert a span
    that is seconds earlier than the last span in the table, due to a race
    between several spans inserted in parallel). This gives the spans a
    global order that we can use to e.g. retry insertion into some external
    system.
    -->
    <engine>
    engine MergeTree
    partition by toYYYYMM(finish_date)
    order by (finish_date, finish_time_us, trace_id)
    </engine>
    <database>system</database>
    <table>opentelemetry_span_log</table>
    <flush_interval_milliseconds>7500</flush_interval_milliseconds>
    </opentelemetry_span_log>


    <!-- Crash log. Stores stack traces for fatal errors.
    This table is normally empty. -->
    <crash_log>
    <database>system</database>
    <table>crash_log</table>

    <partition_by />
    <flush_interval_milliseconds>1000</flush_interval_milliseconds>
    </crash_log>

    <!-- Session log. Stores user log in (successful or not) and log out events.

    Note: session log has known security issues and should not be used in production.
    -->
    <!-- <session_log>
    <database>system</database>
    <table>session_log</table>

    <partition_by>toYYYYMM(event_date)</partition_by>
    <flush_interval_milliseconds>7500</flush_interval_milliseconds>
    </session_log> -->

    <!-- Profiling on Processors level. -->
    <processors_profile_log>
    <database>system</database>
    <table>processors_profile_log</table>

    <partition_by>toYYYYMM(event_date)</partition_by>
    <flush_interval_milliseconds>7500</flush_interval_milliseconds>
    </processors_profile_log>

    <!-- Log of asynchronous inserts. It allows checking the status
    of insert queries in fire-and-forget mode.
    -->
    <asynchronous_insert_log>
    <database>system</database>
    <table>asynchronous_insert_log</table>

    <flush_interval_milliseconds>7500</flush_interval_milliseconds>
    <partition_by>event_date</partition_by>
    <ttl>event_date + INTERVAL 3 DAY</ttl>
    </asynchronous_insert_log>

    <!-- <top_level_domains_path>/var/lib/clickhouse/top_level_domains/</top_level_domains_path> -->
    <!-- Custom TLD lists.
    Format: <name>/path/to/file</name>

    Changes will not be applied w/o server restart.
    Path to the list is under top_level_domains_path (see above).
    -->
    <top_level_domains_lists>
    <!--
    <public_suffix_list>/path/to/public_suffix_list.dat</public_suffix_list>
    -->
    </top_level_domains_lists>

    <!-- Configuration of external dictionaries. See:
    https://clickhouse.com/docs/en/sql-reference/dictionaries/external-dictionaries/external-dicts
    -->
    <dictionaries_config>*_dictionary.*ml</dictionaries_config>

    <!-- Configuration of user defined executable functions -->
    <user_defined_executable_functions_config>*_function.*ml</user_defined_executable_functions_config>

    <!-- Path in ZooKeeper to store user-defined SQL functions created by the command CREATE FUNCTION.
    If not specified they will be stored locally. -->
    <!-- <user_defined_zookeeper_path>/clickhouse/user_defined</user_defined_zookeeper_path> -->

    <!-- Uncomment if you want data to be compressed 30-100% better.
    Don't do that if you just started using ClickHouse.
    -->
    <!--
    <compression>
    <!- - Set of variants. Checked in order. Last matching case wins. If nothing matches, lz4 will be used. - ->
    <case>

    <!- - Conditions. All must be satisfied. Some conditions may be omitted. - ->
    <min_part_size>10000000000</min_part_size> <!- - Min part size in bytes. - ->
    <min_part_size_ratio>0.01</min_part_size_ratio> <!- - Min size of part relative to whole table size. - ->

    <!- - What compression method to use. - ->
    <method>zstd</method>
    </case>
    </compression>
    -->

    <!-- Configuration of encryption. The server executes a command to
    obtain an encryption key at startup if such a command is
    defined, or encryption codecs will be disabled otherwise. The
    command is executed through /bin/sh and is expected to write
    a Base64-encoded key to stdout. -->
    <encryption_codecs>
    <!-- aes_128_gcm_siv -->
    <!-- Example of getting hex key from env -->
    <!-- the code should use this key and throw an exception if its length is not 16 bytes -->
    <!--key_hex from_env="..."></key_hex -->

    <!-- Example of multiple hex keys. They can be imported from env or be written down in config-->
    <!-- the code should use these keys and throw an exception if their length is not 16 bytes -->
    <!-- key_hex id="0">...</key_hex -->
    <!-- key_hex id="1" from_env=".."></key_hex -->
    <!-- key_hex id="2">...</key_hex -->
    <!-- current_key_id>2</current_key_id -->

    <!-- Example of getting hex key from config -->
    <!-- the code should use this key and throw an exception if its length is not 16 bytes -->
    <!-- key>...</key -->

    <!-- example of adding nonce -->
    <!-- nonce>...</nonce -->

    <!-- /aes_128_gcm_siv -->
    </encryption_codecs>

    <!-- Allow to execute distributed DDL queries (CREATE, DROP, ALTER, RENAME) on cluster.
    Works only if ZooKeeper is enabled. Comment it if such functionality isn't required. -->
    <distributed_ddl>
    <!-- Path in ZooKeeper to queue with DDL queries -->
    <path>/clickhouse/task_queue/ddl</path>

    <!-- Settings from this profile will be used to execute DDL queries -->
    <!-- <profile>default</profile> -->

    <!-- Controls how many ON CLUSTER queries can run simultaneously. -->
    <!-- <pool_size>1</pool_size> -->

    <!--
    Cleanup settings (active tasks will not be removed)
    -->

    <!-- Controls task TTL (default 1 week) -->
    <!-- <task_max_lifetime>604800</task_max_lifetime> -->

    <!-- Controls how often cleanup should be performed (in seconds) -->
    <!-- <cleanup_delay_period>60</cleanup_delay_period> -->

    <!-- Controls how many tasks could be in the queue -->
    <!-- <max_tasks_in_queue>1000</max_tasks_in_queue> -->
    </distributed_ddl>

    <!-- Settings to fine tune MergeTree tables. See documentation in source code, in MergeTreeSettings.h -->
    <!--
    <merge_tree>
    <max_suspicious_broken_parts>5</max_suspicious_broken_parts>
    </merge_tree>
    -->

    <!-- Protection from accidental DROP.
    If the size of a MergeTree table is greater than max_table_size_to_drop (in bytes), the table cannot be dropped with any DROP query.
    If you want to delete one table and don't want to change the clickhouse-server config, you can create the special file <clickhouse-path>/flags/force_drop_table and run DROP once.
    By default max_table_size_to_drop is 50GB; max_table_size_to_drop=0 allows dropping any table.
    The same applies to max_partition_size_to_drop.
    Uncomment to disable the protection.
    -->
    <!-- <max_table_size_to_drop>0</max_table_size_to_drop> -->
    <!-- <max_partition_size_to_drop>0</max_partition_size_to_drop> -->

    <!-- Example of parameters for GraphiteMergeTree table engine -->
    <graphite_rollup_example>
    <pattern>
    <regexp>click_cost</regexp>
    <function>any</function>
    <retention>
    <age>0</age>
    <precision>3600</precision>
    </retention>
    <retention>
    <age>86400</age>
    <precision>60</precision>
    </retention>
    </pattern>
    <default>
    <function>max</function>
    <retention>
    <age>0</age>
    <precision>60</precision>
    </retention>
    <retention>
    <age>3600</age>
    <precision>300</precision>
    </retention>
    <retention>
    <age>86400</age>
    <precision>3600</precision>
    </retention>
    </default>
    </graphite_rollup_example>

    <!-- Directory in <clickhouse-path> containing schema files for various input formats.
    The directory will be created if it doesn't exist.
    -->
    <format_schema_path>/var/lib/clickhouse/format_schemas/</format_schema_path>

    <!-- Default query masking rules, matching lines would be replaced with something else in the logs
    (both text logs and system.query_log).
    name - name for the rule (optional)
    regexp - RE2 compatible regular expression (mandatory)
    replace - substitution string for sensitive data (optional, by default - six asterisks)
    <query_masking_rules>
    <rule>
    <name>hide encrypt/decrypt arguments</name>
    <regexp>((?:aes_)?(?:encrypt|decrypt)(?:_mysql)?)\s*\(\s*(?:'(?:\\'|.)+'|.*?)\s*\)</regexp>
    <replace>\1(???)</replace>
    </rule>
    </query_masking_rules> -->

    <!-- Uncomment to use custom http handlers.
    rules are checked from top to bottom, first match runs the handler
    url - to match request URL, you can use the 'regex:' prefix for a regex match (optional)
    methods - to match request method, you can use commas to separate multiple method matches (optional)
    headers - to match request headers, match each child element (the child element name is the header name), you can use the 'regex:' prefix for a regex match (optional)
    handler is request handler
    type - supported types: static, dynamic_query_handler, predefined_query_handler
    query - use with predefined_query_handler type, executes query when the handler is called
    query_param_name - use with dynamic_query_handler type, extracts and executes the value corresponding to the <query_param_name> value in HTTP request params
    status - use with static type, response status code
    content_type - use with static type, response content-type
    response_content - use with static type; the response content sent to the client. With the 'file://' or 'config://' prefix, the content is read from a file or from the configuration and sent to the client.

    <http_handlers>
    <rule>
    <url>/</url>
    <methods>POST,GET</methods>
    <headers><pragma>no-cache</pragma></headers>
    <handler>
    <type>dynamic_query_handler</type>
    <query_param_name>query</query_param_name>
    </handler>
    </rule>

    <rule>
    <url>/predefined_query</url>
    <methods>POST,GET</methods>
    <handler>
    <type>predefined_query_handler</type>
    <query>SELECT * FROM system.settings</query>
    </handler>
    </rule>

    <rule>
    <handler>
    <type>static</type>
    <status>200</status>
    <content_type>text/plain; charset=UTF-8</content_type>
    <response_content>config://http_server_default_response</response_content>
    </handler>
    </rule>
    </http_handlers>
    -->

    <send_crash_reports>
    <!-- Changing <enabled> to true allows sending crash reports to -->
    <!-- the ClickHouse core developers team via Sentry https://sentry.io -->
    <!-- Doing so at least in pre-production environments is highly appreciated -->
    <enabled>false</enabled>
    <!-- Change <anonymize> to true if you don't feel comfortable attaching the server hostname to the crash report -->
    <anonymize>false</anonymize>
    <!-- Default endpoint should be changed to different Sentry DSN only if you have -->
    <!-- some in-house engineers or hired consultants who're going to debug ClickHouse issues for you -->
    <endpoint>https://6f33034cfe684dd7a3ab9875e57b1c8d@o388870.ingest.sentry.io/5226277</endpoint>
    </send_crash_reports>

    <!-- Uncomment to disable ClickHouse internal DNS caching. -->
    <!-- <disable_internal_dns_cache>1</disable_internal_dns_cache> -->

    <!-- You can also configure rocksdb like this: -->
    <!--
    <rocksdb>
    <options>
    <max_background_jobs>8</max_background_jobs>
    </options>
    <column_family_options>
    <num_levels>2</num_levels>
    </column_family_options>
    <tables>
    <table>
    <name>TABLE</name>
    <options>
    <max_background_jobs>8</max_background_jobs>
    </options>
    <column_family_options>
    <num_levels>2</num_levels>
    </column_family_options>
    </table>
    </tables>
    </rocksdb>
    -->

    <!-- Configuration for the query cache -->
    <!-- <query_cache> -->
    <!-- <max_size_in_bytes>1073741824</max_size_in_bytes> -->
    <!-- <max_entries>1024</max_entries> -->
    <!-- <max_entry_size_in_bytes>1048576</max_entry_size_in_bytes> -->
    <!-- <max_entry_size_in_rows>30000000</max_entry_size_in_rows> -->
    <!-- </query_cache> -->

    <!-- Uncomment to enable the merge tree metadata cache -->
    <!--merge_tree_metadata_cache>
    <lru_cache_size>268435456</lru_cache_size>
    <continue_if_corrupted>true</continue_if_corrupted>
    </merge_tree_metadata_cache-->

    <!-- This allows disabling the exposure of addresses in stack traces for security reasons.
    Please be aware that it does not improve security much, but it makes debugging much harder.
    Addresses that are small offsets from zero will be displayed nevertheless, to show nullptr dereferences.
    Regardless of this configuration, the addresses are visible in the system.stack_trace and system.trace_log tables
    if the user has access to these tables.
    Changing this setting is not recommended.
    <show_addresses_in_stack_traces>false</show_addresses_in_stack_traces>
    -->

    <!-- On Linux systems this can control the behavior of OOM killer.
    <oom_score>-1000</oom_score>
    -->
    </clickhouse>
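
The config.xml above is the image's built-in default. If you later want to customize any of the settings shown in it (for example remote_servers or the query_log section), one common approach is to copy the defaults out of the running container, edit them, and mount them back as volumes. A minimal sketch, assuming the container name clickhouse from the docker-compose.yml above:

    # Copy the default configuration files out of the running container.
    docker cp clickhouse:/etc/clickhouse-server/config.xml ./config.xml
    docker cp clickhouse:/etc/clickhouse-server/users.xml ./users.xml

    # Edit them locally, then mount them back by adding volumes to the
    # clickhouse service in docker-compose.yml, e.g.:
    #   volumes:
    #     - ./config.xml:/etc/clickhouse-server/config.xml
    #     - ./users.xml:/etc/clickhouse-server/users.xml

    # Recreate the container so the mounted files take effect.
    docker-compose up -d --force-recreate clickhouse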

  • /etc/clickhouse-server/users.xml

    • This file defines the users and permissions for the server; here, too, the built-in defaults are used.

      Note: see the official documentation User Settings.

  • 【users.xml】<click to expand/collapse>
    <clickhouse>
    <!-- See also the files in users.d directory where the settings can be overridden. -->

    <!-- Profiles of settings. -->
    <profiles>
    <!-- Default settings. -->
    <default>
    </default>

    <!-- Profile that allows only read queries. -->
    <readonly>
    <readonly>1</readonly>
    </readonly>
    </profiles>

    <!-- Users and ACL. -->
    <users>
    <!-- If user name was not specified, 'default' user is used. -->
    <default>
    <!-- See also the files in users.d directory where the password can be overridden.

    Password could be specified in plaintext or in SHA256 (in hex format).

    If you want to specify password in plaintext (not recommended), place it in 'password' element.
    Example: <password>qwerty</password>.
    Password could be empty.

    If you want to specify SHA256, place it in 'password_sha256_hex' element.
    Example: <password_sha256_hex>65e84be33532fb784c48129675f9eff3a682b27168c0ea744b2cf58ee02337c5</password_sha256_hex>
    Restriction of SHA256: it is impossible to connect to ClickHouse using the MySQL JS client (as of July 2019).

    If you want to specify double SHA1, place it in 'password_double_sha1_hex' element.
    Example: <password_double_sha1_hex>e395796d6546b1b65db9d665cd43f0e858dd4303</password_double_sha1_hex>

    If you want to specify a previously defined LDAP server (see 'ldap_servers' in the main config) for authentication,
    place its name in 'server' element inside 'ldap' element.
    Example: <ldap><server>my_ldap_server</server></ldap>

    If you want to authenticate the user via Kerberos (assuming Kerberos is enabled, see 'kerberos' in the main config),
    place 'kerberos' element instead of 'password' (and similar) elements.
    The name part of the canonical principal name of the initiator must match the user name for authentication to succeed.
    You can also place 'realm' element inside 'kerberos' element to further restrict authentication to only those requests
    whose initiator's realm matches it.
    Example: <kerberos />
    Example: <kerberos><realm>EXAMPLE.COM</realm></kerberos>

    How to generate a decent password:
    Execute: PASSWORD=$(base64 < /dev/urandom | head -c8); echo "$PASSWORD"; echo -n "$PASSWORD" | sha256sum | tr -d '-'
    The first line is the password and the second line is the corresponding SHA256.

    How to generate double SHA1:
    Execute: PASSWORD=$(base64 < /dev/urandom | head -c8); echo "$PASSWORD"; echo -n "$PASSWORD" | sha1sum | tr -d '-' | xxd -r -p | sha1sum | tr -d '-'
    The first line is the password and the second line is the corresponding double SHA1.
    -->
    <password></password>

    <!-- List of networks with open access.

    To open access from everywhere, specify:
    <ip>::/0</ip>

    To open access only from localhost, specify:
    <ip>::1</ip>
    <ip>127.0.0.1</ip>

    Each element of list has one of the following forms:
    <ip> IP-address or network mask. Examples: 213.180.204.3 or 10.0.0.1/8 or 10.0.0.1/255.255.255.0
    2a02:6b8::3 or 2a02:6b8::3/64 or 2a02:6b8::3/ffff:ffff:ffff:ffff::.
    <host> Hostname. Example: server01.clickhouse.com.
    To check access, DNS query is performed, and all received addresses compared to peer address.
    <host_regexp> Regular expression for host names. Example, ^server\d\d-\d\d-\d\.clickhouse\.com$
    To check access, DNS PTR query is performed for peer address and then regexp is applied.
    Then, for result of PTR query, another DNS query is performed and all received addresses compared to peer address.
    It is strongly recommended that the regexp ends with $.
    All results of DNS requests are cached till server restart.
    -->
    <networks>
    <ip>::/0</ip>
    </networks>

    <!-- Settings profile for user. -->
    <profile>default</profile>

    <!-- Quota for user. -->
    <quota>default</quota>

    <!-- User can create other users and grant rights to them. -->
    <!-- <access_management>1</access_management> -->
    </default>
    </users>

    <!-- Quotas. -->
    <quotas>
    <!-- Name of quota. -->
    <default>
    <!-- Limits for time interval. You could specify many intervals with different limits. -->
    <interval>
    <!-- Length of interval. -->
    <duration>3600</duration>

    <!-- No limits. Just calculate resource usage for time interval. -->
    <queries>0</queries>
    <errors>0</errors>
    <result_rows>0</result_rows>
    <read_rows>0</read_rows>
    <execution_time>0</execution_time>
    </interval>
    </default>
    </quotas>
    </clickhouse>
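
With this default users.xml, the default user has an empty password and is reachable from any network (::/0). Before exposing the server anywhere non-local, you will probably want to set a real password. A minimal sketch following the recipe quoted in the comments above:

    # Generate a random password and its SHA256 hex digest,
    # as described in the users.xml comments.
    PASSWORD=$(base64 < /dev/urandom | head -c8)
    echo "$PASSWORD"
    echo -n "$PASSWORD" | sha256sum | tr -d '-'

    # Put the digest into a <password_sha256_hex> element in place of the
    # empty <password></password> element (or override it via users.d).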

Run & Test

  • Here we test by opening the web client in a browser.

    http://127.0.0.1:8080
  • Connection settings

    Note: you must actually type the address into the [http://host:port *] field; even though the field looks filled in, that text is only a placeholder.
    Note: the default user is default and the password is empty, as defined in the users.xml configuration above.
    Note: if you cannot connect, try checking or unchecking [HTTP Base auth]. (A quick connectivity check is sketched below this list.)
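
    If the web client will not connect, it helps to first confirm that ClickHouse itself is reachable over HTTP. A quick check against port 8123 published in docker-compose.yml (default user, empty password):

        # Liveness check against the HTTP interface.
        curl http://127.0.0.1:8123/ping
        # Expected response: Ok.

        # Run a query over HTTP as the default user (empty password).
        curl 'http://127.0.0.1:8123/?user=default&password=' \
             --data-binary 'SELECT version()'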

  • Sample statements:

    Note: the statement terminator is ;, but the web client errors out when you execute several statements at once; use ;; instead. This appears to be a quirk of the tabix web client (a clickhouse-client alternative that accepts the normal ; is sketched after the examples).

    -- Create the database
    CREATE DATABASE test;;

    -- Create the table
    CREATE TABLE IF NOT EXISTS test.sample_table (
    id UInt64,
    name String,
    age UInt32
    ) ENGINE = MergeTree()
    ORDER BY id;;

    -- Insert sample data
    INSERT INTO test.sample_table (id, name, age)
    VALUES
    (1, 'John', 30),
    (2, 'Alice', 25),
    (3, 'Bob', 35);;

    -- Query the data
    SELECT * FROM test.sample_table;;
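
    The ;; terminator is only a workaround for the tabix web client. The same statements can be run with the clickhouse-client bundled in the image, which accepts the normal single ;. A sketch using the container from the compose file:

        # Open an interactive SQL session inside the running container.
        docker exec -it clickhouse clickhouse-client

        # Or run a single statement non-interactively.
        docker exec -i clickhouse clickhouse-client \
            --query "SELECT * FROM test.sample_table"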

ClickHouse GUI Tools

  • 【Free】dbeaver
    • This tool supports many other databases as well; I personally quite recommend it.
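    Note: DBeaver connects over the HTTP interface published on port 8123 in the compose file; a JDBC URL of the form jdbc:clickhouse://127.0.0.1:8123/default should work for this setup (the exact URL format may vary with the driver version DBeaver bundles).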

Other Docker Compose List

If you need other Docker Compose examples, see the shared index in the Docker Compose - 簡介 post.


Note: the above referenced
Docker
ClickHouse
dbeaver