From tom, 3 Years ago, written in Plain Text.
  1. [2020-09-27T11:01:16,400][INFO ][o.e.c.c.Coordinator      ] [serverra1_warm.sit.comp.state] master node [{serverra3.sit.comp.state}{BVnFEkNNTcKHn-WldV8mlw}{wh_nNgblT7OpMU4_BD59wA}{10.100.24.232}{10.100.24.232:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}] failed, restarting discovery
  2. org.elasticsearch.ElasticsearchException: node [{serverra3.sit.comp.state}{BVnFEkNNTcKHn-WldV8mlw}{wh_nNgblT7OpMU4_BD59wA}{10.100.24.232}{10.100.24.232:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}] failed [3] consecutive checks
  3.         at org.elasticsearch.cluster.coordination.LeaderChecker$CheckScheduler$1.handleException(LeaderChecker.java:277) ~[elasticsearch-7.7.0.jar:7.7.0]
  4.         at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1139) ~[elasticsearch-7.7.0.jar:7.7.0]
  5.         at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1139) ~[elasticsearch-7.7.0.jar:7.7.0]
  6.         at org.elasticsearch.transport.InboundHandler.lambda$handleException$2(InboundHandler.java:244) ~[elasticsearch-7.7.0.jar:7.7.0]
  7.         at org.elasticsearch.common.util.concurrent.EsExecutors$DirectExecutorService.execute(EsExecutors.java:225) ~[elasticsearch-7.7.0.jar:7.7.0]
  8.         at org.elasticsearch.transport.InboundHandler.handleException(InboundHandler.java:242) ~[elasticsearch-7.7.0.jar:7.7.0]
  9.         at org.elasticsearch.transport.InboundHandler.handlerResponseError(InboundHandler.java:234) ~[elasticsearch-7.7.0.jar:7.7.0]
  10.         at org.elasticsearch.transport.InboundHandler.messageReceived(InboundHandler.java:137) ~[elasticsearch-7.7.0.jar:7.7.0]
  11.  
  12.                 ...
  13.                 Caused by: org.elasticsearch.transport.RemoteTransportException: [serverra3.sit.comp.state][10.100.24.232:9300][internal:coordination/fault_detection/leader_check]
  14. Caused by: org.elasticsearch.cluster.coordination.CoordinationStateRejectedException: rejecting leader check since [{serverra1_warm.sit.comp.state}{KyQI0BMySRKE4yoeitADCQ}{hB25a3FYRIiwrWMNgHks9A}{10.100.24.230}{10.100.24.230:9301}{dlrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=warm, transform.node=true}] has been removed from the cluster
  15. ...
  16. [2020-09-27T11:01:26,411][WARN ][o.e.c.c.ClusterFormationFailureHelper] [serverra1_warm.sit.comp.state] master not discovered yet: have discovered [{serverra1_warm.sit.comp.state}{KyQI0BMySRKE4yoeitADCQ}{hB25a3FYRIiwrWMNgHks9A}{10.100.24.230}{10.100.24.230:9301}{dlrt}{rack_id=rack_one, ml.machine_memory=269645852672, xpack.installed=true, data=warm, transform.node=true, ml.max_open_jobs=20}, {serverra3.sit.comp.state}{BVnFEkNNTcKHn-WldV8mlw}{wh_nNgblT7OpMU4_BD59wA}{10.100.24.232}{10.100.24.232:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, {serverra1.sit.comp.state}{WJ1uKAbVTd-9BvtMZYk37g}{0FN7GxerSKuq6fneXklYbg}{10.100.24.230}{10.100.24.230:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, {serverla1.sit.comp.state}{DtuHk-M9TRi1axJC_BzVog}{KEQf3lgnSayJzBwLl1g0mQ}{10.100.24.233}{10.100.24.233:9300}{dilmrt}{rack_id=rack_2, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, {serverra2.sit.comp.state}{wHODtVp4QJmjUVaO86eUPw}{Y0cKXKRCRQ-o97EC8bbWmA}{10.100.24.231}{10.100.24.231:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}]; discovery will continue using [10.100.24.230:9300, 10.100.24.231:9300, 10.100.24.232:9300] from hosts providers and [{serverra3.sit.comp.state}{BVnFEkNNTcKHn-WldV8mlw}{wh_nNgblT7OpMU4_BD59wA}{10.100.24.232}{10.100.24.232:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, {serverra1.sit.comp.state}{WJ1uKAbVTd-9BvtMZYk37g}{0FN7GxerSKuq6fneXklYbg}{10.100.24.230}{10.100.24.230:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, 
{serverla1.sit.comp.state}{DtuHk-M9TRi1axJC_BzVog}{KEQf3lgnSayJzBwLl1g0mQ}{10.100.24.233}{10.100.24.233:9300}{dilmrt}{rack_id=rack_2, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, {serverra2.sit.comp.state}{wHODtVp4QJmjUVaO86eUPw}{Y0cKXKRCRQ-o97EC8bbWmA}{10.100.24.231}{10.100.24.231:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}] from last-known cluster state; node term 64, last-accepted version 411158 in term 64
  17. [2020-09-27T11:01:36,412][WARN ][o.e.c.c.ClusterFormationFailureHelper] [serverra1_warm.sit.comp.state] master not discovered yet: have discovered [{serverra1_warm.sit.comp.state}{KyQI0BMySRKE4yoeitADCQ}{hB25a3FYRIiwrWMNgHks9A}{10.100.24.230}{10.100.24.230:9301}{dlrt}{rack_id=rack_one, ml.machine_memory=269645852672, xpack.installed=true, data=warm, transform.node=true, ml.max_open_jobs=20}, {serverra3.sit.comp.state}{BVnFEkNNTcKHn-WldV8mlw}{wh_nNgblT7OpMU4_BD59wA}{10.100.24.232}{10.100.24.232:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, {serverra1.sit.comp.state}{WJ1uKAbVTd-9BvtMZYk37g}{0FN7GxerSKuq6fneXklYbg}{10.100.24.230}{10.100.24.230:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, {serverla1.sit.comp.state}{DtuHk-M9TRi1axJC_BzVog}{KEQf3lgnSayJzBwLl1g0mQ}{10.100.24.233}{10.100.24.233:9300}{dilmrt}{rack_id=rack_2, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, {serverra2.sit.comp.state}{wHODtVp4QJmjUVaO86eUPw}{Y0cKXKRCRQ-o97EC8bbWmA}{10.100.24.231}{10.100.24.231:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}]; discovery will continue using [10.100.24.230:9300, 10.100.24.231:9300, 10.100.24.232:9300] from hosts providers and [{serverra3.sit.comp.state}{BVnFEkNNTcKHn-WldV8mlw}{wh_nNgblT7OpMU4_BD59wA}{10.100.24.232}{10.100.24.232:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, {serverra1.sit.comp.state}{WJ1uKAbVTd-9BvtMZYk37g}{0FN7GxerSKuq6fneXklYbg}{10.100.24.230}{10.100.24.230:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, 
{serverla1.sit.comp.state}{DtuHk-M9TRi1axJC_BzVog}{KEQf3lgnSayJzBwLl1g0mQ}{10.100.24.233}{10.100.24.233:9300}{dilmrt}{rack_id=rack_2, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, {serverra2.sit.comp.state}{wHODtVp4QJmjUVaO86eUPw}{Y0cKXKRCRQ-o97EC8bbWmA}{10.100.24.231}{10.100.24.231:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}] from last-known cluster state; node term 64, last-accepted version 411158 in term 64
  18. [2020-09-27T11:01:46,430][WARN ][o.e.c.c.ClusterFormationFailureHelper] [serverra1_warm.sit.comp.state] master not discovered yet: have discovered [{serverra1_warm.sit.comp.state}{KyQI0BMySRKE4yoeitADCQ}{hB25a3FYRIiwrWMNgHks9A}{10.100.24.230}{10.100.24.230:9301}{dlrt}{rack_id=rack_one, ml.machine_memory=269645852672, xpack.installed=true, data=warm, transform.node=true, ml.max_open_jobs=20}, {serverra3.sit.comp.state}{BVnFEkNNTcKHn-WldV8mlw}{wh_nNgblT7OpMU4_BD59wA}{10.100.24.232}{10.100.24.232:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, {serverra1.sit.comp.state}{WJ1uKAbVTd-9BvtMZYk37g}{0FN7GxerSKuq6fneXklYbg}{10.100.24.230}{10.100.24.230:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, {serverla1.sit.comp.state}{DtuHk-M9TRi1axJC_BzVog}{KEQf3lgnSayJzBwLl1g0mQ}{10.100.24.233}{10.100.24.233:9300}{dilmrt}{rack_id=rack_2, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, {serverra2.sit.comp.state}{wHODtVp4QJmjUVaO86eUPw}{Y0cKXKRCRQ-o97EC8bbWmA}{10.100.24.231}{10.100.24.231:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}]; discovery will continue using [10.100.24.230:9300, 10.100.24.231:9300, 10.100.24.232:9300] from hosts providers and [{serverra3.sit.comp.state}{BVnFEkNNTcKHn-WldV8mlw}{wh_nNgblT7OpMU4_BD59wA}{10.100.24.232}{10.100.24.232:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, {serverra1.sit.comp.state}{WJ1uKAbVTd-9BvtMZYk37g}{0FN7GxerSKuq6fneXklYbg}{10.100.24.230}{10.100.24.230:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, 
{serverla1.sit.comp.state}{DtuHk-M9TRi1axJC_BzVog}{KEQf3lgnSayJzBwLl1g0mQ}{10.100.24.233}{10.100.24.233:9300}{dilmrt}{rack_id=rack_2, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, {serverra2.sit.comp.state}{wHODtVp4QJmjUVaO86eUPw}{Y0cKXKRCRQ-o97EC8bbWmA}{10.100.24.231}{10.100.24.231:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}] from last-known cluster state; node term 64, last-accepted version 411158 in term 64
  19. [2020-09-27T11:01:56,431][WARN ][o.e.c.c.ClusterFormationFailureHelper] [serverra1_warm.sit.comp.state] master not discovered yet: have discovered [{serverra1_warm.sit.comp.state}{KyQI0BMySRKE4yoeitADCQ}{hB25a3FYRIiwrWMNgHks9A}{10.100.24.230}{10.100.24.230:9301}{dlrt}{rack_id=rack_one, ml.machine_memory=269645852672, xpack.installed=true, data=warm, transform.node=true, ml.max_open_jobs=20}, {serverra3.sit.comp.state}{BVnFEkNNTcKHn-WldV8mlw}{wh_nNgblT7OpMU4_BD59wA}{10.100.24.232}{10.100.24.232:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, {serverra1.sit.comp.state}{WJ1uKAbVTd-9BvtMZYk37g}{0FN7GxerSKuq6fneXklYbg}{10.100.24.230}{10.100.24.230:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, {serverla1.sit.comp.state}{DtuHk-M9TRi1axJC_BzVog}{KEQf3lgnSayJzBwLl1g0mQ}{10.100.24.233}{10.100.24.233:9300}{dilmrt}{rack_id=rack_2, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, {serverra2.sit.comp.state}{wHODtVp4QJmjUVaO86eUPw}{Y0cKXKRCRQ-o97EC8bbWmA}{10.100.24.231}{10.100.24.231:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}]; discovery will continue using [10.100.24.230:9300, 10.100.24.231:9300, 10.100.24.232:9300] from hosts providers and [{serverra3.sit.comp.state}{BVnFEkNNTcKHn-WldV8mlw}{wh_nNgblT7OpMU4_BD59wA}{10.100.24.232}{10.100.24.232:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, {serverra1.sit.comp.state}{WJ1uKAbVTd-9BvtMZYk37g}{0FN7GxerSKuq6fneXklYbg}{10.100.24.230}{10.100.24.230:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, 
{serverla1.sit.comp.state}{DtuHk-M9TRi1axJC_BzVog}{KEQf3lgnSayJzBwLl1g0mQ}{10.100.24.233}{10.100.24.233:9300}{dilmrt}{rack_id=rack_2, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, {serverra2.sit.comp.state}{wHODtVp4QJmjUVaO86eUPw}{Y0cKXKRCRQ-o97EC8bbWmA}{10.100.24.231}{10.100.24.231:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}] from last-known cluster state; node term 64, last-accepted version 411158 in term 64
  20. [2020-09-27T11:02:06,432][WARN ][o.e.c.c.ClusterFormationFailureHelper] [serverra1_warm.sit.comp.state] master not discovered yet: have discovered [{serverra1_warm.sit.comp.state}{KyQI0BMySRKE4yoeitADCQ}{hB25a3FYRIiwrWMNgHks9A}{10.100.24.230}{10.100.24.230:9301}{dlrt}{rack_id=rack_one, ml.machine_memory=269645852672, xpack.installed=true, data=warm, transform.node=true, ml.max_open_jobs=20}, {serverra3.sit.comp.state}{BVnFEkNNTcKHn-WldV8mlw}{wh_nNgblT7OpMU4_BD59wA}{10.100.24.232}{10.100.24.232:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, {serverra1.sit.comp.state}{WJ1uKAbVTd-9BvtMZYk37g}{0FN7GxerSKuq6fneXklYbg}{10.100.24.230}{10.100.24.230:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, {serverla1.sit.comp.state}{DtuHk-M9TRi1axJC_BzVog}{KEQf3lgnSayJzBwLl1g0mQ}{10.100.24.233}{10.100.24.233:9300}{dilmrt}{rack_id=rack_2, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, {serverra2.sit.comp.state}{wHODtVp4QJmjUVaO86eUPw}{Y0cKXKRCRQ-o97EC8bbWmA}{10.100.24.231}{10.100.24.231:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}]; discovery will continue using [10.100.24.230:9300, 10.100.24.231:9300, 10.100.24.232:9300] from hosts providers and [{serverra3.sit.comp.state}{BVnFEkNNTcKHn-WldV8mlw}{wh_nNgblT7OpMU4_BD59wA}{10.100.24.232}{10.100.24.232:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, {serverra1.sit.comp.state}{WJ1uKAbVTd-9BvtMZYk37g}{0FN7GxerSKuq6fneXklYbg}{10.100.24.230}{10.100.24.230:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, 
{serverla1.sit.comp.state}{DtuHk-M9TRi1axJC_BzVog}{KEQf3lgnSayJzBwLl1g0mQ}{10.100.24.233}{10.100.24.233:9300}{dilmrt}{rack_id=rack_2, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, {serverra2.sit.comp.state}{wHODtVp4QJmjUVaO86eUPw}{Y0cKXKRCRQ-o97EC8bbWmA}{10.100.24.231}{10.100.24.231:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}] from last-known cluster state; node term 64, last-accepted version 411158 in term 64
  21. [2020-09-27T11:02:16,434][WARN ][o.e.c.c.ClusterFormationFailureHelper] [serverra1_warm.sit.comp.state] master not discovered yet: have discovered [{serverra1_warm.sit.comp.state}{KyQI0BMySRKE4yoeitADCQ}{hB25a3FYRIiwrWMNgHks9A}{10.100.24.230}{10.100.24.230:9301}{dlrt}{rack_id=rack_one, ml.machine_memory=269645852672, xpack.installed=true, data=warm, transform.node=true, ml.max_open_jobs=20}, {serverra3.sit.comp.state}{BVnFEkNNTcKHn-WldV8mlw}{wh_nNgblT7OpMU4_BD59wA}{10.100.24.232}{10.100.24.232:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, {serverra1.sit.comp.state}{WJ1uKAbVTd-9BvtMZYk37g}{0FN7GxerSKuq6fneXklYbg}{10.100.24.230}{10.100.24.230:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, {serverla1.sit.comp.state}{DtuHk-M9TRi1axJC_BzVog}{KEQf3lgnSayJzBwLl1g0mQ}{10.100.24.233}{10.100.24.233:9300}{dilmrt}{rack_id=rack_2, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, {serverra2.sit.comp.state}{wHODtVp4QJmjUVaO86eUPw}{Y0cKXKRCRQ-o97EC8bbWmA}{10.100.24.231}{10.100.24.231:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}]; discovery will continue using [10.100.24.230:9300, 10.100.24.231:9300, 10.100.24.232:9300] from hosts providers and [{serverra3.sit.comp.state}{BVnFEkNNTcKHn-WldV8mlw}{wh_nNgblT7OpMU4_BD59wA}{10.100.24.232}{10.100.24.232:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, {serverra1.sit.comp.state}{WJ1uKAbVTd-9BvtMZYk37g}{0FN7GxerSKuq6fneXklYbg}{10.100.24.230}{10.100.24.230:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, 
{serverla1.sit.comp.state}{DtuHk-M9TRi1axJC_BzVog}{KEQf3lgnSayJzBwLl1g0mQ}{10.100.24.233}{10.100.24.233:9300}{dilmrt}{rack_id=rack_2, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, {serverra2.sit.comp.state}{wHODtVp4QJmjUVaO86eUPw}{Y0cKXKRCRQ-o97EC8bbWmA}{10.100.24.231}{10.100.24.231:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}] from last-known cluster state; node term 64, last-accepted version 411158 in term 64
  22. [2020-09-27T11:02:16,452][INFO ][o.e.c.c.JoinHelper       ] [serverra1_warm.sit.comp.state] failed to join {serverra3.sit.comp.state}{BVnFEkNNTcKHn-WldV8mlw}{wh_nNgblT7OpMU4_BD59wA}{10.100.24.232}{10.100.24.232:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true} with JoinRequest{sourceNode={serverra1_warm.sit.comp.state}{KyQI0BMySRKE4yoeitADCQ}{hB25a3FYRIiwrWMNgHks9A}{10.100.24.230}{10.100.24.230:9301}{dlrt}{rack_id=rack_one, ml.machine_memory=269645852672, xpack.installed=true, data=warm, transform.node=true, ml.max_open_jobs=20}, minimumTerm=64, optionalJoin=Optional[Join{term=64, lastAcceptedTerm=62, lastAcceptedVersion=401644, sourceNode={serverra1_warm.sit.comp.state}{KyQI0BMySRKE4yoeitADCQ}{hB25a3FYRIiwrWMNgHks9A}{10.100.24.230}{10.100.24.230:9301}{dlrt}{rack_id=rack_one, ml.machine_memory=269645852672, xpack.installed=true, data=warm, transform.node=true, ml.max_open_jobs=20}, targetNode={serverra3.sit.comp.state}{BVnFEkNNTcKHn-WldV8mlw}{wh_nNgblT7OpMU4_BD59wA}{10.100.24.232}{10.100.24.232:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}}]}
  23. org.elasticsearch.transport.ReceiveTimeoutTransportException: [serverra3.sit.comp.state][10.100.24.232:9300][internal:cluster/coordination/join] request_id [2767852] timed out after [59915ms]
  24.         at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:1041) [elasticsearch-7.7.0.jar:7.7.0]
  25.         at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:633) [elasticsearch-7.7.0.jar:7.7.0]
  26.         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) [?:?]
  27.         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) [?:?]
  28.         at java.lang.Thread.run(Thread.java:832) [?:?]
  29. [2020-09-27T11:02:16,455][INFO ][o.e.c.c.JoinHelper       ] [serverra1_warm.sit.comp.state] failed to join {serverra3.sit.comp.state}{BVnFEkNNTcKHn-WldV8mlw}{wh_nNgblT7OpMU4_BD59wA}{10.100.24.232}{10.100.24.232:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true} with JoinRequest{sourceNode={serverra1_warm.sit.comp.state}{KyQI0BMySRKE4yoeitADCQ}{hB25a3FYRIiwrWMNgHks9A}{10.100.24.230}{10.100.24.230:9301}{dlrt}{rack_id=rack_one, ml.machine_memory=269645852672, xpack.installed=true, data=warm, transform.node=true, ml.max_open_jobs=20}, minimumTerm=64, optionalJoin=Optional[Join{term=64, lastAcceptedTerm=62, lastAcceptedVersion=401644, sourceNode={serverra1_warm.sit.comp.state}{KyQI0BMySRKE4yoeitADCQ}{hB25a3FYRIiwrWMNgHks9A}{10.100.24.230}{10.100.24.230:9301}{dlrt}{rack_id=rack_one, ml.machine_memory=269645852672, xpack.installed=true, data=warm, transform.node=true, ml.max_open_jobs=20}, targetNode={serverra3.sit.comp.state}{BVnFEkNNTcKHn-WldV8mlw}{wh_nNgblT7OpMU4_BD59wA}{10.100.24.232}{10.100.24.232:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}}]}
  30. org.elasticsearch.transport.ReceiveTimeoutTransportException: [serverra3.sit.comp.state][10.100.24.232:9300][internal:cluster/coordination/join] request_id [2767852] timed out after [59915ms]
  31.         at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:1041) [elasticsearch-7.7.0.jar:7.7.0]
  32.         at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:633) [elasticsearch-7.7.0.jar:7.7.0]
  33.         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) [?:?]
  34.         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) [?:?]
  35.         at java.lang.Thread.run(Thread.java:832) [?:?]
  36. [2020-09-27T11:02:26,435][INFO ][o.e.c.c.JoinHelper       ] [serverra1_warm.sit.comp.state] last failed join attempt was 9.9s ago, failed to join {serverra3.sit.comp.state}{BVnFEkNNTcKHn-WldV8mlw}{wh_nNgblT7OpMU4_BD59wA}{10.100.24.232}{10.100.24.232:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true} with JoinRequest{sourceNode={serverra1_warm.sit.comp.state}{KyQI0BMySRKE4yoeitADCQ}{hB25a3FYRIiwrWMNgHks9A}{10.100.24.230}{10.100.24.230:9301}{dlrt}{rack_id=rack_one, ml.machine_memory=269645852672, xpack.installed=true, data=warm, transform.node=true, ml.max_open_jobs=20}, minimumTerm=64, optionalJoin=Optional[Join{term=64, lastAcceptedTerm=62, lastAcceptedVersion=401644, sourceNode={serverra1_warm.sit.comp.state}{KyQI0BMySRKE4yoeitADCQ}{hB25a3FYRIiwrWMNgHks9A}{10.100.24.230}{10.100.24.230:9301}{dlrt}{rack_id=rack_one, ml.machine_memory=269645852672, xpack.installed=true, data=warm, transform.node=true, ml.max_open_jobs=20}, targetNode={serverra3.sit.comp.state}{BVnFEkNNTcKHn-WldV8mlw}{wh_nNgblT7OpMU4_BD59wA}{10.100.24.232}{10.100.24.232:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}}]}
  37. org.elasticsearch.transport.ReceiveTimeoutTransportException: [serverra3.sit.comp.state][10.100.24.232:9300][internal:cluster/coordination/join] request_id [2767852] timed out after [59915ms]
  38.         at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:1041) ~[elasticsearch-7.7.0.jar:7.7.0]
  39.         at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:633) ~[elasticsearch-7.7.0.jar:7.7.0]
  40.         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) [?:?]
  41.         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) [?:?]
  42.         at java.lang.Thread.run(Thread.java:832) [?:?]
  43. [2020-09-27T11:02:26,437][WARN ][o.e.c.c.ClusterFormationFailureHelper] [serverra1_warm.sit.comp.state] master not discovered yet: have discovered [{serverra1_warm.sit.comp.state}{KyQI0BMySRKE4yoeitADCQ}{hB25a3FYRIiwrWMNgHks9A}{10.100.24.230}{10.100.24.230:9301}{dlrt}{rack_id=rack_one, ml.machine_memory=269645852672, xpack.installed=true, data=warm, transform.node=true, ml.max_open_jobs=20}, {serverra3.sit.comp.state}{BVnFEkNNTcKHn-WldV8mlw}{wh_nNgblT7OpMU4_BD59wA}{10.100.24.232}{10.100.24.232:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, {serverra1.sit.comp.state}{WJ1uKAbVTd-9BvtMZYk37g}{0FN7GxerSKuq6fneXklYbg}{10.100.24.230}{10.100.24.230:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, {serverla1.sit.comp.state}{DtuHk-M9TRi1axJC_BzVog}{KEQf3lgnSayJzBwLl1g0mQ}{10.100.24.233}{10.100.24.233:9300}{dilmrt}{rack_id=rack_2, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, {serverra2.sit.comp.state}{wHODtVp4QJmjUVaO86eUPw}{Y0cKXKRCRQ-o97EC8bbWmA}{10.100.24.231}{10.100.24.231:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}]; discovery will continue using [10.100.24.230:9300, 10.100.24.231:9300, 10.100.24.232:9300] from hosts providers and [{serverra3.sit.comp.state}{BVnFEkNNTcKHn-WldV8mlw}{wh_nNgblT7OpMU4_BD59wA}{10.100.24.232}{10.100.24.232:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, {serverra1.sit.comp.state}{WJ1uKAbVTd-9BvtMZYk37g}{0FN7GxerSKuq6fneXklYbg}{10.100.24.230}{10.100.24.230:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, 
{serverla1.sit.comp.state}{DtuHk-M9TRi1axJC_BzVog}{KEQf3lgnSayJzBwLl1g0mQ}{10.100.24.233}{10.100.24.233:9300}{dilmrt}{rack_id=rack_2, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, {serverra2.sit.comp.state}{wHODtVp4QJmjUVaO86eUPw}{Y0cKXKRCRQ-o97EC8bbWmA}{10.100.24.231}{10.100.24.231:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}] from last-known cluster state; node term 64, last-accepted version 411158 in term 64
  44. [2020-09-27T11:02:36,438][WARN ][o.e.c.c.ClusterFormationFailureHelper] [serverra1_warm.sit.comp.state] master not discovered yet: have discovered [{serverra1_warm.sit.comp.state}{KyQI0BMySRKE4yoeitADCQ}{hB25a3FYRIiwrWMNgHks9A}{10.100.24.230}{10.100.24.230:9301}{dlrt}{rack_id=rack_one, ml.machine_memory=269645852672, xpack.installed=true, data=warm, transform.node=true, ml.max_open_jobs=20}, {serverra3.sit.comp.state}{BVnFEkNNTcKHn-WldV8mlw}{wh_nNgblT7OpMU4_BD59wA}{10.100.24.232}{10.100.24.232:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, {serverra1.sit.comp.state}{WJ1uKAbVTd-9BvtMZYk37g}{0FN7GxerSKuq6fneXklYbg}{10.100.24.230}{10.100.24.230:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, {serverla1.sit.comp.state}{DtuHk-M9TRi1axJC_BzVog}{KEQf3lgnSayJzBwLl1g0mQ}{10.100.24.233}{10.100.24.233:9300}{dilmrt}{rack_id=rack_2, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, {serverra2.sit.comp.state}{wHODtVp4QJmjUVaO86eUPw}{Y0cKXKRCRQ-o97EC8bbWmA}{10.100.24.231}{10.100.24.231:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}]; discovery will continue using [10.100.24.230:9300, 10.100.24.231:9300, 10.100.24.232:9300] from hosts providers and [{serverra3.sit.comp.state}{BVnFEkNNTcKHn-WldV8mlw}{wh_nNgblT7OpMU4_BD59wA}{10.100.24.232}{10.100.24.232:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, {serverra1.sit.comp.state}{WJ1uKAbVTd-9BvtMZYk37g}{0FN7GxerSKuq6fneXklYbg}{10.100.24.230}{10.100.24.230:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, 
{serverla1.sit.comp.state}{DtuHk-M9TRi1axJC_BzVog}{KEQf3lgnSayJzBwLl1g0mQ}{10.100.24.233}{10.100.24.233:9300}{dilmrt}{rack_id=rack_2, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, {serverra2.sit.comp.state}{wHODtVp4QJmjUVaO86eUPw}{Y0cKXKRCRQ-o97EC8bbWmA}{10.100.24.231}{10.100.24.231:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}] from last-known cluster state; node term 64, last-accepted version 411158 in term 64
  45. [2020-09-27T11:02:46,439][WARN ][o.e.c.c.ClusterFormationFailureHelper] [serverra1_warm.sit.comp.state] master not discovered yet: have discovered [{serverra1_warm.sit.comp.state}{KyQI0BMySRKE4yoeitADCQ}{hB25a3FYRIiwrWMNgHks9A}{10.100.24.230}{10.100.24.230:9301}{dlrt}{rack_id=rack_one, ml.machine_memory=269645852672, xpack.installed=true, data=warm, transform.node=true, ml.max_open_jobs=20}, {serverra3.sit.comp.state}{BVnFEkNNTcKHn-WldV8mlw}{wh_nNgblT7OpMU4_BD59wA}{10.100.24.232}{10.100.24.232:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, {serverra1.sit.comp.state}{WJ1uKAbVTd-9BvtMZYk37g}{0FN7GxerSKuq6fneXklYbg}{10.100.24.230}{10.100.24.230:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, {serverla1.sit.comp.state}{DtuHk-M9TRi1axJC_BzVog}{KEQf3lgnSayJzBwLl1g0mQ}{10.100.24.233}{10.100.24.233:9300}{dilmrt}{rack_id=rack_2, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, {serverra2.sit.comp.state}{wHODtVp4QJmjUVaO86eUPw}{Y0cKXKRCRQ-o97EC8bbWmA}{10.100.24.231}{10.100.24.231:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}]; discovery will continue using [10.100.24.230:9300, 10.100.24.231:9300, 10.100.24.232:9300] from hosts providers and [{serverra3.sit.comp.state}{BVnFEkNNTcKHn-WldV8mlw}{wh_nNgblT7OpMU4_BD59wA}{10.100.24.232}{10.100.24.232:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, {serverra1.sit.comp.state}{WJ1uKAbVTd-9BvtMZYk37g}{0FN7GxerSKuq6fneXklYbg}{10.100.24.230}{10.100.24.230:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, 
{serverla1.sit.comp.state}{DtuHk-M9TRi1axJC_BzVog}{KEQf3lgnSayJzBwLl1g0mQ}{10.100.24.233}{10.100.24.233:9300}{dilmrt}{rack_id=rack_2, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, {serverra2.sit.comp.state}{wHODtVp4QJmjUVaO86eUPw}{Y0cKXKRCRQ-o97EC8bbWmA}{10.100.24.231}{10.100.24.231:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}] from last-known cluster state; node term 64, last-accepted version 411158 in term 64
  46. [2020-09-27T11:02:57,052][WARN ][o.e.c.c.ClusterFormationFailureHelper] [serverra1_warm.sit.comp.state] master not discovered yet: have discovered [{serverra1_warm.sit.comp.state}{KyQI0BMySRKE4yoeitADCQ}{hB25a3FYRIiwrWMNgHks9A}{10.100.24.230}{10.100.24.230:9301}{dlrt}{rack_id=rack_one, ml.machine_memory=269645852672, xpack.installed=true, data=warm, transform.node=true, ml.max_open_jobs=20}, {serverra3.sit.comp.state}{BVnFEkNNTcKHn-WldV8mlw}{wh_nNgblT7OpMU4_BD59wA}{10.100.24.232}{10.100.24.232:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, {serverra1.sit.comp.state}{WJ1uKAbVTd-9BvtMZYk37g}{0FN7GxerSKuq6fneXklYbg}{10.100.24.230}{10.100.24.230:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, {serverla1.sit.comp.state}{DtuHk-M9TRi1axJC_BzVog}{KEQf3lgnSayJzBwLl1g0mQ}{10.100.24.233}{10.100.24.233:9300}{dilmrt}{rack_id=rack_2, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, {serverra2.sit.comp.state}{wHODtVp4QJmjUVaO86eUPw}{Y0cKXKRCRQ-o97EC8bbWmA}{10.100.24.231}{10.100.24.231:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}]; discovery will continue using [10.100.24.230:9300, 10.100.24.231:9300, 10.100.24.232:9300] from hosts providers and [{serverra3.sit.comp.state}{BVnFEkNNTcKHn-WldV8mlw}{wh_nNgblT7OpMU4_BD59wA}{10.100.24.232}{10.100.24.232:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, {serverra1.sit.comp.state}{WJ1uKAbVTd-9BvtMZYk37g}{0FN7GxerSKuq6fneXklYbg}{10.100.24.230}{10.100.24.230:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, 
{serverla1.sit.comp.state}{DtuHk-M9TRi1axJC_BzVog}{KEQf3lgnSayJzBwLl1g0mQ}{10.100.24.233}{10.100.24.233:9300}{dilmrt}{rack_id=rack_2, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, {serverra2.sit.comp.state}{wHODtVp4QJmjUVaO86eUPw}{Y0cKXKRCRQ-o97EC8bbWmA}{10.100.24.231}{10.100.24.231:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}] from last-known cluster state; node term 64, last-accepted version 411158 in term 64
  47. [2020-09-27T11:03:07,053][WARN ][o.e.c.c.ClusterFormationFailureHelper] [serverra1_warm.sit.comp.state] master not discovered yet: have discovered [{serverra1_warm.sit.comp.state}{KyQI0BMySRKE4yoeitADCQ}{hB25a3FYRIiwrWMNgHks9A}{10.100.24.230}{10.100.24.230:9301}{dlrt}{rack_id=rack_one, ml.machine_memory=269645852672, xpack.installed=true, data=warm, transform.node=true, ml.max_open_jobs=20}, {serverra3.sit.comp.state}{BVnFEkNNTcKHn-WldV8mlw}{wh_nNgblT7OpMU4_BD59wA}{10.100.24.232}{10.100.24.232:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, {serverra1.sit.comp.state}{WJ1uKAbVTd-9BvtMZYk37g}{0FN7GxerSKuq6fneXklYbg}{10.100.24.230}{10.100.24.230:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, {serverla1.sit.comp.state}{DtuHk-M9TRi1axJC_BzVog}{KEQf3lgnSayJzBwLl1g0mQ}{10.100.24.233}{10.100.24.233:9300}{dilmrt}{rack_id=rack_2, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, {serverra2.sit.comp.state}{wHODtVp4QJmjUVaO86eUPw}{Y0cKXKRCRQ-o97EC8bbWmA}{10.100.24.231}{10.100.24.231:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}]; discovery will continue using [10.100.24.230:9300, 10.100.24.231:9300, 10.100.24.232:9300] from hosts providers and [{serverra3.sit.comp.state}{BVnFEkNNTcKHn-WldV8mlw}{wh_nNgblT7OpMU4_BD59wA}{10.100.24.232}{10.100.24.232:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, {serverra1.sit.comp.state}{WJ1uKAbVTd-9BvtMZYk37g}{0FN7GxerSKuq6fneXklYbg}{10.100.24.230}{10.100.24.230:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, 
{serverla1.sit.comp.state}{DtuHk-M9TRi1axJC_BzVog}{KEQf3lgnSayJzBwLl1g0mQ}{10.100.24.233}{10.100.24.233:9300}{dilmrt}{rack_id=rack_2, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, {serverra2.sit.comp.state}{wHODtVp4QJmjUVaO86eUPw}{Y0cKXKRCRQ-o97EC8bbWmA}{10.100.24.231}{10.100.24.231:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}] from last-known cluster state; node term 64, last-accepted version 411158 in term 64
  48. [2020-09-27T11:03:15,945][WARN ][o.e.t.TransportService   ] [serverra1_warm.sit.comp.state] Received response for a request that has timed out, sent [119550ms] ago, timed out [59635ms] ago, action [internal:cluster/coordination/join], node [{serverra3.sit.comp.state}{BVnFEkNNTcKHn-WldV8mlw}{wh_nNgblT7OpMU4_BD59wA}{10.100.24.232}{10.100.24.232:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}], id [2767852]
  49. [2020-09-27T11:03:17,054][WARN ][o.e.c.c.ClusterFormationFailureHelper] [serverra1_warm.sit.comp.state] master not discovered yet: have discovered [{serverra1_warm.sit.comp.state}{KyQI0BMySRKE4yoeitADCQ}{hB25a3FYRIiwrWMNgHks9A}{10.100.24.230}{10.100.24.230:9301}{dlrt}{rack_id=rack_one, ml.machine_memory=269645852672, xpack.installed=true, data=warm, transform.node=true, ml.max_open_jobs=20}, {serverra3.sit.comp.state}{BVnFEkNNTcKHn-WldV8mlw}{wh_nNgblT7OpMU4_BD59wA}{10.100.24.232}{10.100.24.232:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, {serverra1.sit.comp.state}{WJ1uKAbVTd-9BvtMZYk37g}{0FN7GxerSKuq6fneXklYbg}{10.100.24.230}{10.100.24.230:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, {serverla1.sit.comp.state}{DtuHk-M9TRi1axJC_BzVog}{KEQf3lgnSayJzBwLl1g0mQ}{10.100.24.233}{10.100.24.233:9300}{dilmrt}{rack_id=rack_2, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, {serverra2.sit.comp.state}{wHODtVp4QJmjUVaO86eUPw}{Y0cKXKRCRQ-o97EC8bbWmA}{10.100.24.231}{10.100.24.231:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}]; discovery will continue using [10.100.24.230:9300, 10.100.24.231:9300, 10.100.24.232:9300] from hosts providers and [{serverra3.sit.comp.state}{BVnFEkNNTcKHn-WldV8mlw}{wh_nNgblT7OpMU4_BD59wA}{10.100.24.232}{10.100.24.232:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, {serverra1.sit.comp.state}{WJ1uKAbVTd-9BvtMZYk37g}{0FN7GxerSKuq6fneXklYbg}{10.100.24.230}{10.100.24.230:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, 
{serverla1.sit.comp.state}{DtuHk-M9TRi1axJC_BzVog}{KEQf3lgnSayJzBwLl1g0mQ}{10.100.24.233}{10.100.24.233:9300}{dilmrt}{rack_id=rack_2, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, {serverra2.sit.comp.state}{wHODtVp4QJmjUVaO86eUPw}{Y0cKXKRCRQ-o97EC8bbWmA}{10.100.24.231}{10.100.24.231:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}] from last-known cluster state; node term 64, last-accepted version 411158 in term 64
  50. [2020-09-27T11:03:27,116][WARN ][o.e.c.c.ClusterFormationFailureHelper] [serverra1_warm.sit.comp.state] master not discovered yet: have discovered [{serverra1_warm.sit.comp.state}{KyQI0BMySRKE4yoeitADCQ}{hB25a3FYRIiwrWMNgHks9A}{10.100.24.230}{10.100.24.230:9301}{dlrt}{rack_id=rack_one, ml.machine_memory=269645852672, xpack.installed=true, data=warm, transform.node=true, ml.max_open_jobs=20}, {serverra3.sit.comp.state}{BVnFEkNNTcKHn-WldV8mlw}{wh_nNgblT7OpMU4_BD59wA}{10.100.24.232}{10.100.24.232:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, {serverra1.sit.comp.state}{WJ1uKAbVTd-9BvtMZYk37g}{0FN7GxerSKuq6fneXklYbg}{10.100.24.230}{10.100.24.230:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, {serverla1.sit.comp.state}{DtuHk-M9TRi1axJC_BzVog}{KEQf3lgnSayJzBwLl1g0mQ}{10.100.24.233}{10.100.24.233:9300}{dilmrt}{rack_id=rack_2, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, {serverra2.sit.comp.state}{wHODtVp4QJmjUVaO86eUPw}{Y0cKXKRCRQ-o97EC8bbWmA}{10.100.24.231}{10.100.24.231:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}]; discovery will continue using [10.100.24.230:9300, 10.100.24.231:9300, 10.100.24.232:9300] from hosts providers and [{serverra3.sit.comp.state}{BVnFEkNNTcKHn-WldV8mlw}{wh_nNgblT7OpMU4_BD59wA}{10.100.24.232}{10.100.24.232:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, {serverra1.sit.comp.state}{WJ1uKAbVTd-9BvtMZYk37g}{0FN7GxerSKuq6fneXklYbg}{10.100.24.230}{10.100.24.230:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, 
{serverla1.sit.comp.state}{DtuHk-M9TRi1axJC_BzVog}{KEQf3lgnSayJzBwLl1g0mQ}{10.100.24.233}{10.100.24.233:9300}{dilmrt}{rack_id=rack_2, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, {serverra2.sit.comp.state}{wHODtVp4QJmjUVaO86eUPw}{Y0cKXKRCRQ-o97EC8bbWmA}{10.100.24.231}{10.100.24.231:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}] from last-known cluster state; node term 64, last-accepted version 411158 in term 64
  51. [2020-09-27T11:03:37,118][WARN ][o.e.c.c.ClusterFormationFailureHelper] [serverra1_warm.sit.comp.state] master not discovered yet: have discovered [{serverra1_warm.sit.comp.state}{KyQI0BMySRKE4yoeitADCQ}{hB25a3FYRIiwrWMNgHks9A}{10.100.24.230}{10.100.24.230:9301}{dlrt}{rack_id=rack_one, ml.machine_memory=269645852672, xpack.installed=true, data=warm, transform.node=true, ml.max_open_jobs=20}, {serverra3.sit.comp.state}{BVnFEkNNTcKHn-WldV8mlw}{wh_nNgblT7OpMU4_BD59wA}{10.100.24.232}{10.100.24.232:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, {serverra1.sit.comp.state}{WJ1uKAbVTd-9BvtMZYk37g}{0FN7GxerSKuq6fneXklYbg}{10.100.24.230}{10.100.24.230:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, {serverla1.sit.comp.state}{DtuHk-M9TRi1axJC_BzVog}{KEQf3lgnSayJzBwLl1g0mQ}{10.100.24.233}{10.100.24.233:9300}{dilmrt}{rack_id=rack_2, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, {serverra2.sit.comp.state}{wHODtVp4QJmjUVaO86eUPw}{Y0cKXKRCRQ-o97EC8bbWmA}{10.100.24.231}{10.100.24.231:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}]; discovery will continue using [10.100.24.230:9300, 10.100.24.231:9300, 10.100.24.232:9300] from hosts providers and [{serverra3.sit.comp.state}{BVnFEkNNTcKHn-WldV8mlw}{wh_nNgblT7OpMU4_BD59wA}{10.100.24.232}{10.100.24.232:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, {serverra1.sit.comp.state}{WJ1uKAbVTd-9BvtMZYk37g}{0FN7GxerSKuq6fneXklYbg}{10.100.24.230}{10.100.24.230:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, 
{serverla1.sit.comp.state}{DtuHk-M9TRi1axJC_BzVog}{KEQf3lgnSayJzBwLl1g0mQ}{10.100.24.233}{10.100.24.233:9300}{dilmrt}{rack_id=rack_2, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, {serverra2.sit.comp.state}{wHODtVp4QJmjUVaO86eUPw}{Y0cKXKRCRQ-o97EC8bbWmA}{10.100.24.231}{10.100.24.231:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}] from last-known cluster state; node term 64, last-accepted version 411158 in term 64
  52. [2020-09-27T11:03:47,119][WARN ][o.e.c.c.ClusterFormationFailureHelper] [serverra1_warm.sit.comp.state] master not discovered yet: have discovered [{serverra1_warm.sit.comp.state}{KyQI0BMySRKE4yoeitADCQ}{hB25a3FYRIiwrWMNgHks9A}{10.100.24.230}{10.100.24.230:9301}{dlrt}{rack_id=rack_one, ml.machine_memory=269645852672, xpack.installed=true, data=warm, transform.node=true, ml.max_open_jobs=20}, {serverra3.sit.comp.state}{BVnFEkNNTcKHn-WldV8mlw}{wh_nNgblT7OpMU4_BD59wA}{10.100.24.232}{10.100.24.232:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, {serverra1.sit.comp.state}{WJ1uKAbVTd-9BvtMZYk37g}{0FN7GxerSKuq6fneXklYbg}{10.100.24.230}{10.100.24.230:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, {serverla1.sit.comp.state}{DtuHk-M9TRi1axJC_BzVog}{KEQf3lgnSayJzBwLl1g0mQ}{10.100.24.233}{10.100.24.233:9300}{dilmrt}{rack_id=rack_2, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, {serverra2.sit.comp.state}{wHODtVp4QJmjUVaO86eUPw}{Y0cKXKRCRQ-o97EC8bbWmA}{10.100.24.231}{10.100.24.231:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}]; discovery will continue using [10.100.24.230:9300, 10.100.24.231:9300, 10.100.24.232:9300] from hosts providers and [{serverra3.sit.comp.state}{BVnFEkNNTcKHn-WldV8mlw}{wh_nNgblT7OpMU4_BD59wA}{10.100.24.232}{10.100.24.232:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, {serverra1.sit.comp.state}{WJ1uKAbVTd-9BvtMZYk37g}{0FN7GxerSKuq6fneXklYbg}{10.100.24.230}{10.100.24.230:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, 
{serverla1.sit.comp.state}{DtuHk-M9TRi1axJC_BzVog}{KEQf3lgnSayJzBwLl1g0mQ}{10.100.24.233}{10.100.24.233:9300}{dilmrt}{rack_id=rack_2, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, {serverra2.sit.comp.state}{wHODtVp4QJmjUVaO86eUPw}{Y0cKXKRCRQ-o97EC8bbWmA}{10.100.24.231}{10.100.24.231:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}] from last-known cluster state; node term 64, last-accepted version 411158 in term 64
  53. [2020-09-27T11:03:57,529][WARN ][o.e.c.c.ClusterFormationFailureHelper] [serverra1_warm.sit.comp.state] master not discovered yet: have discovered [{serverra1_warm.sit.comp.state}{KyQI0BMySRKE4yoeitADCQ}{hB25a3FYRIiwrWMNgHks9A}{10.100.24.230}{10.100.24.230:9301}{dlrt}{rack_id=rack_one, ml.machine_memory=269645852672, xpack.installed=true, data=warm, transform.node=true, ml.max_open_jobs=20}, {serverra3.sit.comp.state}{BVnFEkNNTcKHn-WldV8mlw}{wh_nNgblT7OpMU4_BD59wA}{10.100.24.232}{10.100.24.232:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, {serverra1.sit.comp.state}{WJ1uKAbVTd-9BvtMZYk37g}{0FN7GxerSKuq6fneXklYbg}{10.100.24.230}{10.100.24.230:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, {serverla1.sit.comp.state}{DtuHk-M9TRi1axJC_BzVog}{KEQf3lgnSayJzBwLl1g0mQ}{10.100.24.233}{10.100.24.233:9300}{dilmrt}{rack_id=rack_2, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, {serverra2.sit.comp.state}{wHODtVp4QJmjUVaO86eUPw}{Y0cKXKRCRQ-o97EC8bbWmA}{10.100.24.231}{10.100.24.231:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}]; discovery will continue using [10.100.24.230:9300, 10.100.24.231:9300, 10.100.24.232:9300] from hosts providers and [{serverra3.sit.comp.state}{BVnFEkNNTcKHn-WldV8mlw}{wh_nNgblT7OpMU4_BD59wA}{10.100.24.232}{10.100.24.232:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, {serverra1.sit.comp.state}{WJ1uKAbVTd-9BvtMZYk37g}{0FN7GxerSKuq6fneXklYbg}{10.100.24.230}{10.100.24.230:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, 
{serverla1.sit.comp.state}{DtuHk-M9TRi1axJC_BzVog}{KEQf3lgnSayJzBwLl1g0mQ}{10.100.24.233}{10.100.24.233:9300}{dilmrt}{rack_id=rack_2, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}, {serverra2.sit.comp.state}{wHODtVp4QJmjUVaO86eUPw}{Y0cKXKRCRQ-o97EC8bbWmA}{10.100.24.231}{10.100.24.231:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}] from last-known cluster state; node term 64, last-accepted version 411158 in term 64
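The repeated ClusterFormationFailureHelper warnings above show the node can see all of its peers but no master election succeeds, while it keeps probing the same three seed addresses. A reconstruction of the discovery configuration implied by the log (an inference from the "from hosts providers" list, not the actual elasticsearch.yml):

```yaml
# Inferred sketch only: these three addresses appear as the hosts-provider
# seeds in the warnings above.
discovery.seed_hosts:
  - 10.100.24.230:9300
  - 10.100.24.231:9300
  - 10.100.24.232:9300
```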
  54. [2020-09-27T11:04:55,511][INFO ][o.e.c.c.JoinHelper       ] [serverra1_warm.sit.comp.state] failed to join {serverra3.sit.comp.state}{BVnFEkNNTcKHn-WldV8mlw}{wh_nNgblT7OpMU4_BD59wA}{10.100.24.232}{10.100.24.232:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true} with JoinRequest{sourceNode={serverra1_warm.sit.comp.state}{KyQI0BMySRKE4yoeitADCQ}{hB25a3FYRIiwrWMNgHks9A}{10.100.24.230}{10.100.24.230:9301}{dlrt}{rack_id=rack_one, ml.machine_memory=269645852672, xpack.installed=true, data=warm, transform.node=true, ml.max_open_jobs=20}, minimumTerm=64, optionalJoin=Optional[Join{term=64, lastAcceptedTerm=62, lastAcceptedVersion=401644, sourceNode={serverra1_warm.sit.comp.state}{KyQI0BMySRKE4yoeitADCQ}{hB25a3FYRIiwrWMNgHks9A}{10.100.24.230}{10.100.24.230:9301}{dlrt}{rack_id=rack_one, ml.machine_memory=269645852672, xpack.installed=true, data=warm, transform.node=true, ml.max_open_jobs=20}, targetNode={serverra3.sit.comp.state}{BVnFEkNNTcKHn-WldV8mlw}{wh_nNgblT7OpMU4_BD59wA}{10.100.24.232}{10.100.24.232:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}}]}
  55. org.elasticsearch.transport.ReceiveTimeoutTransportException: [serverra3.sit.comp.state][10.100.24.232:9300][internal:cluster/coordination/join] request_id [2780865] timed out after [59884ms]
  56.         at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:1041) [elasticsearch-7.7.0.jar:7.7.0]
  57.         at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:633) [elasticsearch-7.7.0.jar:7.7.0]
  58.         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) [?:?]
  59.         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) [?:?]
  60.         at java.lang.Thread.run(Thread.java:832) [?:?]
  61. [2020-09-27T11:04:55,512][INFO ][o.e.c.c.JoinHelper       ] [serverra1_warm.sit.comp.state] failed to join {serverra3.sit.comp.state}{BVnFEkNNTcKHn-WldV8mlw}{wh_nNgblT7OpMU4_BD59wA}{10.100.24.232}{10.100.24.232:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true} with JoinRequest{sourceNode={serverra1_warm.sit.comp.state}{KyQI0BMySRKE4yoeitADCQ}{hB25a3FYRIiwrWMNgHks9A}{10.100.24.230}{10.100.24.230:9301}{dlrt}{rack_id=rack_one, ml.machine_memory=269645852672, xpack.installed=true, data=warm, transform.node=true, ml.max_open_jobs=20}, minimumTerm=64, optionalJoin=Optional[Join{term=64, lastAcceptedTerm=62, lastAcceptedVersion=401644, sourceNode={serverra1_warm.sit.comp.state}{KyQI0BMySRKE4yoeitADCQ}{hB25a3FYRIiwrWMNgHks9A}{10.100.24.230}{10.100.24.230:9301}{dlrt}{rack_id=rack_one, ml.machine_memory=269645852672, xpack.installed=true, data=warm, transform.node=true, ml.max_open_jobs=20}, targetNode={serverra3.sit.comp.state}{BVnFEkNNTcKHn-WldV8mlw}{wh_nNgblT7OpMU4_BD59wA}{10.100.24.232}{10.100.24.232:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}}]}
  62. org.elasticsearch.transport.ReceiveTimeoutTransportException: [serverra3.sit.comp.state][10.100.24.232:9300][internal:cluster/coordination/join] request_id [2780865] timed out after [59884ms]
  63.         at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:1041) [elasticsearch-7.7.0.jar:7.7.0]
  64.         at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:633) [elasticsearch-7.7.0.jar:7.7.0]
  65.         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) [?:?]
  66.         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) [?:?]
  67.         at java.lang.Thread.run(Thread.java:832) [?:?]
  68. [2020-09-27T11:05:57,221][WARN ][o.e.t.TransportService   ] [serverra1_warm.sit.comp.state] Received response for a request that has timed out, sent [121534ms] ago, timed out [61650ms] ago, action [internal:cluster/coordination/join], node [{serverra3.sit.comp.state}{BVnFEkNNTcKHn-WldV8mlw}{wh_nNgblT7OpMU4_BD59wA}{10.100.24.232}{10.100.24.232:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}], id [2780865]
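The join requests above time out after roughly 60 s (e.g. [59884ms]) and their responses arrive about a minute later still, which suggests the elected master is slow rather than unreachable. The cutoff lines up with the 7.x default for cluster.join.timeout; raising it is only a hypothetical mitigation sketch, not a fix for the slow master:

```yaml
# Hypothetical: give slow joins more headroom than the ~60s default.
cluster.join.timeout: 120s
```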
  69. [2020-09-27T11:06:04,162][WARN ][o.e.i.c.IndicesClusterStateService] [serverra1_warm.sit.comp.state] [comp_app_digical_srs-dao-2020.08.25][0] marking and sending shard failed due to [failed recovery]
  70. org.elasticsearch.indices.recovery.RecoveryFailedException: [comp_app_digical_srs-dao-2020.08.25][0]: Recovery failed from {serverra1.sit.comp.state}{WJ1uKAbVTd-9BvtMZYk37g}{0FN7GxerSKuq6fneXklYbg}{10.100.24.230}{10.100.24.230:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true} into {serverra1_warm.sit.comp.state}{KyQI0BMySRKE4yoeitADCQ}{hB25a3FYRIiwrWMNgHks9A}{10.100.24.230}{10.100.24.230:9301}{dlrt}{rack_id=rack_one, ml.machine_memory=269645852672, xpack.installed=true, data=warm, transform.node=true, ml.max_open_jobs=20}
  71.         at org.elasticsearch.indices.recovery.PeerRecoveryTargetService.lambda$doRecovery$2(PeerRecoveryTargetService.java:247) [elasticsearch-7.7.0.jar:7.7.0]
  72.         at org.elasticsearch.indices.recovery.PeerRecoveryTargetService$1.handleException(PeerRecoveryTargetService.java:292) [elasticsearch-7.7.0.jar:7.7.0]
  73.         at org.elasticsearch.transport.PlainTransportFuture.handleException(PlainTransportFuture.java:97) [elasticsearch-7.7.0.jar:7.7.0]
  74.         at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1139) [elasticsearch-7.7.0.jar:7.7.0]
  75.         at org.elasticsearch.transport.InboundHandler.lambda$handleException$2(InboundHandler.java:244) [elasticsearch-7.7.0.jar:7.7.0]
  76.         at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:633) [elasticsearch-7.7.0.jar:7.7.0]
  77.         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) [?:?]
  78.         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) [?:?]
  79.         at java.lang.Thread.run(Thread.java:832) [?:?]
  80. Caused by: org.elasticsearch.transport.RemoteTransportException: [serverra1.sit.comp.state][10.100.24.230:9300][internal:index/shard/recovery/start_recovery]
  81. Caused by: org.elasticsearch.index.engine.RecoveryEngineException: Phase[1] prepare target for translog failed
  82.         at org.elasticsearch.indices.recovery.RecoverySourceHandler.lambda$prepareTargetForTranslog$32(RecoverySourceHandler.java:626) ~[elasticsearch-7.7.0.jar:7.7.0]
  83.         at org.elasticsearch.action.ActionListener$1.onFailure(ActionListener.java:71) ~[elasticsearch-7.7.0.jar:7.7.0]
  84.         at org.elasticsearch.action.ActionListener$4.onFailure(ActionListener.java:173) ~[elasticsearch-7.7.0.jar:7.7.0]
  85.         at org.elasticsearch.action.ActionListenerResponseHandler.handleException(ActionListenerResponseHandler.java:59) ~[elasticsearch-7.7.0.jar:7.7.0]
  86.         at org.elasticsearch.transport.PlainTransportFuture.handleException(PlainTransportFuture.java:97) ~[elasticsearch-7.7.0.jar:7.7.0]
  87.         at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1139) ~[elasticsearch-7.7.0.jar:7.7.0]
  88.         at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:1040) ~[elasticsearch-7.7.0.jar:7.7.0]
  89.         at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:633) ~[elasticsearch-7.7.0.jar:7.7.0]
  90.         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) ~[?:?]
  91.         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) ~[?:?]
  92.         at java.lang.Thread.run(Thread.java:832) ~[?:?]
  93. Caused by: org.elasticsearch.transport.ReceiveTimeoutTransportException: [serverra1_warm.sit.comp.state][10.100.24.230:9301][internal:index/shard/recovery/prepare_translog] request_id [12165884] timed out after [899987ms]
  94.         at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:1041) ~[elasticsearch-7.7.0.jar:7.7.0]
  95.         at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:633) ~[elasticsearch-7.7.0.jar:7.7.0]
  96.         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) ~[?:?]
  97.         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) ~[?:?]
  98.         at java.lang.Thread.run(Thread.java:832) ~[?:?]
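The prepare_translog step above timed out after [899987ms], i.e. just under 15 minutes. That figure appears to correspond to indices.recovery.internal_action_timeout (15m by default in this era of Elasticsearch), though this is inferred from the number alone; it is named here only to identify the knob involved:

```yaml
# Inferred: the recovery step seems bounded by this setting's default.
indices.recovery.internal_action_timeout: 15m
```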
  99. [2020-09-27T11:06:04,241][WARN ][o.e.c.s.ClusterApplierService] [serverra1_warm.sit.comp.state] cluster state applier task [ApplyCommitRequest{term=64, version=411155, sourceNode={serverra3.sit.comp.state}{BVnFEkNNTcKHn-WldV8mlw}{wh_nNgblT7OpMU4_BD59wA}{10.100.24.232}{10.100.24.232:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}}] took [12.8m] which is above the warn threshold of [30s]: [running task [ApplyCommitRequest{term=64, version=411155, sourceNode={serverra3.sit.comp.state}{BVnFEkNNTcKHn-WldV8mlw}{wh_nNgblT7OpMU4_BD59wA}{10.100.24.232}{10.100.24.232:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}}]] took [0ms], [connecting to new nodes] took [0ms], [applying settings] took [0ms], [running applier [org.elasticsearch.indices.cluster.IndicesClusterStateService@7aecf67]] took [773390ms], [running applier [org.elasticsearch.script.ScriptService@560812ba]] took [0ms], [running applier [org.elasticsearch.xpack.ilm.IndexLifecycleService@68de0841]] took [0ms], [running applier [org.elasticsearch.repositories.RepositoriesService@7647f39]] took [0ms], [running applier [org.elasticsearch.snapshots.RestoreService@40c8bcec]] took [0ms], [running applier [org.elasticsearch.ingest.IngestService@71d8dbb4]] took [0ms], [running applier [org.elasticsearch.action.ingest.IngestActionForwarder@2acc80a7]] took [0ms], [running applier [org.elasticsearch.action.admin.cluster.repositories.cleanup.TransportCleanupRepositoryAction$$Lambda$3650/0x00007f8b47908cb0@3cfe2039]] took [0ms], [running applier [org.elasticsearch.tasks.TaskManager@1784d7e5]] took [0ms], [notifying listener [org.elasticsearch.cluster.InternalClusterInfoService@53c68d11]] took [0ms], [notifying listener [org.elasticsearch.xpack.security.support.SecurityIndexManager@7b4576ff]] took [0ms], [notifying listener 
[org.elasticsearch.xpack.security.support.SecurityIndexManager@504823fb]] took [0ms], [notifying listener [org.elasticsearch.xpack.security.authc.TokenService$$Lambda$2346/0x00007f8b81111458@5e1c6ace]] took [0ms], [notifying listener [org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$$Lambda$2429/0x00007f8b80c25c58@478d8bf]] took [0ms], [notifying listener [org.elasticsearch.xpack.watcher.support.WatcherIndexTemplateRegistry@153e7430]] took [0ms], [notifying listener [org.elasticsearch.xpack.watcher.WatcherLifeCycleService@28c92643]] took [0ms], [notifying listener [org.elasticsearch.xpack.watcher.WatcherIndexingListener@1cb23a28]] took [0ms], [notifying listener [org.elasticsearch.xpack.ml.MlIndexTemplateRegistry@35c2cef4]] took [0ms], [notifying listener [org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectProcessManager@2246336a]] took [0ms], [notifying listener [org.elasticsearch.xpack.ml.datafeed.DatafeedManager$TaskRunner@16fe6ed8]] took [0ms], [notifying listener [org.elasticsearch.xpack.ml.inference.loadingservice.ModelLoadingService@719ab82a]] took [0ms], [notifying listener [org.elasticsearch.xpack.ml.MlAssignmentNotifier@574f585d]] took [0ms], [notifying listener [org.elasticsearch.xpack.ml.MlInitializationService@5215514e]] took [0ms], [notifying listener [org.elasticsearch.xpack.ilm.history.ILMHistoryTemplateRegistry@37f87d7a]] took [0ms], [notifying listener [org.elasticsearch.xpack.ilm.IndexLifecycleService@68de0841]] took [0ms], [notifying listener [org.elasticsearch.xpack.core.slm.history.SnapshotLifecycleTemplateRegistry@1387c847]] took [0ms], [notifying listener [org.elasticsearch.xpack.slm.SnapshotLifecycleService@3ea7f7da]] took [0ms], [notifying listener [org.elasticsearch.xpack.ccr.action.ShardFollowTaskCleaner@46c05e61]] took [0ms], [notifying listener [org.elasticsearch.xpack.transform.TransformClusterStateListener@3227e86e]] took [0ms], [notifying listener 
[org.elasticsearch.cluster.metadata.TemplateUpgradeService@1fcfcb9d]] took [0ms], [notifying listener [org.elasticsearch.node.ResponseCollectorService@1e57e699]] took [0ms], [notifying listener [org.elasticsearch.snapshots.SnapshotShardsService@5a208064]] took [0ms], [notifying listener [org.elasticsearch.xpack.ml.action.TransportOpenJobAction$OpenJobPersistentTasksExecutor$$Lambda$3167/0x00007f8b47ee4058@57c977d6]] took [0ms], [notifying listener [org.elasticsearch.xpack.ml.action.TransportStartDataFrameAnalyticsAction$TaskExecutor$$Lambda$3171/0x00007f8b47f20c58@36bbdd91]] took [0ms], [notifying listener [org.elasticsearch.persistent.PersistentTasksClusterService@177996f1]] took [0ms], [notifying listener [org.elasticsearch.cluster.routing.DelayedAllocationService@490407da]] took [0ms], [notifying listener [org.elasticsearch.indices.store.IndicesStore@55efaeeb]] took [6ms], [notifying listener [org.elasticsearch.gateway.DanglingIndicesState@27a6fffd]] took [21ms], [notifying listener [org.elasticsearch.persistent.PersistentTasksNodeService@69d1b481]] took [0ms], [notifying listener [org.elasticsearch.license.LicenseService@7998f751]] took [0ms], [notifying listener [org.elasticsearch.xpack.search.AsyncSearchMaintenanceService@3cfa129b]] took [0ms], [notifying listener [org.elasticsearch.xpack.ccr.action.AutoFollowCoordinator@5828627f]] took [0ms], [notifying listener [org.elasticsearch.gateway.GatewayService@1098496c]] took [0ms], [notifying listener [org.elasticsearch.cluster.service.ClusterApplierService$LocalNodeMasterListeners@3560b2f0]] took [0ms]
  100. [2020-09-27T11:06:04,243][INFO ][o.e.c.s.ClusterSettings  ] [serverra1_warm.sit.comp.state] updating [cluster.routing.allocation.node_concurrent_incoming_recoveries] from [1] to [2]
  101. [2020-09-27T11:06:04,243][INFO ][o.e.c.s.ClusterSettings  ] [serverra1_warm.sit.comp.state] updating [cluster.routing.allocation.node_concurrent_outgoing_recoveries] from [1] to [2]
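The two `updating [cluster.routing.allocation...]` lines above are characteristic of a dynamic cluster-settings change arriving in the applied state. A minimal sketch of the request body that would produce them (assuming the standard `_cluster/settings` API; the endpoint itself is not shown in the log):

```python
import json

# Transient settings body matching the two updates logged above:
# both recovery concurrency limits raised from 1 to 2.
body = {
    "transient": {
        "cluster.routing.allocation.node_concurrent_incoming_recoveries": 2,
        "cluster.routing.allocation.node_concurrent_outgoing_recoveries": 2,
    }
}

# PUT this JSON to the _cluster/settings endpoint of any node to apply it.
print(json.dumps(body, indent=2))
```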
  102. [2020-09-27T11:06:58,081][WARN ][o.e.c.s.ClusterApplierService] [serverra1_warm.sit.comp.state] cluster state applier task [ApplyCommitRequest{term=64, version=411156, sourceNode={serverra3.sit.comp.state}{BVnFEkNNTcKHn-WldV8mlw}{wh_nNgblT7OpMU4_BD59wA}{10.100.24.232}{10.100.24.232:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}}] took [53.8s] which is above the warn threshold of [30s]: [running task [ApplyCommitRequest{term=64, version=411156, sourceNode={serverra3.sit.comp.state}{BVnFEkNNTcKHn-WldV8mlw}{wh_nNgblT7OpMU4_BD59wA}{10.100.24.232}{10.100.24.232:9300}{dilmrt}{rack_id=rack_one, ml.machine_memory=269645852672, ml.max_open_jobs=20, xpack.installed=true, data=hot, transform.node=true}}]] took [0ms], [connecting to new nodes] took [0ms], [applying settings] took [2ms], [running applier [org.elasticsearch.indices.cluster.IndicesClusterStateService@7aecf67]] took [53299ms], [running applier [org.elasticsearch.script.ScriptService@560812ba]] took [0ms], [running applier [org.elasticsearch.xpack.ilm.IndexLifecycleService@68de0841]] took [0ms], [running applier [org.elasticsearch.repositories.RepositoriesService@7647f39]] took [0ms], [running applier [org.elasticsearch.snapshots.RestoreService@40c8bcec]] took [0ms], [running applier [org.elasticsearch.ingest.IngestService@71d8dbb4]] took [0ms], [running applier [org.elasticsearch.action.ingest.IngestActionForwarder@2acc80a7]] took [0ms], [running applier [org.elasticsearch.action.admin.cluster.repositories.cleanup.TransportCleanupRepositoryAction$$Lambda$3650/0x00007f8b47908cb0@3cfe2039]] took [0ms], [running applier [org.elasticsearch.tasks.TaskManager@1784d7e5]] took [0ms], [notifying listener [org.elasticsearch.cluster.InternalClusterInfoService@53c68d11]] took [0ms], [notifying listener [org.elasticsearch.xpack.security.support.SecurityIndexManager@7b4576ff]] took [0ms], [notifying listener 
[org.elasticsearch.xpack.security.support.SecurityIndexManager@504823fb]] took [0ms], [notifying listener [org.elasticsearch.xpack.security.authc.TokenService$$Lambda$2346/0x00007f8b81111458@5e1c6ace]] took [0ms], [notifying listener [org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$$Lambda$2429/0x00007f8b80c25c58@478d8bf]] took [0ms], [notifying listener [org.elasticsearch.xpack.watcher.support.WatcherIndexTemplateRegistry@153e7430]] took [0ms], [notifying listener [org.elasticsearch.xpack.watcher.WatcherLifeCycleService@28c92643]] took [0ms], [notifying listener [org.elasticsearch.xpack.watcher.WatcherIndexingListener@1cb23a28]] took [0ms], [notifying listener [org.elasticsearch.xpack.ml.MlIndexTemplateRegistry@35c2cef4]] took [0ms], [notifying listener [org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectProcessManager@2246336a]] took [0ms], [notifying listener [org.elasticsearch.xpack.ml.datafeed.DatafeedManager$TaskRunner@16fe6ed8]] took [0ms], [notifying listener [org.elasticsearch.xpack.ml.inference.loadingservice.ModelLoadingService@719ab82a]] took [0ms], [notifying listener [org.elasticsearch.xpack.ml.MlAssignmentNotifier@574f585d]] took [0ms], [notifying listener [org.elasticsearch.xpack.ml.MlInitializationService@5215514e]] took [0ms], [notifying listener [org.elasticsearch.xpack.ilm.history.ILMHistoryTemplateRegistry@37f87d7a]] took [0ms], [notifying listener [org.elasticsearch.xpack.ilm.IndexLifecycleService@68de0841]] took [0ms], [notifying listener [org.elasticsearch.xpack.core.slm.history.SnapshotLifecycleTemplateRegistry@1387c847]] took [0ms], [notifying listener [org.elasticsearch.xpack.slm.SnapshotLifecycleService@3ea7f7da]] took [0ms], [notifying listener [org.elasticsearch.xpack.ccr.action.ShardFollowTaskCleaner@46c05e61]] took [0ms], [notifying listener [org.elasticsearch.xpack.transform.TransformClusterStateListener@3227e86e]] took [0ms], [notifying listener 
[org.elasticsearch.cluster.metadata.TemplateUpgradeService@1fcfcb9d]] took [0ms], [notifying listener [org.elasticsearch.node.ResponseCollectorService@1e57e699]] took [0ms], [notifying listener [org.elasticsearch.snapshots.SnapshotShardsService@5a208064]] took [0ms], [notifying listener [org.elasticsearch.xpack.ml.action.TransportOpenJobAction$OpenJobPersistentTasksExecutor$$Lambda$3167/0x00007f8b47ee4058@57c977d6]] took [0ms], [notifying listener [org.elasticsearch.xpack.ml.action.TransportStartDataFrameAnalyticsAction$TaskExecutor$$Lambda$3171/0x00007f8b47f20c58@36bbdd91]] took [0ms], [notifying listener [org.elasticsearch.persistent.PersistentTasksClusterService@177996f1]] took [0ms], [notifying listener [org.elasticsearch.cluster.routing.DelayedAllocationService@490407da]] took [0ms], [notifying listener [org.elasticsearch.indices.store.IndicesStore@55efaeeb]] took [504ms], [notifying listener [org.elasticsearch.gateway.DanglingIndicesState@27a6fffd]] took [31ms], [notifying listener [org.elasticsearch.persistent.PersistentTasksNodeService@69d1b481]] took [0ms], [notifying listener [org.elasticsearch.license.LicenseService@7998f751]] took [0ms], [notifying listener [org.elasticsearch.xpack.search.AsyncSearchMaintenanceService@3cfa129b]] took [0ms], [notifying listener [org.elasticsearch.xpack.ccr.action.AutoFollowCoordinator@5828627f]] took [0ms], [notifying listener [org.elasticsearch.gateway.GatewayService@1098496c]] took [0ms], [notifying listener [org.elasticsearch.cluster.service.ClusterApplierService$LocalNodeMasterListeners@3560b2f0]] took [0ms]
  103. [2020-09-27T11:10:39,055][WARN ][o.e.x.m.MonitoringService] [serverra1_warm.sit.comp.state] monitoring execution failed
  104. org.elasticsearch.xpack.monitoring.exporter.ExportException: failed to flush export bulks
  105.         at org.elasticsearch.xpack.monitoring.exporter.ExportBulk$Compound.lambda$doFlush$0(ExportBulk.java:109) [x-pack-monitoring-7.7.0.jar:7.7.0]
  106.         at org.elasticsearch.action.ActionListener$1.onFailure(ActionListener.java:71) [elasticsearch-7.7.0.jar:7.7.0]
  107.         at org.elasticsearch.xpack.monitoring.exporter.local.LocalBulk.lambda$doFlush$1(LocalBulk.java:112) [x-pack-monitoring-7.7.0.jar:7.7.0]
  108.         at org.elasticsearch.action.ActionListener$1.onFailure(ActionListener.java:71) [elasticsearch-7.7.0.jar:7.7.0]
  109.         at org.elasticsearch.action.support.ContextPreservingActionListener.onFailure(ContextPreservingActionListener.java:50) [elasticsearch-7.7.0.jar:7.7.0]
  110.         at org.elasticsearch.action.support.TransportAction$1.onFailure(TransportAction.java:79) [elasticsearch-7.7.0.jar:7.7.0]
  111.  
  112.                 ...
  113.                 Caused by: org.elasticsearch.xpack.monitoring.exporter.ExportException: failed to flush export bulk [default_local]
  114.         ... 48 more
  115. Caused by: org.elasticsearch.transport.RemoteTransportException: [serverla1.sit.comp.state][10.100.24.233:9300][indices:data/write/bulk]
  116. Caused by: org.elasticsearch.common.util.concurrent.EsRejectedExecutionException: rejected execution of org.elasticsearch.transport.InboundHandler$RequestHandler@618a5d18 on EsThreadPoolExecutor[name = serverla1.sit.comp.state/write, queue capacity = 200, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@4cbf834d[Running, pool size = 48, active threads = 48, queued tasks = 200, completed tasks = 11197926]]
  117.         at org.elasticsearch.common.util.concurrent.EsAbortPolicy.rejectedExecution(EsAbortPolicy.java:48) ~[elasticsearch-7.7.0.jar:7.7.0]
  118.         at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:827) ~[?:?]
  119. ...
  120. [2020-09-27T11:10:46,557][WARN ][o.e.x.m.e.l.LocalExporter] [serverra1_warm.sit.comp.state] unexpected error while indexing monitoring document
  121. org.elasticsearch.xpack.monitoring.exporter.ExportException: RemoteTransportException[[serverla1.sit.comp.state][10.100.24.233:9300][indices:data/write/bulk[s]]]; nested: RemoteTransportException[[serverla1.sit.comp.state][10.100.24.233:9300][indices:data/write/bulk[s][p]]]; nested: EsRejectedExecutionException[rejected execution of processing of [41992937][indices:data/write/bulk[s][p]]: request: BulkShardRequest [[.monitoring-es-7-2020.09.27][0]] containing [index {[.monitoring-es-7-2020.09.27][_doc][-pnUznQBobWFoWi3MMEJ], source[{"cluster_uuid":"bfHRZvWjQjONSvMesj2IMA","timestamp":"2020-09-27T09:10:46.511Z","interval_ms":10000,"type":"node_stats","source_node":{"uuid":"KyQI0BMySRKE4yoeitADCQ","host":"10.100.24.230","transport_address":"10.100.24.230:9301","ip":"10.100.24.230","name":"serverra1_warm.sit.comp.state","timestamp":"2020-09-27T09:10:46.511Z"},"node_stats":{"node_id":"KyQI0BMySRKE4yoeitADCQ","node_master":false,"mlockall":true,"indices":{"docs":{"count":220626208},"store":{"size_in_bytes":106308877258},"indexing":{"index_total":41,"index_time_in_millis":6397,"throttle_time_in_millis":0},"search":{"query_total":0,"query_time_in_millis":0},"query_cache":{"memory_size_in_bytes":0,"hit_count":0,"miss_count":0,"evictions":0},"fielddata":{"memory_size_in_bytes":0,"evictions":0},"segments":{"count":453,"memory_in_bytes":4779764,"terms_memory_in_bytes":2978120,"stored_fields_memory_in_bytes":1450528,"term_vectors_memory_in_bytes":0,"norms_memory_in_bytes":266624,"points_memory_in_bytes":0,"doc_values_memory_in_bytes":84492,"index_writer_memory_in_bytes":0,"version_map_memory_in_bytes":0,"fixed_bit_set_memory_in_bytes":0},"request_cache":{"memory_size_in_bytes":0,"evictions":0,"hit_count":0,"miss_count":0}},"os":{"cpu":{"load_average":{"1m":27.04,"5m":40.9,"15m":21.98}},"cgroup":{"cpuacct":{"control_group":"/","usage_nanos":72405418881816537},"cpu":{"control_group":"/","cfs_period_micros":100000,"cfs_quota_micros":-1,"stat":{"number_of_elapsed_periods"
:0,"number_of_times_throttled":0,"time_throttled_nanos":0}},"memory":{"control_group":"/system.slice/elasticsearch_warm.service","limit_in_bytes":"9223372036854771712","usage_in_bytes":"128804397056"}}},"process":{"open_file_descriptors":1288,"max_file_descriptors":65536,"cpu":{"percent":0}},"jvm":{"mem":{"heap_used_in_bytes":28935779616,"heap_used_percent":56,"heap_max_in_bytes":51539607552},"gc":{"collectors":{"young":{"collection_count":152,"collection_time_in_millis":7004},"old":{"collection_count":0,"collection_time_in_millis":0}}}},"thread_pool":{"generic":{"threads":46,"queue":0,"rejected":0},"get":{"threads":0,"queue":0,"rejected":0},"management":{"threads":5,"queue":0,"rejected":0},"search":{"threads":0,"queue":0,"rejected":0},"watcher":{"threads":0,"queue":0,"rejected":0},"write":{"threads":48,"queue":0,"rejected":0}},"fs":{"total":{"total_in_bytes":109706424975360,"free_in_bytes":78464678326272,"available_in_bytes":78464678326272},"io_stats":{"total":{"operations":109123130,"read_operations":92308377,"write_operations":16814753,"read_kilobytes":23207220976,"write_kilobytes":692495824}}}}}]}], target allocation id: 0UbnkeLNQ26ObKc-uZCIDQ, primary term: 1 on EsThreadPoolExecutor[name = serverla1.sit.comp.state/write, queue capacity = 200, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@4cbf834d[Running, pool size = 48, active threads = 48, queued tasks = 200, completed tasks = 11197926]]];
  122.         at org.elasticsearch.xpack.monitoring.exporter.local.LocalBulk.lambda$throwExportException$2(LocalBulk.java:125) ~[x-pack-monitoring-7.7.0.jar:7.7.0]
  123.         at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195) ~[?:?]
  124.         at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:177) ~[?:?]
  125.  
  126.                 ...
  127.                 Caused by: org.elasticsearch.transport.RemoteTransportException: [serverla1.sit.comp.state][10.100.24.233:9300][indices:data/write/bulk[s]]
  128. Caused by: org.elasticsearch.transport.RemoteTransportException: [serverla1.sit.comp.state][10.100.24.233:9300][indices:data/write/bulk[s][p]]
  129. Caused by: org.elasticsearch.common.util.concurrent.EsRejectedExecutionException: rejected execution of processing of [41992937][indices:data/write/bulk[s][p]]: request: BulkShardRequest [[.monitoring-es-7-2020.09.27][0]] containing [index {[.monitoring-es-7-2020.09.27][_doc][-pnUznQBobWFoWi3MMEJ], source[{"cluster_uuid":"bfHRZvWjQjONSvMesj2IMA","timestamp":"2020-09-27T09:10:46.511Z","interval_ms":10000,"type":"node_stats","source_node":{"uuid":"KyQI0BMySRKE4yoeitADCQ","host":"10.100.24.230","transport_address":"10.100.24.230:9301","ip":"10.100.24.230","name":"serverra1_warm.sit.comp.state","timestamp":"2020-09-27T09:10:46.511Z"},"node_stats":{"node_id":"KyQI0BMySRKE4yoeitADCQ","node_master":false,"mlockall":true,"indices":{"docs":{"count":220626208},"store":{"size_in_bytes":106308877258},"indexing":{"index_total":41,"index_time_in_millis":6397,"throttle_time_in_millis":0},"search":{"query_total":0,"query_time_in_millis":0},"query_cache":{"memory_size_in_bytes":0,"hit_count":0,"miss_count":0,"evictions":0},"fielddata":{"memory_size_in_bytes":0,"evictions":0},"segments":{"count":453,"memory_in_bytes":4779764,"terms_memory_in_bytes":2978120,"stored_fields_memory_in_bytes":1450528,"term_vectors_memory_in_bytes":0,"norms_memory_in_bytes":266624,"points_memory_in_bytes":0,"doc_values_memory_in_bytes":84492,"index_writer_memory_in_bytes":0,"version_map_memory_in_bytes":0,"fixed_bit_set_memory_in_bytes":0},"request_cache":{"memory_size_in_bytes":0,"evictions":0,"hit_count":0,"miss_count":0}},"os":{"cpu":{"load_average":{"1m":27.04,"5m":40.9,"15m":21.98}},"cgroup":{"cpuacct":{"control_group":"/","usage_nanos":72405418881816537},"cpu":{"control_group":"/","cfs_period_micros":100000,"cfs_quota_micros":-1,"stat":{"number_of_elapsed_periods":0,"number_of_times_throttled":0,"time_throttled_nanos":0}},"memory":{"control_group":"/system.slice/elasticsearch_warm.service","limit_in_bytes":"9223372036854771712","usage_in_bytes":"128804397056"}}},"process":{"open_file_descriptors":1288,"max_file_descriptors":65536,"cpu":{"percent":0}},"jvm":{"mem":{"heap_used_in_bytes":28935779616,"heap_used_percent":56,"heap_max_in_bytes":51539607552},"gc":{"collectors":{"young":{"collection_count":152,"collection_time_in_millis":7004},"old":{"collection_count":0,"collection_time_in_millis":0}}}},"thread_pool":{"generic":{"threads":46,"queue":0,"rejected":0},"get":{"threads":0,"queue":0,"rejected":0},"management":{"threads":5,"queue":0,"rejected":0},"search":{"threads":0,"queue":0,"rejected":0},"watcher":{"threads":0,"queue":0,"rejected":0},"write":{"threads":48,"queue":0,"rejected":0}},"fs":{"total":{"total_in_bytes":109706424975360,"free_in_bytes":78464678326272,"available_in_bytes":78464678326272},"io_stats":{"total":{"operations":109123130,"read_operations":92308377,"write_operations":16814753,"read_kilobytes":23207220976,"write_kilobytes":692495824}}}}}]}], target allocation id: 0UbnkeLNQ26ObKc-uZCIDQ, primary term: 1 on EsThreadPoolExecutor[name = serverla1.sit.comp.state/write, queue capacity = 200, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@4cbf834d[Running, pool size = 48, active threads = 48, queued tasks = 200, completed tasks = 11197926]]
  130.         at org.elasticsearch.common.util.concurrent.EsAbortPolicy.rejectedExecution(EsAbortPolicy.java:48) ~[elasticsearch-7.7.0.jar:7.7.0]
  131.         at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:827) ~[?:?]
  132.         at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1357) ~[?:?]
  133.         at org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor.execute(EsThreadPoolExecutor.java:84) ~[elasticsearch-
  134.                 ...
  135.                         at java.lang.Thread.run(Thread.java:832) ~[?:?]
  136. [2020-09-27T11:10:46,573][WARN ][o.e.x.m.MonitoringService] [serverra1_warm.sit.comp.state] monitoring execution failed
  137. org.elasticsearch.xpack.monitoring.exporter.ExportException: failed to flush export bulks
  138.  
  139. ...