k8s cluster hangs when running 22 airflow worker pods in parallel

8/15/2019

I am running Airflow with the Kubernetes executor on a docker-desktop Kubernetes cluster (on Mac). I have multiple sensor operators in the DAG file, each of which is part of a downstream dependency chain. In total, 22 sensor operators run in parallel; as a result, after 5-7 minutes of execution my Kubernetes cluster connection drops. After restarting the cluster, I can again access my k8s dashboard and check the logs of all the red failed tasks, and they all seem to complain about a MySQL connection failure:

[2019-08-15 10:09:03,829] {__init__.py:1374} INFO - Executing <Task(IngestionStatusSensor): ingestion_ready_relational_character_creation> on 2019-03-15T00:00:00+00:00
[2019-08-15 10:09:03,829] {base_task_runner.py:119} INFO - Running: ['airflow', 'run', 'datascience_ecc_v1', 'ingestion_ready_relational_character_creation', '2019-03-15T00:00:00+00:00', '--job_id', '22', '--raw', '-sd', 'DAGS_FOLDER/DAG_datascience_ecc_v1.py', '--cfg_path', '/tmp/tmpb3993h8h']
[2019-08-15 10:10:00,468] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation [2019-08-15 10:10:00,447] {settings.py:182} INFO - settings.configure_orm(): Using pool settings. pool_size=10, pool_recycle=1800, pid=11
[2019-08-15 10:12:39,448] {logging_mixin.py:95} INFO - [2019-08-15 10:12:39,381] {jobs.py:195} ERROR - Scheduler heartbeat got an exception: (_mysql_exceptions.OperationalError) (2006, "Unknown MySQL server host 'mysql' (111)") (Background on this error at: http://sqlalche.me/e/e3q8)
[2019-08-15 10:12:42,967] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation [2019-08-15 10:12:42,772] {__init__.py:51} INFO - Using executor LocalExecutor
[2019-08-15 10:12:44,651] {logging_mixin.py:95} INFO - [2019-08-15 10:12:44,651] {jobs.py:195} ERROR - Scheduler heartbeat got an exception: (_mysql_exceptions.OperationalError) (2006, "Unknown MySQL server host 'mysql' (111)") (Background on this error at: http://sqlalche.me/e/e3q8)
[2019-08-15 10:12:45,331] {logging_mixin.py:95} INFO - [2019-08-15 10:12:45,331] {jobs.py:195} ERROR - Scheduler heartbeat got an exception: (_mysql_exceptions.OperationalError) (2006, "Unknown MySQL server host 'mysql' (111)") (Background on this error at: http://sqlalche.me/e/e3q8)
[2019-08-15 10:12:45,364] {logging_mixin.py:95} INFO - [2019-08-15 10:12:45,364] {jobs.py:195} ERROR - Scheduler heartbeat got an exception: (_mysql_exceptions.OperationalError) (2006, "Unknown MySQL server host 'mysql' (111)") (Background on this error at: http://sqlalche.me/e/e3q8)
[2019-08-15 10:12:50,394] {logging_mixin.py:95} INFO - [2019-08-15 10:12:50,394] {jobs.py:195} ERROR - Scheduler heartbeat got an exception: (_mysql_exceptions.OperationalError) (2006, "Unknown MySQL server host 'mysql' (111)") (Background on this error at: http://sqlalche.me/e/e3q8)
[2019-08-15 10:12:55,415] {logging_mixin.py:95} INFO - [2019-08-15 10:12:55,415] {jobs.py:195} ERROR - Scheduler heartbeat got an exception: (_mysql_exceptions.OperationalError) (2006, "Unknown MySQL server host 'mysql' (111)") (Background on this error at: http://sqlalche.me/e/e3q8)
[2019-08-15 10:12:55,529] {logging_mixin.py:95} INFO - [2019-08-15 10:12:55,528] {jobs.py:195} ERROR - Scheduler heartbeat got an exception: (_mysql_exceptions.OperationalError) (2006, "Unknown MySQL server host 'mysql' (111)") (Background on this error at: http://sqlalche.me/e/e3q8)
[2019-08-15 10:12:58,758] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation [2019-08-15 10:12:58,724] {cli_action_loggers.py:70} ERROR - Failed on pre-execution callback using <function default_action_log at 0x7f7452d13730>
[2019-08-15 10:12:58,758] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation Traceback (most recent call last):
[2019-08-15 10:12:58,759] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation   File "/usr/local/airflow/venv/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 2228, in _wrap_pool_connect
[2019-08-15 10:12:58,759] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation     return fn()
[2019-08-15 10:12:58,759] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation   File "/usr/local/airflow/venv/lib/python3.6/site-packages/sqlalchemy/pool.py", line 434, in connect
[2019-08-15 10:12:58,759] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation     return _ConnectionFairy._checkout(self)
[2019-08-15 10:12:58,775] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation   File "/usr/local/airflow/venv/lib/python3.6/site-packages/sqlalchemy/pool.py", line 831, in _checkout
[2019-08-15 10:12:58,775] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation     fairy = _ConnectionRecord.checkout(pool)
[2019-08-15 10:12:58,775] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation   File "/usr/local/airflow/venv/lib/python3.6/site-packages/sqlalchemy/pool.py", line 563, in checkout
[2019-08-15 10:12:58,775] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation     rec = pool._do_get()
[2019-08-15 10:12:58,775] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation   File "/usr/local/airflow/venv/lib/python3.6/site-packages/sqlalchemy/pool.py", line 1259, in _do_get
[2019-08-15 10:12:58,776] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation     self._dec_overflow()
[2019-08-15 10:12:58,776] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation   File "/usr/local/airflow/venv/lib/python3.6/site-packages/sqlalchemy/util/langhelpers.py", line 67, in __exit__
[2019-08-15 10:12:58,776] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation     compat.reraise(exc_type, exc_value, exc_tb)
[2019-08-15 10:12:58,776] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation   File "/usr/local/airflow/venv/lib/python3.6/site-packages/sqlalchemy/util/compat.py", line 277, in reraise
[2019-08-15 10:12:58,776] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation     raise value
[2019-08-15 10:12:58,776] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation   File "/usr/local/airflow/venv/lib/python3.6/site-packages/sqlalchemy/pool.py", line 1256, in _do_get
[2019-08-15 10:12:58,776] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation     return self._create_connection()
[2019-08-15 10:12:58,776] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation   File "/usr/local/airflow/venv/lib/python3.6/site-packages/sqlalchemy/pool.py", line 379, in _create_connection
[2019-08-15 10:12:58,776] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation     return _ConnectionRecord(self)
[2019-08-15 10:12:58,776] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation   File "/usr/local/airflow/venv/lib/python3.6/site-packages/sqlalchemy/pool.py", line 508, in __init__
[2019-08-15 10:12:58,777] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation     self.__connect(first_connect_check=True)
[2019-08-15 10:12:58,777] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation   File "/usr/local/airflow/venv/lib/python3.6/site-packages/sqlalchemy/pool.py", line 710, in __connect
[2019-08-15 10:12:58,777] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation     connection = pool._invoke_creator(self)
[2019-08-15 10:12:58,777] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation   File "/usr/local/airflow/venv/lib/python3.6/site-packages/sqlalchemy/engine/strategies.py", line 114, in connect
[2019-08-15 10:12:58,777] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation     return dialect.connect(*cargs, **cparams)
[2019-08-15 10:12:58,777] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation   File "/usr/local/airflow/venv/lib/python3.6/site-packages/sqlalchemy/engine/default.py", line 437, in connect
[2019-08-15 10:12:58,777] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation     return self.dbapi.connect(*cargs, **cparams)
[2019-08-15 10:12:58,777] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation   File "/usr/local/airflow/venv/lib/python3.6/site-packages/MySQLdb/__init__.py", line 85, in Connect
[2019-08-15 10:12:58,777] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation     return Connection(*args, **kwargs)
[2019-08-15 10:12:58,777] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation   File "/usr/local/airflow/venv/lib/python3.6/site-packages/MySQLdb/connections.py", line 208, in __init__
[2019-08-15 10:12:58,778] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation     super(Connection, self).__init__(*args, **kwargs2)
[2019-08-15 10:12:58,778] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation _mysql_exceptions.OperationalError: (2006, "Unknown MySQL server host 'mysql' (111)")
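While this is happening, the pod burst itself can be watched from the cluster side (a quick sketch; it assumes the worker pods land in the default namespace, and kubectl top needs metrics-server installed):

# Watch the Airflow worker pods spin up as the DAG kicks off
kubectl get pods --watch

# If metrics-server is available, check node pressure while the sensors run
kubectl top nodes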

However, if I disable the DAG from the Airflow UI dashboard and run each failed task independently, they run successfully. I thought maybe there is a connection limit on MySQL, so I added the following to the Airflow core configs:

sql_alchemy_pool_enabled = True
sql_alchemy_pool_size = 10
sql_alchemy_max_overflow = 15
sql_alchemy_pool_recycle = 1800
sql_alchemy_reconnect_timeout = 300
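For reference, since everything runs in containers, the same settings can also be expressed as environment variables using Airflow's AIRFLOW__<SECTION>__<KEY> convention (a sketch; in 1.10.x these keys live under [core]):

# Same pool settings as environment variables; values mirror the
# airflow.cfg entries above
export AIRFLOW__CORE__SQL_ALCHEMY_POOL_ENABLED=True
export AIRFLOW__CORE__SQL_ALCHEMY_POOL_SIZE=10
export AIRFLOW__CORE__SQL_ALCHEMY_MAX_OVERFLOW=15
export AIRFLOW__CORE__SQL_ALCHEMY_POOL_RECYCLE=1800
export AIRFLOW__CORE__SQL_ALCHEMY_RECONNECT_TIMEOUT=300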

I also tried increasing parallelism and dag_concurrency to 32 and 40 respectively in airflow.cfg, but neither config had any effect. I have no idea what's causing these failures. Either the cluster goes down first and then the worker pods are unable to connect to the MySQL server, or it's the other way around. Is this an issue with the docker-desktop Kubernetes cluster? Should I be looking at the kube-dns logs?
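For the kube-dns angle, this is roughly how I would pull its logs on docker-desktop (a sketch; the k8s-app=kube-dns label is the stock kube-system labelling and may differ on other clusters):

# List system pods and pull kube-dns/CoreDNS logs
kubectl get pods -n kube-system
kubectl logs -n kube-system -l k8s-app=kube-dns --tail=100

# Recent cluster events often show resource pressure or restarts
kubectl get events -n kube-system --sort-by=.metadata.creationTimestamp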

Update: after I ran 3 DAG tasks together, the cluster hung again, and this time the airflow-webserver gave up too:

Traceback (most recent call last):
  File "/usr/local/airflow/venv/lib/python3.6/site-packages/urllib3/response.py", line 397, in _error_catcher
    yield
  File "/usr/local/airflow/venv/lib/python3.6/site-packages/urllib3/response.py", line 704, in read_chunked
    self._update_chunk_length()
  File "/usr/local/airflow/venv/lib/python3.6/site-packages/urllib3/response.py", line 643, in _update_chunk_length
    raise httplib.IncompleteRead(line)
http.client.IncompleteRead: IncompleteRead(0 bytes read)

During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/usr/local/airflow/venv/lib/python3.6/site-packages/airflow/contrib/executors/kubernetes_executor.py", line 293, in run
    self.worker_uuid)
  File "/usr/local/airflow/venv/lib/python3.6/site-packages/airflow/contrib/executors/kubernetes_executor.py", line 314, in _run
    **kwargs):
  File "/usr/local/airflow/venv/lib/python3.6/site-packages/kubernetes/watch/watch.py", line 144, in stream
    for line in iter_resp_lines(resp):
  File "/usr/local/airflow/venv/lib/python3.6/site-packages/kubernetes/watch/watch.py", line 48, in iter_resp_lines
    for seg in resp.read_chunked(decode_content=False):
  File "/usr/local/airflow/venv/lib/python3.6/site-packages/urllib3/response.py", line 732, in read_chunked
    self._original_response.close()
  File "/usr/local/lib/python3.6/contextlib.py", line 99, in __exit__
    self.gen.throw(type, value, traceback)
  File "/usr/local/airflow/venv/lib/python3.6/site-packages/urllib3/response.py", line 415, in _error_catcher
    raise ProtocolError('Connection broken: %r' % e, e)
urllib3.exceptions.ProtocolError: ('Connection broken: IncompleteRead(0 bytes read)', IncompleteRead(0 bytes read))
[2019-08-13 14:39:03,684] {kubernetes_executor.py:295} ERROR - Unknown error in KubernetesJobWatcher. Failing
Traceback (most recent call last):
  File "/usr/local/airflow/venv/lib/python3.6/site-packages/urllib3/response.py", line 639, in _update_chunk_length
    self.chunk_left = int(line, 16)
ValueError: invalid literal for int() with base 16: b''
  ...

    File "/usr/local/lib/python3.6/contextlib.py", line 99, in __exit__
        self.gen.throw(type, value, traceback)
      File "/usr/local/airflow/venv/lib/python3.6/site-packages/urllib3/response.py", line 415, in _error_catcher
        raise ProtocolError('Connection broken: %r' % e, e)
    urllib3.exceptions.ProtocolError: ('Connection broken: IncompleteRead(0 bytes read)', IncompleteRead(0 bytes read))
    ...
  File "/usr/local/airflow/venv/lib/python3.6/site-packages/MySQLdb/__init__.py", line 85, in Connect
    return Connection(*args, **kwargs)
  File "/usr/local/airflow/venv/lib/python3.6/site-packages/MySQLdb/connections.py", line 208, in __init__
    super(Connection, self).__init__(*args, **kwargs2)
_mysql_exceptions.OperationalError: (2006, "Unknown MySQL server host 'mysql' (111)")

I have little clue where to look, but if someone does, let me know and I will provide the relevant logs as required.

Environment: docker 19.03.1, kubernetes 1.14.3, airflow 1.10.3, mysql 5.7.

-- Anum Sheraz
airflow
airflow-scheduler
docker-desktop
kubernetes
mysql

2 Answers

8/19/2019

Airflow's default task parallelism is 32, so it tries to run all 22 pods at the same time, demanding more memory than my k8s cluster could bear. Reducing AIRFLOW__CORE__PARALLELISM in the Airflow config from 32 (the default) to 8 did the trick for me. And of course, for those whose business case demands running more tasks in parallel, increasing memory and scaling the cluster out is the option.
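For anyone applying the same fix, it is a one-line change, either in airflow.cfg or in the scheduler container's environment (the value 8 is just what worked for my cluster):

# airflow.cfg:
#   [core]
#   parallelism = 8
# or, equivalently, as an environment variable:
export AIRFLOW__CORE__PARALLELISM=8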

-- Anum Sheraz
Source: StackOverflow

8/19/2019

It looks like a resource issue.

You can try increasing CPU and memory in the Docker settings, and connect to the docker-desktop VM to check resource usage.

Alternatively, you can create a cluster with a master and one or two worker nodes in VirtualBox and try to run the tasks there. In that case the master node will not suffer from lack of resources, and the cluster should still be reachable. For example:
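As one concrete way to do that (a sketch; it assumes minikube and VirtualBox are installed, and the sizes are only illustrative):

# Spin up a VirtualBox-backed cluster with explicit resource limits
minikube start --vm-driver=virtualbox --cpus=2 --memory=4096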

Here is how to connect to docker-desktop for Mac.
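A minimal sketch of that connection, assuming the default Docker Desktop VM tty path on macOS:

# Attach to the docker-desktop VM's console (detach with Ctrl-A then D)
screen ~/Library/Containers/com.docker.docker/Data/vms/0/tty

# Inside the VM, check memory and CPU usage
free -m
top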

Please let me know if that helped.

-- OhHiMark
Source: StackOverflow