The Kubernetes executor was introduced in Apache Airflow 1.10.0. The Kubernetes executor creates a new pod for every task instance.

Example helm charts are available at scripts/ci/kubernetes/kube/.yaml in the source distribution. The volumes are optional and depend on your configuration. There are two volumes available:

Dags:
- By storing dags on a persistent disk, they are made available to all workers.
- Another option is to use git-sync: before starting the container, a git pull of the dags repository is performed and used throughout the lifecycle of the pod.

Logs:
- By storing logs on a persistent disk, the files are accessible by workers and the webserver. If you don't configure this, the logs are lost after the worker pod shuts down.
- Another option is to use S3/GCS/etc. to store logs.

The KubernetesPodOperator and its supporting classes live in the contrib packages:

```python
from airflow.contrib.operators.kubernetes_pod_operator import KubernetesPodOperator
from airflow.contrib.kubernetes.secret import Secret
from airflow.contrib.kubernetes.volume import Volume
from airflow.contrib.kubernetes.volume_mount import VolumeMount
from airflow.contrib.kubernetes.pod import Port
```

A Secret can be delivered to the pod as a mounted file, as a single environment variable, or as environment variables for all keys of a Kubernetes secret:

```python
secret_file = Secret('volume', '/etc/sql_conn', 'airflow-secrets', 'sql_alchemy_conn')
secret_env = Secret('env', 'SQL_CONN', 'airflow-secrets', 'sql_alchemy_conn')
secret_all_keys = Secret('env', None, 'airflow-secrets-2')
```

A volume is built from a volume configuration and attached through a VolumeMount('test-volume', ...); ConfigMaps are passed by name through a configmaps list, and pod placement is controlled with an affinity dictionary using the standard Kubernetes keys 'preferredDuringSchedulingIgnoredDuringExecution' and 'requiredDuringSchedulingIgnoredDuringExecution':

```python
volume = Volume(name='test-volume', configs=volume_config)
```

The task itself is created with a call along the lines of k = KubernetesPodOperator(namespace='default', ...); a fuller sketch appears at the end of this post. See airflow.contrib.operators.kubernetes_pod_operator.KubernetesPodOperator for the full list of arguments.

Pod Mutation Hook

Your local Airflow settings file can define a pod_mutation_hook function that has the ability to mutate pod objects before sending them to the Kubernetes client for scheduling.
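As a rough illustration of the hook, here is a minimal sketch of an airflow_local_settings.py. It assumes the contrib-era Pod object whose labels attribute is a plain dict; the label name used here is an arbitrary example, not something Airflow requires.

```python
# airflow_local_settings.py -- picked up automatically when it is on the PYTHONPATH.
def pod_mutation_hook(pod):
    # Attach an extra label to every pod Airflow launches.
    # 'launched-by' is an illustrative label name, not an Airflow convention.
    pod.labels = pod.labels or {}
    pod.labels['launched-by'] = 'airflow'
```

Finally, putting the operator pieces from above together: the sketch below shows roughly what a complete task definition could look like on Airflow 1.10, reusing the imports and Secret objects shown earlier. The DAG settings, image, commands, mount path, configmap names and affinity values are illustrative assumptions, not values taken from the original example.

```python
from datetime import datetime
from airflow import DAG

dag = DAG('kubernetes_sample', start_date=datetime(2019, 1, 1), schedule_interval=None)

# Volume backed by an (assumed) pre-existing PersistentVolumeClaim named 'test-volume'.
volume_config = {'persistentVolumeClaim': {'claimName': 'test-volume'}}
volume = Volume(name='test-volume', configs=volume_config)
volume_mount = VolumeMount('test-volume',
                           mount_path='/root/mount_file',  # illustrative path
                           sub_path=None,
                           read_only=True)

port = Port('http', 80)
configmaps = ['test-configmap-1', 'test-configmap-2']  # assumed ConfigMap names

# Standard Kubernetes affinity syntax; the disktype/ssd rule is only an example.
affinity = {
    'nodeAffinity': {
        'preferredDuringSchedulingIgnoredDuringExecution': [{
            'weight': 1,
            'preference': {
                'matchExpressions': [
                    {'key': 'disktype', 'operator': 'In', 'values': ['ssd']}
                ]
            },
        }]
    }
}

k = KubernetesPodOperator(namespace='default',
                          image='ubuntu:16.04',
                          cmds=['bash', '-cx'],
                          arguments=['echo', '10'],
                          labels={'foo': 'bar'},
                          secrets=[secret_file, secret_env, secret_all_keys],
                          ports=[port],
                          volumes=[volume],
                          volume_mounts=[volume_mount],
                          configmaps=configmaps,
                          affinity=affinity,
                          name='airflow-test-pod',
                          task_id='run_pod',
                          get_logs=True,
                          dag=dag)
```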