Issues Retrieving Pod/Container Log Information via kubectl on Hosts with a Proxy Configured

Problem

There are issues retrieving pod/container log information via kubectl on hosts that have a proxy configured to reach the management plane.

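For example, fetching logs from a pod scheduled on an affected worker node may hang and eventually fail with a TLS timeout similar to the following (the pod name, node IP, and exact error text are illustrative):

    $ kubectl logs nginx-test-7c5b7bd6d4-abcde
    Error from server: Get https://10.128.0.12:10250/containerLogs/default/nginx-test-7c5b7bd6d4-abcde/nginx: net/http: TLS handshake timeout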

Environment

  • Platform9 Managed Kubernetes - All Versions

Cause

  • In scenarios where hosts are configured to connect to the Platform9 Cloud (Management Plane) via an HTTP/S proxy server, the PMK stack reads this proxy information from the host's environment variables (HTTP_PROXY, HTTPS_PROXY, NO_PROXY, http_proxy, https_proxy, no_proxy) as well as from the pf9-comms service configuration.
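For illustration, the effective proxy-related settings on such a host might look similar to the following; the proxy address, node IP, VIP, and CIDR values are placeholders:

    # Proxy used to reach the Platform9 management plane (placeholder address)
    HTTP_PROXY=http://proxy.example.com:3128
    HTTPS_PROXY=http://proxy.example.com:3128
    # no_proxy auto-populated by the PMK stack with localhost, the host IP,
    # the cluster VIP, and the container/service CIDRs (placeholder values)
    NO_PROXY=localhost,127.0.0.1,10.128.0.5,10.128.0.100,10.20.0.0/16,10.21.0.0/16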
  • Further, the PMK stack automatically adds local IPs, localhost, the cluster VIP, and the container and service CIDRs to the no_proxy variables, as seen above.
  • This results in the proxy configuration taking effect for the PMK stack without the no_proxy variable covering the IPs of the other nodes in the cluster. Because the proxy settings do not exclude the cluster hosts, when kubectl logs [pod] is executed, the kube-apiserver initiates a TLS connection to the kubelet on the worker node running the pod in question. That connection is routed through the HTTP proxy server, which blackholes the request, causing the TLS request to time out.

Resolution

  • Set the NO_PROXY and no_proxy environment variables on all the master nodes in the cluster to include the CIDR covering all of the hosts in the cluster.
  • To set the variables ephemerally, run the commands shown below. Note: these values will not persist across a reboot.
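A minimal sketch, assuming the cluster hosts fall within 10.128.0.0/16 (replace this with the CIDR of your own environment):

    # Append the host CIDR to the existing proxy exclusions (placeholder CIDR)
    export NO_PROXY="${NO_PROXY},10.128.0.0/16"
    export no_proxy="${no_proxy},10.128.0.0/16"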
  • To persist the values across host reboots, create a file under /etc/profile.d/ and set the values in it as shown below.
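For example, assuming a file named /etc/profile.d/no_proxy.sh (the file name is arbitrary) and the same placeholder CIDR:

    # /etc/profile.d/no_proxy.sh -- illustrative file name and CIDR
    export NO_PROXY="${NO_PROXY},10.128.0.0/16"
    export no_proxy="${no_proxy},10.128.0.0/16"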
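The new values can then be loaded into the current session and verified, for example:

    # Source the profile script and confirm the exclusions are present
    source /etc/profile.d/no_proxy.sh
    env | grep -i no_proxy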

Additional Information

Proxy Configuration to Access Hosts
