# Issues Retrieving Pod/Container Logs via kubectl on Hosts with a Proxy Configured

## Problem

Retrieving pod/container log information via kubectl fails with a TLS handshake timeout on hosts that are configured to reach the management plane through a proxy:

{% tabs %}
{% tab title="None" %}

```none
$ kubectl logs speaker-fghuy -n metallb-system
Error from server: Get https://10.89.436.54:10250/containerLogs/metallb-system/speaker-fghuy/speaker: net/http: TLS handshake timeout
```

{% endtab %}
{% endtabs %}

## Environment

* Platform9 Managed Kubernetes - All Versions

## Cause

* When hosts are configured to connect to the Platform9 Cloud (Management Plane) through an HTTP/S proxy server, the PMK stack reads the proxy configuration from the host's environment variables (*HTTP\_PROXY*, *HTTPS\_PROXY*, *NO\_PROXY*, *http\_proxy*, *https\_proxy*, *no\_proxy*) as well as from the pf9-comms service configuration.

{% tabs %}
{% tab title="None" %}

```none
pf9@worker25:/etc/pf9/kube.d$ cat master.yaml | grep proxy
value: http://proxyname.platform9cloud.net:3128
value: http://proxyname.platform9cloud.net:3128
- name: http_proxy
  value: http://proxyname.platform9cloud.net:3128
- name: https_proxy
  value: http://proxyname.platform9cloud.net:3128
- name: no_proxy
- --requestheader-allowed-names=aggregator,kubelet,admin,kube-proxy
- --proxy-client-cert-file=/srv/kubernetes/certs/aggregator/request.crt
- --proxy-client-key-file=/srv/kubernetes/certs/aggregator/request.key
value: http://proxyname.platform9cloud.net:3128
value: http://proxyname.platform9cloud.net:3128
- name: http_proxy
  value: http://proxyname.platform9cloud.net:3128
- name: https_proxy
  value: http://proxyname.platform9cloud.net:3128
- name: no_proxy
```

{% endtab %}
{% endtabs %}

* Further, the PMK stack automatically adds the local IPs, localhost, the VIP, and the container and service CIDRs to the *no\_proxy* variables, as seen above.
* However, the *no\_proxy* variable is not populated with the IPs of the other nodes in the cluster, so the proxy configuration takes effect for node-to-node traffic within the PMK stack. When *kubectl logs \[pod]* is executed, the *kube-apiserver* initiates a TLS connection to the kubelet on the worker node running the pod in question. Because that node is not excluded from proxying, the connection is routed through the HTTP proxy server, which blackholes the request and causes the TLS handshake to time out.
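To confirm this on a master node, the proxy-related environment can be inspected with a short shell snippet. This is a diagnostic sketch; the CIDR value is a placeholder (assumption), not taken from this article:

```shell
# Placeholder (assumption): replace with the CIDR covering your cluster nodes.
NODE_CIDR="10.89.0.0/16"

# Show every proxy-related variable the PMK stack would pick up.
env | grep -i 'proxy' || echo "no proxy variables set"

# Report whether the node CIDR is already excluded from proxying.
case ",${no_proxy:-}${NO_PROXY:+,$NO_PROXY}," in
  *",${NODE_CIDR},"*) echo "node CIDR already in no_proxy" ;;
  *)                  echo "node CIDR missing from no_proxy" ;;
esac
```

If the node CIDR is reported missing, apiserver-to-kubelet traffic will be sent to the proxy, matching the symptom above.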

## Resolution

* Set the *NO\_PROXY* and *no\_proxy* environment variables on all master nodes in the cluster to include the CIDR that covers all of the hosts in the cluster.
* To set them ephemerally, run the commands below. Note: these values will not persist across reboots or new shell sessions.

{% tabs %}
{% tab title="None" %}

```none
export NO_PROXY=a.b.c.d/mask
export no_proxy=a.b.c.d/mask
```

{% endtab %}
{% endtabs %}
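Since the PMK stack already populates *no\_proxy* with the VIP and cluster CIDRs, overwriting the variable outright would discard those entries. One way to append instead is sketched below, again with a placeholder CIDR (assumption):

```shell
# Placeholder (assumption): the CIDR covering all hosts in the cluster.
NODE_CIDR="10.89.0.0/16"

# Append to any existing value rather than replacing it outright.
export NO_PROXY="${NO_PROXY:+${NO_PROXY},}${NODE_CIDR}"
export no_proxy="${no_proxy:+${no_proxy},}${NODE_CIDR}"

echo "NO_PROXY=$NO_PROXY"
```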

* To persist the values across host reboots, create a script under */etc/profile.d/* and set the values in it, as shown below.

{% tabs %}
{% tab title="None" %}

```none
$ sudo touch /etc/profile.d/http_proxy.sh
$ sudo chmod u+x /etc/profile.d/http_proxy.sh
```

{% endtab %}
{% endtabs %}

{% tabs %}
{% tab title="None" %}

```none
$ cat /etc/profile.d/http_proxy.sh
export NO_PROXY=a.b.c.d/mask
export no_proxy=a.b.c.d/mask
```

{% endtab %}
{% endtabs %}
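The two steps above can be combined into a single sketch. To keep it safe to try, it defaults to a temporary file; on a real master node, TARGET would be */etc/profile.d/http_proxy.sh* written with sudo, and the CIDR is a placeholder (assumption):

```shell
# On a master node, set TARGET=/etc/profile.d/http_proxy.sh (requires sudo);
# the default here is a temp file so the sketch can run unprivileged.
TARGET="${TARGET:-$(mktemp -d)/http_proxy.sh}"
NODE_CIDR="10.89.0.0/16"   # placeholder (assumption): your cluster host CIDR

cat > "$TARGET" <<EOF
export NO_PROXY=${NODE_CIDR}
export no_proxy=${NODE_CIDR}
EOF
chmod u+x "$TARGET"
cat "$TARGET"
```

Because */etc/profile.d/* scripts run at login-shell startup, the new values apply to sessions opened after the file is created.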

## Additional Information

[Proxy Configuration to Access Hosts](https://support.platform9.com/hc/en-us/articles/360022481233)
