When we set up a Kubernetes cluster on DigitalOcean, we ran into a very common service discovery issue: pods within the cluster could not reach the cluster's own public URLs. Requests would fail with timeouts, bad headers, or SSL errors. Checking the logs of our pods, this is what we saw:

dial tcp 104.248.104.145:443: connect: connection timed out","time":"2020-04-30T23:56:01Z"}
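
You can see the asymmetry that points at this bug by hitting the public URL from inside the cluster: the same URL that works from your laptop times out from a pod. A quick way to check, where app.makisu.co is a placeholder for whatever hostname your ingress serves:

$ kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl --command -- \
    curl -v --max-time 10 https://app.makisu.co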

We emailed DigitalOcean regarding this issue and they responded:

There is a known issue in the Kubernetes project for connections from within the cluster accessing public URLs to the cluster. When traffic that originates from a Kubernetes pod goes through the Load Balancer, the kube-proxy service intercepts the traffic and directs it to the service instead of letting it go out to the Load Balancer and back to the node.

The support engineer then asked whether our services could be resolved via internal DNS or the service's cluster IP. This didn't work for us, unfortunately, but it's worth trying first: if your service is named rails-http and lives in the makisu namespace, you can reach it at:

rails-http.makisu
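
The fully qualified form is rails-http.makisu.svc.cluster.local. As a sketch, assuming the service listens on port 80, you could test it from a throwaway pod:

$ kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl --command -- \
    curl http://rails-http.makisu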

You can double-check the DNS resolution itself by running a dnsutils pod, following the Debugging DNS Resolution article on the Kubernetes site:

$ kubectl apply -f https://k8s.io/examples/admin/dns/dnsutils.yaml
pod/dnsutils created

$ kubectl exec -ti dnsutils -- nslookup rails-http.makisu
Server:    10.0.0.10
Address 1: 10.0.0.10

Name:      rails-http.makisu
Address 1: 10.0.0.1
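
If internal DNS does work in your cluster, the application-side fix is pure configuration: point the in-cluster consumer at the service name instead of the public URL. A minimal sketch; the variable name, image, and container name here are illustrative:

containers:
  - name: worker
    image: makisu/worker:latest
    env:
      # Use the in-cluster service name so traffic never
      # hairpins through the Load Balancer.
      - name: RAILS_API_URL
        value: "http://rails-http.makisu.svc.cluster.local"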

The approach that did work for us was to patch the LoadBalancer service attached to our nginx-ingress-controller. Following the workaround given to us by the support engineer, we did this in two steps:

  1. Add an A record that points to the LoadBalancer's external IP address; the name can be anything you want.
  2. Once that domain name resolves to the LoadBalancer's external IP, add this annotation to the LoadBalancer service (the kubectl command after the snippet shows one way to apply it):
metadata:
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-hostname: "anything.makisu.co"
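
If you prefer not to edit the manifest by hand, the same annotation can be applied in place with kubectl. The service name and namespace below are assumptions; adjust them to match your ingress controller's LoadBalancer service:

$ kubectl annotate service nginx-ingress-controller \
    service.beta.kubernetes.io/do-loadbalancer-hostname="anything.makisu.co" \
    --namespace ingress-nginx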

When you apply this annotation, your in-cluster services should be able to access other in-cluster services through their public URLs exposed via your ingress.
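
To verify, repeat the in-cluster check from the top of the post; the request should now complete instead of timing out (again, app.makisu.co is a placeholder for your own hostname):

$ kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl --command -- \
    curl -sv --max-time 10 https://app.makisu.co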