I'm trying to install VerneMQ on a Kubernetes cluster on Oracle OCI using the Helm chart.
The Kubernetes infrastructure seems to be up and running, I can deploy my custom microservices without a problem.
I'm following the instructions from https://github.com/vernemq/docker-vernemq
Here are the steps:
helm install --name="broker" ./ (run from the helm/vernemq directory)
The output is:
NAME:   broker
LAST DEPLOYED: Fri Mar  1 11:07:37 2019
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/RoleBinding
NAME            AGE
broker-vernemq  1s
==> v1/Service
NAME                     TYPE       CLUSTER-IP    EXTERNAL-IP  PORT(S)   AGE
broker-vernemq-headless  ClusterIP  None          <none>       4369/TCP  1s
broker-vernemq           ClusterIP  10.96.120.32  <none>       1883/TCP  1s
==> v1/StatefulSet
NAME            DESIRED  CURRENT  AGE
broker-vernemq  3        1        1s
==> v1/Pod(related)
NAME              READY  STATUS             RESTARTS  AGE
broker-vernemq-0  0/1    ContainerCreating  0         1s
==> v1/ServiceAccount
NAME            SECRETS  AGE
broker-vernemq  1        1s
==> v1/Role
NAME            AGE
broker-vernemq  1s
NOTES:
1. Check your VerneMQ cluster status:
  kubectl exec --namespace default broker-vernemq-0 /usr/sbin/vmq-admin cluster show
2. Get VerneMQ MQTT port
  echo "Subscribe/publish MQTT messages there: 127.0.0.1:1883"
  kubectl port-forward svc/broker-vernemq 1883:1883
But when I run the check
kubectl exec --namespace default broker-vernemq-0 vmq-admin cluster show
I get:
Node 'VerneMQ@broker-vernemq-0..default.svc.cluster.local' not responding to pings.
command terminated with exit code 1
I think there is something wrong with the subdomain (the double dots with nothing between them).
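Just to double-check that suspicion, I split the node name from the error on dots in plain shell (nothing cluster-specific here). For a StatefulSet pod the second label should be the headless service name (`<pod>.<service>.<namespace>.svc.cluster.local`), but it comes out empty:

```shell
# Node name taken verbatim from the error message above
fqdn="VerneMQ@broker-vernemq-0..default.svc.cluster.local"
hostname="${fqdn#VerneMQ@}"          # strip the Erlang node-name prefix
second_label=$(echo "$hostname" | cut -d. -f2)
echo "second label: '$second_label'" # prints: second label: ''
```

So whatever builds the node name appears to leave the headless-service label blank.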
With this command
kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name | head -1) -c kubedns
the last log line is:
I0301 10:07:38.366826       1 dns.go:552] Could not find endpoints for service "broker-vernemq-headless" in namespace "default". DNS records will be created once endpoints show up.
I've also tried with this custom YAML:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  namespace: default
  name: vernemq
  labels:
    app: vernemq
spec:
  serviceName: vernemq
  replicas: 3
  selector:
    matchLabels:
      app: vernemq
  template:
    metadata:
      labels:
        app: vernemq
    spec:
      containers:
      - name: vernemq
        image: erlio/docker-vernemq:latest
        imagePullPolicy: Always
        ports:
          - containerPort: 1883
            name: mqtt
          - containerPort: 8883
            name: mqtts
          - containerPort: 4369
            name: epmd
        env:
        - name: DOCKER_VERNEMQ_KUBERNETES_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: DOCKER_VERNEMQ_ALLOW_ANONYMOUS
          value: "off"
        - name: DOCKER_VERNEMQ_DISCOVERY_KUBERNETES
          value: "1"
        - name: DOCKER_VERNEMQ_KUBERNETES_APP_LABEL
          value: "vernemq"
        - name: DOCKER_VERNEMQ_VMQ_PASSWD__PASSWORD_FILE
          value: "/etc/vernemq-passwd/vmq.passwd"
        volumeMounts:
          - name: vernemq-passwd
            mountPath: /etc/vernemq-passwd
            readOnly: true
      volumes:
      - name: vernemq-passwd
        secret:
          secretName: vernemq-passwd
---
apiVersion: v1
kind: Service
metadata:
  name: vernemq
  labels:
    app: vernemq
spec:
  clusterIP: None
  selector:
    app: vernemq
  ports:
  - port: 4369
    name: epmd
---
apiVersion: v1
kind: Service
metadata:
  name: mqtt
  labels:
    app: mqtt
spec:
  type: ClusterIP
  selector:
    app: vernemq
  ports:
  - port: 1883
    name: mqtt
---
apiVersion: v1
kind: Service
metadata:
  name: mqtts
  labels:
    app: mqtts
spec:
  type: LoadBalancer
  selector:
    app: vernemq
  ports:
  - port: 8883
    name: mqtts
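One guess on my side (not verified): since kube-dns says the DNS records are only created once endpoints show up, and the pods can't become ready until they've found each other, maybe the headless service needs to publish not-ready addresses so peer discovery can work during cluster formation. Something like this on the headless service:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: vernemq
  labels:
    app: vernemq
  annotations:
    # Older clusters use this annotation instead of the spec field below
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec:
  clusterIP: None
  # Publish pod DNS records even before the pods pass readiness,
  # so the VerneMQ nodes can resolve each other while clustering
  publishNotReadyAddresses: true
  selector:
    app: vernemq
  ports:
  - port: 4369
    name: epmd
```

But I'm not sure this is the actual cause of the empty subdomain label.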
Any suggestions?
Many thanks
Jack