I am very new to Istio, so these may be simple questions, but I have several points of confusion. I am using Istio 1.8.0 on Kubernetes 1.19. Sorry for the multiple questions; I would appreciate any help clarifying the best approach.
After injecting Istio, I assumed I would no longer be able to reach one service from another directly inside a pod, but as you can see below, I can. Maybe I have misunderstood, but is this expected behaviour? Also, how can I verify whether the services talk to each other through the Envoy proxies with mTLS? I am using STRICT mode:

    $ kubectl get peerauthentication --all-namespaces
    NAMESPACE      NAME      AGE
    istio-system   default   26h

Should I also deploy a PeerAuthentication in the namespace where the microservices are running to prevent this? And how can I restrict traffic, for example so that the api-dev service cannot access auth-dev but can still access backend-dev?
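For the STRICT mode and the traffic restriction, this is the untested sketch I had in mind (the `app: auth-dev` label and the `api-dev` service-account name are my assumptions about our deployments):

    apiVersion: security.istio.io/v1beta1
    kind: PeerAuthentication
    metadata:
      name: default
      namespace: hub-dev
    spec:
      mtls:
        mode: STRICT        # enforce mTLS for all workloads in hub-dev
    ---
    # Deny api-dev -> auth-dev; all other traffic stays allowed.
    apiVersion: security.istio.io/v1beta1
    kind: AuthorizationPolicy
    metadata:
      name: deny-api-to-auth
      namespace: hub-dev
    spec:
      selector:
        matchLabels:
          app: auth-dev     # assumed pod label on the auth-dev deployment
      action: DENY
      rules:
      - from:
        - source:
            principals: ["cluster.local/ns/hub-dev/sa/api-dev"]

Is a DENY policy like this the recommended way, or is it better practice to switch to explicit ALLOW rules?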
Some of the microservices need to communicate with a database that runs in the database namespace. We also have some services using the same database that we do not want to inject with Istio. So, should the database also be deployed in the namespace that has Istio injection? If yes, does that mean I need to deploy another database instance for the rest of the services?
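From the docs I got the impression that, if I inject the database namespace as well, a PERMISSIVE PeerAuthentication scoped to that namespace would let the database accept mTLS from meshed clients and plain text from non-meshed ones, so a second instance would not be needed. Is something like this (untested) the right approach?

    apiVersion: security.istio.io/v1beta1
    kind: PeerAuthentication
    metadata:
      name: database-permissive
      namespace: database
    spec:
      mtls:
        mode: PERMISSIVE   # accept both mTLS and plain-text connections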
    $ kubectl get ns --show-labels
    NAME              STATUS   AGE    LABELS
    database          Active   317d   name=database
    hub-dev           Active   15h    istio-injection=enabled
    dev               Active   318d   name=dev
    capel0068340585:~ semural$ kubectl get pods -n hub-dev
    NAME                                     READY   STATUS    RESTARTS   AGE
    api-dev-5b9cdfc55c-ltgqz                  3/3     Running   0          117m
    auth-dev-54bd586cc9-l8jdn                 3/3     Running   0          13h
    backend-dev-6b86994697-2cxst              2/2     Running   0          120m
    cronjob-dev-7c599bf944-cw8ql              3/3     Running   0          137m
    mp-dev-59cb8d5589-w5mxc                   3/3     Running   0          117m
    ui-dev-5884478c7b-q8lnm                   2/2     Running   0          114m
    redis-hub-master-0                        2/2     Running   0           2m57s
    
    $ kubectl get svc -n hub-dev
    NAME                    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
    api-dev                ClusterIP   xxxxxxxxxxxxx      <none>        80/TCP    13h
    auth-dev               ClusterIP   xxxxxxxxxxxxx      <none>        80/TCP    13h
    backend-dev            ClusterIP   xxxxxxxxxxxxx      <none>        80/TCP    14h
    cronjob-dev            ClusterIP   xxxxxxxxxxxxx      <none>        80/TCP    14h
    mp-dev                 ClusterIP   xxxxxxxxxxxxx      <none>        80/TCP    13h
    ui-dev                 ClusterIP   xxxxxxxxxxxxx      <none>        80/TCP    13h
    redis-hub-master       ClusterIP   xxxxxxxxxxxxx      <none>        6379/TCP  3m47s
    
----------
    $ kubectl exec -ti ui-dev-5884478c7b-q8lnm -n hub-dev sh
    Defaulting container name to oneapihub-ui.
    Use 'kubectl describe pod/ui-dev-5884478c7b-q8lnm -n hub-dev' to see all of the containers in this pod.
    /usr/src/app $ curl -vv  http://backend-dev
    *   Trying 10.254.78.120:80...
    * TCP_NODELAY set
    * Connected to backend-dev (10.254.78.120) port 80 (#0)
    > GET / HTTP/1.1
    > Host: backend-dev
    > User-Agent: curl/7.67.0
    > Accept: */*
    >
    * Mark bundle as not supporting multiuse
    < HTTP/1.1 404 Not Found
    < content-security-policy: default-src 'self'
    <
    <!DOCTYPE html>
    <html lang="en">
    <head>
    <meta charset="utf-8">
    <title>Error</title>
    </head>
    <body>
    <pre>Cannot GET /</pre>
    </body>
    </html>
    * Connection #0 to host backend-dev left intact
    /usr/src/app $