A Quick Start to External Access for Istio Applications

Links and references that may be useful:

Introduction

The observability that Istio provides is very valuable for microservices. However, all sorts of problems tend to appear when setting up services under Istio. I ran into countless strange situations even while building a simple service, and every time I set up the environment the experience is roughly the same.

For that reason, I am writing down the process of building the environment this time. The application is an FFT service built on go iris, used to simulate a CPU-sensitive microservice component. The code has been uploaded to a GitHub repository; I hope it is useful to those who come later, especially beginners.
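
The repository code is not reproduced here, but purely as an illustration, a CPU-sensitive iris handler might look roughly like the sketch below. The /fft route, the n query parameter, and the naive O(n^2) DFT loop are my assumptions for illustration, not the actual code from the repository.

package main

import (
	"math"
	"strconv"

	"github.com/kataras/iris/v12"
)

// naiveDFT burns CPU by computing a direct O(n^2) discrete Fourier transform
// over a synthetic signal of length n and returning a checksum of the spectrum.
func naiveDFT(n int) float64 {
	signal := make([]float64, n)
	for i := range signal {
		signal[i] = math.Sin(float64(i))
	}
	var acc float64
	for k := 0; k < n; k++ {
		var re, im float64
		for t := 0; t < n; t++ {
			angle := 2 * math.Pi * float64(k*t) / float64(n)
			re += signal[t] * math.Cos(angle)
			im -= signal[t] * math.Sin(angle)
		}
		acc += math.Hypot(re, im)
	}
	return acc
}

func main() {
	app := iris.New()
	// /fft?n=1024 runs a CPU-heavy transform and returns the checksum as text.
	app.Get("/fft", func(ctx iris.Context) {
		n, err := strconv.Atoi(ctx.URLParamDefault("n", "1024"))
		if err != nil || n <= 0 {
			n = 1024
		}
		ctx.WriteString(strconv.FormatFloat(naiveDFT(n), 'f', 4, 64))
	})
	// Listens on 8080, matching the containerPort used in the Deployment below.
	app.Listen(":8080")
}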

Goals of this article:

  • Help readers understand what the ingress gateway does
  • Extend the original httpbin example into a custom service

ingressgateway

The ingress gateway is Istio's entry-point service; it performs load balancing and further traffic control. In Kubernetes its function is exposed through a Service, and its configuration can be dumped with kubectl get svc/istio-ingressgateway -n=istio-system -o=yaml.

apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"a      
pp":"istio-ingressgateway","install.operator.istio.io/owning-resource":"unknown","i
nstall.operator.istio.io/owning-resource-namespace":"istio-system","istio":"ingress
gateway","istio.io/rev":"default","operator.istio.io/component":"IngressGateways","
operator.istio.io/managed":"Reconcile","operator.istio.io/version":"1.11.2","releas
e":"istio"},"name":"istio-ingressgateway","namespace":"istio-system"},"spec":{"port
s":[{"name":"status-port","port":15021,"protocol":"TCP","targetPort":15021},{"name"
:"http2","port":80,"protocol":"TCP","targetPort":8080},{"name":"https","port":443,"
protocol":"TCP","targetPort":8443},{"name":"tcp","port":31400,"protocol":"TCP","tar
getPort":31400},{"name":"tls","port":15443,"protocol":"TCP","targetPort":15443}],"s
elector":{"app":"istio-ingressgateway","istio":"ingressgateway"},"type":"LoadBalanc
er"}}
  creationTimestamp: "2021-09-07T09:59:49Z"
  labels:
    app: istio-ingressgateway
    install.operator.istio.io/owning-resource: unknown
    install.operator.istio.io/owning-resource-namespace: istio-system
    istio: ingressgateway
    istio.io/rev: default
    operator.istio.io/component: IngressGateways
    operator.istio.io/managed: Reconcile
    operator.istio.io/version: 1.11.2
    release: istio
  name: istio-ingressgateway
  namespace: istio-system
  resourceVersion: "8269056"
  uid: 914b2deb-1018-4dfd-81ff-3f2d94db2f72
spec:
  allocateLoadBalancerNodePorts: true
  clusterIP: 10.110.100.72
  clusterIPs:
  - 10.110.100.72
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: status-port
    nodePort: 30479
    port: 15021
    protocol: TCP
    targetPort: 15021
  - name: http2
    nodePort: 30001
    port: 80
    protocol: TCP
    targetPort: 8080
  - name: https
    nodePort: 30477
    port: 443
    protocol: TCP
    targetPort: 8443
  - name: tcp
    nodePort: 30465
    port: 31400
    protocol: TCP
    targetPort: 31400
  - name: tls
    nodePort: 31772
    port: 15443
    protocol: TCP
    targetPort: 15443
  selector:
    app: istio-ingressgateway
    istio: ingressgateway
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer: {}

Although the Service is of type LoadBalancer, the lab is a local environment and no external load balancer (not even Nginx) was configured, so there is no external IP. In that case the ingress gateway's address can be obtained with export INGRESS_HOST=$(kubectl get po -l istio=ingressgateway -n istio-system -o jsonpath='{.items[0].status.hostIP}').

The main working port is http2, mapped 80->8080. Here it is exposed through its NodePort: export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
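
With both variables set, the service deployed later in this article should be reachable through the gateway roughly as follows. The /fft and /metrics paths come from the VirtualService defined below; treat this as a sketch rather than exact output.

export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT
# Traffic enters through the ingress gateway; /fft matches the prefix rule
curl -v "http://$GATEWAY_URL/fft"
# /metrics is routed by the exact-match rule and scraped by Prometheus
curl -s "http://$GATEWAY_URL/metrics" | head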

Application Deployment

Building the application is fairly simple, and the source code itself is not complicated. Once the image is packaged, the next step is to write the yaml file that deploys the application onto Kubernetes.

Things to note:

  • Make sure all resources are in the same namespace. This is easy to miss when copy-pasting
  • Because the names in the httpbin example are easy to mix up, I gave each resource a different name. Some fields must still correspond to each other. (Some yaml fields are arrays; for simplicity I omit the indices)
    • In the Deployment, spec.selector.matchLabels must match one of the labels in spec.template.metadata.labels
    • The Service's targetPort must equal the Deployment's spec.containers.ports.containerPort
    • The Service's port must equal the VirtualService's spec.http.route.destination.port.number, and the VirtualService's spec.http.route.destination.host must be the name of that Service (I deliberately gave everything else different names)
    • The Gateway's port must correspond to a port of the ingressgateway (here it corresponds to the http2 port)
apiVersion: v1
kind: Namespace
metadata:
  name: wty-istio
  labels:
    istio-injection: enabled 
---
apiVersion: v1
kind: Service
metadata:
  name: wtyfft-svc
  namespace: wty-istio
spec:
  selector:
    app: wtyfft-app
  ports:
    - protocol: TCP
      port: 8000
      targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wtyfft-deploy
  labels:
    app: wtyfft-deploy-label
  namespace: wty-istio
spec:
  replicas: 3
  selector:
    matchLabels:
      app: wtyfft-app
  template:
    metadata:
      annotations:
        prometheus.io/port: "8080"
        prometheus.io/scrape: "true"
        prometheus.io/path: "/metrics"
      labels:
        app: wtyfft-app
    spec:
      containers:
      - name: wtyfft-container-name
        image: registry.cn-hangzhou.aliyuncs.com/wtysos11/wty-fft:0.0.1
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080   
        resources:
          requests:
            memory: "128Mi"
            cpu: "100m"
          limits:
            memory: "256Mi"
            cpu: "200m"   
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: wtyfft-gateway
  namespace: wty-istio
spec:
  selector:
    istio: ingressgateway # use Istio default gateway implementation
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: wtyfft-vs
  namespace: wty-istio
spec:
  hosts:
  - "*"
  gateways:
  - wtyfft-gateway
  http:
  - match:
    - uri:
        prefix: /fft
    - uri:
        exact: "/metrics"
    route:
    - destination:
        host: wtyfft-svc
        port:
          number: 8000
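
Assuming the manifest above is saved as wtyfft.yaml (the filename is mine, not from the repository), it can be applied and sanity-checked roughly as follows:

kubectl apply -f wtyfft.yaml
# Each pod should show 2/2 containers once the sidecar is injected
kubectl get pods -n wty-istio
# The Service endpoints confirm the selector actually matched the pods
kubectl get endpoints wtyfft-svc -n wty-istio
# istioctl can flag mismatched Gateway/VirtualService configuration
istioctl analyze -n wty-istio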

Afterword

Once you look at the example, there is really nothing difficult about it. The original httpbin sample is quite a bit nicer than the bookinfo one, and I only made small changes on top of httpbin. Here's hoping the upcoming experiments go smoothly.
