Deploying a JMeter Cluster on Kubernetes


Prerequisites

Deployment topology

Docker images

Building the Docker images

Deployment manifests

Kubernetes deployment

Deploying the components

Deployment manifests

Main execution script

jmeter_slaves

jmeter_master

influxdb

grafana

Initializing the dashboard

Start the dashboard script

Deployment manifests

Starting the test

Run the script

Deployment manifests

Summary

Prerequisites

      Kubernetes > 1.16
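
A quick way to confirm the cluster version before starting (the deploy script below runs the same check):

$ kubectl version --short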

Deployment topology

Tests are launched from the master node: the master sends the test script to the corresponding slave nodes, and the slave pods/nodes do the actual load generation.
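
Under the hood this is ordinary JMeter distributed testing; a sketch of the command the master effectively runs (the slave IPs here are placeholders, filled in automatically by the scripts below):

# Non-GUI mode: -n no GUI, -t test plan, -R comma-separated list of slave IPs
jmeter -n -t web-test.jmx -R 10.100.113.31,10.100.167.173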

Deployment file inventory:

jmeter_cluster_create.sh — asks for a unique namespace, then creates that namespace and all of the components (jmeter master, slaves, influxdb and grafana).

Note: before launching, set the number of replicas for the slave servers in the jmeter_slaves_deploy.yaml file; the replica count should normally match the number of worker nodes you have. It can also be adjusted on a running cluster, as shown below.
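
A minimal sketch of adjusting it after deployment (the namespace and replica count are examples):

# Scale the slave Deployment, e.g. to three replicas
$ kubectl scale deployment jmeter-slaves --replicas=3 -n 7dgroup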

jmeter_master_configmap.yaml — application configuration for the Jmeter master.

jmeter_master_deploy.yaml — deployment manifest for the Jmeter master.

jmeter_slaves_deploy.yaml — deployment manifest for the Jmeter slaves.

jmeter_slaves_svc.yaml — service manifest for the Jmeter slaves. It uses a headless service, which lets us obtain the Jmeter slave pod IP addresses directly, with no need for DNS records or round-robin lookups. This file exists to make it easy to feed the slave pod IP addresses straight to the Jmeter master.
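
Because the service is headless (clusterIP: None), one DNS lookup of the service name returns every slave pod IP. A sketch of the lookup the master-side script relies on (run from inside a pod in the same namespace):

# Resolve all slave pod IPs through the headless service
getent ahostsv4 jmeter-slaves-svc | cut -d' ' -f1 | sort -u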

jmeter_influxdb_configmap.yaml — application configuration for the influxdb deployment. It configures influxdb to expose port 2003 in addition to the default influxdb ports in order to support the graphite storage protocol, so a single influxdb deployment can back both of Jmeter's backend listener methods (graphite and influxdb).

jmeter_influxdb_deploy.yaml — deployment manifest for influxdb.

jmeter_influxdb_svc.yaml — service manifest for influxdb.

jmeter_grafana_deploy.yaml — deployment manifest for grafana.

jmeter_grafana_svc.yaml — service manifest for the grafana deployment. It uses NodePort by default; if you run this in a public cloud you can change it to LoadBalancer (and can set a CNAME to shorten the name with an FQDN).
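
For example, on a public cloud the type can be switched on a live service; a sketch assuming the 7dgroup namespace used later in this article:

# Change the grafana service from NodePort to a cloud LoadBalancer
$ kubectl patch svc jmeter-grafana -n 7dgroup -p '{"spec": {"type": "LoadBalancer"}}'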

dashboard.sh — automatically creates the following:

(1) an influxdb database (jmeter) in the influxdb pod

(2) a data source (jmeterdb) in grafana

start_test.sh — runs a Jmeter test script automatically, without manually logging in to the Jmeter master shell; it asks for the location of the Jmeter test script, copies it into the Jmeter master pod, and starts the test against the Jmeter slaves.

jmeter_stop.sh — stops a running test.

GrafanaJMeterTemplate.json — a pre-built Jmeter grafana dashboard.

Dockerfile-base — builds the Jmeter base image.

Dockerfile-master — builds the Jmeter master image.

Dockerfile-slave — builds the Jmeter slave image.

Dockerimages.sh — builds the docker images in one batch.

Docker images

Building the Docker images

Run the script to build the images:

./Dockerimages.sh

List the images:

      $ docker images

Push the images to the registry:

$ sudo docker login --username=xxxx registry.cn-beijing.aliyuncs.com
$ sudo docker tag [ImageId] registry.cn-beijing.aliyuncs.com/7d/jmeter-base:[image tag]
$ sudo docker push registry.cn-beijing.aliyuncs.com/7d/jmeter-base:[image tag]

Deployment manifests

Dockerfile-base (builds the Jmeter base image):

FROM alpine:latest
LABEL MAINTAINER 7DGroup
ARG JMETER_VERSION=5.2.1
# Time zone argument
ENV TZ=Asia/Shanghai

RUN apk update && \
    apk upgrade && \
    apk add --update openjdk8-jre wget tar bash unzip && \
    mkdir /jmeter && cd /jmeter/ && \
    wget https://mirrors.tuna.tsinghua.edu.cn/apache/jmeter/binaries/apache-jmeter-${JMETER_VERSION}.tgz && \
    tar -xzf apache-jmeter-$JMETER_VERSION.tgz && rm apache-jmeter-$JMETER_VERSION.tgz && \
    cd /jmeter/apache-jmeter-$JMETER_VERSION/ && \
    wget -q -O /tmp/JMeterPlugins-Standard-1.4.0.zip https://jmeter-plugins.org/downloads/file/JMeterPlugins-Standard-1.4.0.zip && \
    unzip -n /tmp/JMeterPlugins-Standard-1.4.0.zip && rm /tmp/JMeterPlugins-Standard-1.4.0.zip && \
    wget -q -O /jmeter/apache-jmeter-$JMETER_VERSION/lib/ext/pepper-box-1.0.jar https://github.com/raladev/load/blob/master/JARs/pepper-box-1.0.jar?raw=true && \
    cd /jmeter/apache-jmeter-$JMETER_VERSION/ && \
    wget -q -O /tmp/bzm-parallel-0.7.zip https://jmeter-plugins.org/files/packages/bzm-parallel-0.7.zip && \
    unzip -n /tmp/bzm-parallel-0.7.zip && rm /tmp/bzm-parallel-0.7.zip && \
    ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo "$TZ" > /etc/timezone

ENV JMETER_HOME /jmeter/apache-jmeter-$JMETER_VERSION/
ENV PATH $JMETER_HOME/bin:$PATH

# Main JMeter configuration file
ADD jmeter.properties $JMETER_HOME/bin/jmeter.properties

Dockerfile-master (builds the Jmeter master image):

FROM registry.cn-beijing.aliyuncs.com/7d/jmeter-base:latest
MAINTAINER 7DGroup
EXPOSE 60000

Dockerfile-slave (builds the Jmeter slave image):

FROM registry.cn-beijing.aliyuncs.com/7d/jmeter-base:latest
MAINTAINER 7DGroup

EXPOSE 1099 50000

ENTRYPOINT $JMETER_HOME/bin/jmeter-server \
    -Dserver.rmi.localport=50000 \
    -Dserver_port=1099 \
    -Jserver.rmi.ssl.disable=true

Dockerimages.sh (builds the docker images in one batch):

#!/bin/bash -e
docker build --tag="registry.cn-beijing.aliyuncs.com/7d/jmeter-base:latest" -f Dockerfile-base .
docker build --tag="registry.cn-beijing.aliyuncs.com/7d/jmeter-master:latest" -f Dockerfile-master .
docker build --tag="registry.cn-beijing.aliyuncs.com/7d/jmeter-slave:latest" -f Dockerfile-slave .

Kubernetes deployment

Deploying the components

Run jmeter_cluster_create.sh and enter a unique namespace:

      ./jmeter_cluster_create.sh

Wait a moment, then check that the pods are up:

$ kubectl get pods -n 7dgroup
NAME                               READY   STATUS    RESTARTS   AGE
influxdb-jmeter-584cf69759-j5m85   1/1     Running   2          5m
jmeter-grafana-6d5b75b7f6-57dxj    1/1     Running   1          5m
jmeter-master-84bfd5d96d-kthzm     1/1     Running   0          5m
jmeter-slaves-b5b75757-dxkxz       1/1     Running   0          5m
jmeter-slaves-b5b75757-n58jw       1/1     Running   0          5m

Deployment manifests

jmeter_cluster_create.sh (creates the namespace and all of the components: jmeter master, slaves, influxdb and grafana):

#!/usr/bin/env bash
#Create multiple Jmeter namespaces on an existing kubernetes cluster
#Started On January 23, 2018

working_dir=`pwd`

echo "checking if kubectl is present"
if ! hash kubectl 2>/dev/null
then
    echo "'kubectl' was not found in PATH"
    echo "Kindly ensure that you can access an existing kubernetes cluster via kubectl"
    exit
fi

kubectl version --short

echo "Current list of namespaces on the kubernetes cluster:"
echo
kubectl get namespaces | grep -v NAME | awk '{print $1}'
echo
echo "Enter the name of the new tenant unique name, this will be used to create the namespace"
read tenant
echo

#Check if namespace exists
kubectl get namespace $tenant > /dev/null 2>&1
if [ $? -eq 0 ]
then
    echo "Namespace $tenant already exists, please select a unique name"
    echo "Current list of namespaces on the kubernetes cluster"
    sleep 2
    kubectl get namespaces | grep -v NAME | awk '{print $1}'
    exit 1
fi

echo
echo "Creating Namespace: $tenant"
kubectl create namespace $tenant
echo "Namespace $tenant has been created"
echo
echo "Creating Jmeter slave nodes"
nodes=`kubectl get no | egrep -v "master|NAME" | wc -l`
echo
echo "Number of worker nodes on this cluster is " $nodes
echo
#echo "Creating $nodes Jmeter slave replicas and service"
echo
kubectl create -n $tenant -f $working_dir/jmeter_slaves_deploy.yaml
kubectl create -n $tenant -f $working_dir/jmeter_slaves_svc.yaml

echo "Creating Jmeter Master"
kubectl create -n $tenant -f $working_dir/jmeter_master_configmap.yaml
kubectl create -n $tenant -f $working_dir/jmeter_master_deploy.yaml

echo "Creating Influxdb and the service"
kubectl create -n $tenant -f $working_dir/jmeter_influxdb_configmap.yaml
kubectl create -n $tenant -f $working_dir/jmeter_influxdb_deploy.yaml
kubectl create -n $tenant -f $working_dir/jmeter_influxdb_svc.yaml

echo "Creating Grafana Deployment"
kubectl create -n $tenant -f $working_dir/jmeter_grafana_deploy.yaml
kubectl create -n $tenant -f $working_dir/jmeter_grafana_svc.yaml

echo "Printout of the $tenant objects"
echo
kubectl get -n $tenant all
echo
echo namespace = $tenant > $working_dir/tenant_export

jmeter_slaves_deploy.yaml (deployment manifest for the Jmeter slaves):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: jmeter-slaves
  labels:
    jmeter_mode: slave
spec:
  replicas: 2
  selector:
    matchLabels:
      jmeter_mode: slave
  template:
    metadata:
      labels:
        jmeter_mode: slave
    spec:
      containers:
      - name: jmslave
        image: registry.cn-beijing.aliyuncs.com/7d/jmeter-slave:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 1099
        - containerPort: 50000
        resources:
          limits:
            cpu: 4000m
            memory: 4Gi
          requests:
            cpu: 500m
            memory: 512Mi

jmeter_slaves_svc.yaml (service manifest for the Jmeter slaves):

apiVersion: v1
kind: Service
metadata:
  name: jmeter-slaves-svc
  labels:
    jmeter_mode: slave
spec:
  clusterIP: None
  ports:
  - port: 1099
    name: first
    targetPort: 1099
  - port: 50000
    name: second
    targetPort: 50000

jmeter_master_configmap.yaml (application configuration for the Jmeter master):

apiVersion: v1
kind: ConfigMap
metadata:
  name: jmeter-load-test
  labels:
    app: influxdb-jmeter
data:
  load_test: |
    #!/bin/bash
    #Script created to invoke jmeter test script with the slave POD IP addresses
    #Script should be run like: ./load_test "path to the test script in jmx format"
    /jmeter/apache-jmeter-*/bin/jmeter -n -t $1 -R `getent ahostsv4 jmeter-slaves-svc | cut -d' ' -f1 | sort -u | awk -v ORS=, '{print $1}' | sed 's/,$//'`

jmeter_master_deploy.yaml (deployment manifest for the Jmeter master):

apiVersion: apps/v1  # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: jmeter-master
  labels:
    jmeter_mode: master
spec:
  replicas: 1
  selector:
    matchLabels:
      jmeter_mode: master
  template:
    metadata:
      labels:
        jmeter_mode: master
    spec:
      containers:
      - name: jmmaster
        image: registry.cn-beijing.aliyuncs.com/7d/jmeter-master:latest
        imagePullPolicy: IfNotPresent
        command: [ "/bin/bash", "-c", "--" ]
        args: [ "while true; do sleep 30; done;" ]
        volumeMounts:
        - name: loadtest
          mountPath: /load_test
          subPath: "load_test"
        ports:
        - containerPort: 60000
        resources:
          limits:
            cpu: 4000m
            memory: 4Gi
          requests:
            cpu: 500m
            memory: 512Mi
      volumes:
      - name: loadtest
        configMap:
          name: jmeter-load-test

jmeter_influxdb_configmap.yaml (application configuration for influxdb):

apiVersion: v1
kind: ConfigMap
metadata:
  name: influxdb-config
  labels:
    app: influxdb-jmeter
data:
  influxdb.conf: |
    [meta]
      dir = "/var/lib/influxdb/meta"

    [data]
      dir = "/var/lib/influxdb/data"
      engine = "tsm1"
      wal-dir = "/var/lib/influxdb/wal"

    # Configure the graphite api
    [[graphite]]
      enabled = true
      bind-address = ":2003"  # If not set, is actually set to bind-address.
      database = "jmeter"     # store graphite data in this database

jmeter_influxdb_deploy.yaml (deployment manifest for influxdb):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: influxdb-jmeter
  labels:
    app: influxdb-jmeter
spec:
  replicas: 1
  selector:
    matchLabels:
      app: influxdb-jmeter
  template:
    metadata:
      labels:
        app: influxdb-jmeter
    spec:
      containers:
      - image: influxdb
        imagePullPolicy: IfNotPresent
        name: influxdb
        volumeMounts:
        - name: config-volume
          mountPath: /etc/influxdb
        ports:
        - containerPort: 8083
          name: influx
        - containerPort: 8086
          name: api
        - containerPort: 2003
          name: graphite
      volumes:
      - name: config-volume
        configMap:
          name: influxdb-config

jmeter_influxdb_svc.yaml (service manifest for influxdb):

apiVersion: v1
kind: Service
metadata:
  name: jmeter-influxdb
  labels:
    app: influxdb-jmeter
spec:
  ports:
  - port: 8083
    name: http
    targetPort: 8083
  - port: 8086
    name: api
    targetPort: 8086
  - port: 2003
    name: graphite
    targetPort: 2003
  selector:
    app: influxdb-jmeter

jmeter_grafana_deploy.yaml (deployment manifest for grafana):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: jmeter-grafana
  labels:
    app: jmeter-grafana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jmeter-grafana
  template:
    metadata:
      labels:
        app: jmeter-grafana
    spec:
      containers:
      - name: grafana
        image: grafana/grafana:5.2.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 3000
          protocol: TCP
        env:
        - name: GF_AUTH_BASIC_ENABLED
          value: "true"
        - name: GF_USERS_ALLOW_ORG_CREATE
          value: "true"
        - name: GF_AUTH_ANONYMOUS_ENABLED
          value: "true"
        - name: GF_AUTH_ANONYMOUS_ORG_ROLE
          value: Admin
        - name: GF_SERVER_ROOT_URL
          # If you're only using the API Server proxy, set this value instead:
          # value: /api/v1/namespaces/kube-system/services/monitoring-grafana/proxy
          value: /

jmeter_grafana_svc.yaml (service manifest for grafana):

apiVersion: v1
kind: Service
metadata:
  name: jmeter-grafana
  labels:
    app: jmeter-grafana
spec:
  ports:
  - port: 3000
    targetPort: 3000
  selector:
    app: jmeter-grafana
  type: NodePort
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/service-weight: 'jmeter-grafana: 100'
  name: jmeter-grafana-ingress
spec:
  rules:
  # Layer-7 host name
  - host: grafana-jmeter.7d.com
    http:
      paths:
      # Context path
      - path: /
        backend:
          serviceName: jmeter-grafana
          servicePort: 3000

Initializing the dashboard

Start the dashboard script:

      $ ./dashboard.sh

Check that the services were deployed:

$ kubectl get svc -n 7dgroup
NAME                TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
jmeter-grafana      NodePort    10.96.6.201    <none>        3000:31801/TCP               10m
jmeter-influxdb     ClusterIP   10.96.111.60   <none>        8083/TCP,8086/TCP,2003/TCP   10m
jmeter-slaves-svc   ClusterIP   None           <none>        1099/TCP,50000/TCP           10m

We can now reach grafana at http://<any node IP>:31801/.
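
To find a usable node IP (the INTERNAL-IP or EXTERNAL-IP column, depending on your network):

$ kubectl get nodes -o wide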

Finally, we import the dashboard template into grafana:

If you don't like this template, you can also import the popular community template 5496.
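
The bundled GrafanaJMeterTemplate.json can also be imported from the command line instead of through the UI; a sketch, assuming the admin credentials from the deployment above and that the template JSON sits in the current directory (exported dashboard JSON may need minor edits, e.g. removing the "id" field):

# Wrap the template in the payload expected by grafana's /api/dashboards/db endpoint
$ curl "http://admin:admin@<node-ip>:31801/api/dashboards/db" -X POST \
    -H 'Content-Type: application/json' \
    --data-binary "{\"dashboard\": $(cat GrafanaJMeterTemplate.json), \"overwrite\": true}"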

Deployment manifests

dashboard.sh (automatically creates the following):

(1) an influxdb database (jmeter) in the influxdb pod

(2) a data source (jmeterdb) in grafana

#!/usr/bin/env bash

working_dir=`pwd`

#Get namespace variable
tenant=`awk '{print $NF}' $working_dir/tenant_export`

## Create jmeter database automatically in Influxdb
echo "Creating Influxdb jmeter Database"

##Wait until Influxdb Deployment is up and running
##influxdb_status=`kubectl get po -n $tenant | grep influxdb-jmeter | awk '{print $2}' | grep Running`

influxdb_pod=`kubectl get po -n $tenant | grep influxdb-jmeter | awk '{print $1}'`
kubectl exec -ti -n $tenant $influxdb_pod -- influx -execute 'CREATE DATABASE jmeter'

## Create the influxdb datasource in Grafana
echo "Creating the Influxdb data source"
grafana_pod=`kubectl get po -n $tenant | grep jmeter-grafana | awk '{print $1}'`

## Make load test script in Jmeter master pod executable
#Get Master pod details
master_pod=`kubectl get po -n $tenant | grep jmeter-master | awk '{print $1}'`
kubectl exec -ti -n $tenant $master_pod -- cp -r /load_test /jmeter/load_test
kubectl exec -ti -n $tenant $master_pod -- chmod 755 /jmeter/load_test

##kubectl cp $working_dir/influxdb-jmeter-datasource.json -n $tenant $grafana_pod:/influxdb-jmeter-datasource.json
kubectl exec -ti -n $tenant $grafana_pod -- curl 'http://admin:admin@127.0.0.1:3000/api/datasources' -X POST -H 'Content-Type: application/json;charset=UTF-8' --data-binary '{"name":"jmeterdb","type":"influxdb","url":"http://jmeter-influxdb:8086","access":"proxy","isDefault":true,"database":"jmeter","user":"admin","password":"admin"}'

Starting the test

Run the script:

$ ./start_test.sh

A test script is required; this example uses web-test.jmx:

$ ./start_test.sh
Enter path to the jmx file web-test.jmx
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/jmeter/apache-jmeter-5.0/lib/log4j-slf4j-impl-2.11.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/jmeter/apache-jmeter-5.0/lib/ext/pepper-box-1.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Jul 25, 2020 11:30:58 AM java.util.prefs.FileSystemPreferences$1 run
INFO: Created user preferences directory.
Creating summariser <summary>

Created the tree successfully using web-test.jmx
Configuring remote engine: 10.100.113.31
Configuring remote engine: 10.100.167.173
Starting remote engines
Starting the test @ Sat Jul 25 11:30:59 UTC 2020 (1595676659540)
Remote engines have been started
Waiting for possible Shutdown/StopTestNow/Heapdump message on port 4445
summary +    803 in 00:00:29 =   27.5/s Avg:   350 Min:   172 Max:  1477 Err:     0 (0.00%) Active: 40 Started: 40 Finished: 0
summary +   1300 in 00:00:29 =   45.3/s Avg:   367 Min:   172 Max:  2729 Err:     0 (0.00%) Active: 40 Started: 40 Finished: 0
summary =   2103 in 00:00:58 =   36.4/s Avg:   361 Min:   172 Max:  2729 Err:     0 (0.00%)
summary +   1400 in 00:00:31 =   45.4/s Avg:   342 Min:   160 Max:  2145 Err:     0 (0.00%) Active: 40 Started: 40 Finished: 0
summary =   3503 in 00:01:29 =   39.5/s Avg:   353 Min:   160 Max:  2729 Err:     0 (0.00%)
summary +   1400 in 00:00:31 =   45.2/s Avg:   352 Min:   169 Max:  2398 Err:     0 (0.00%) Active: 40 Started: 40 Finished: 0
summary =   4903 in 00:02:00 =   41.0/s Avg:   353 Min:   160 Max:  2729 Err:     0 (0.00%)
summary +   1400 in 00:00:30 =   46.8/s Avg:   344 Min:   151 Max:  1475 Err:     0 (0.00%) Active: 40 Started: 40 Finished: 0
summary =   6303 in 00:02:30 =   42.1/s Avg:   351 Min:   151 Max:  2729 Err:     0 (0.00%)
summary +   1200 in 00:00:28 =   43.5/s Avg:   354 Min:   163 Max:  2018 Err:     0 (0.00%) Active: 40 Started: 40 Finished: 0
summary =   7503 in 00:02:57 =   42.3/s Avg:   351 Min:   151 Max:  2729 Err:     0 (0.00%)
summary +   1300 in 00:00:30 =   43.7/s Avg:   456 Min:   173 Max:  2401 Err:     0 (0.00%) Active: 40 Started: 40 Finished: 0
summary =   8803 in 00:03:27 =   42.5/s Avg:   367 Min:   151 Max:  2729 Err:     0 (0.00%)
summary +   1400 in 00:00:31 =   44.9/s Avg:   349 Min:   158 Max:  2128 Err:     0 (0.00%) Active: 40 Started: 40 Finished: 0
summary =  10203 in 00:03:58 =   42.8/s Avg:   364 Min:   151 Max:  2729 Err:     0 (0.00%)
summary +   1400 in 00:00:32 =   44.3/s Avg:   351 Min:   166 Max:  1494 Err:     0 (0.00%) Active: 40 Started: 40 Finished: 0
summary =  11603 in 00:04:30 =   43.0/s Avg:   363 Min:   151 Max:  2729 Err:     0 (0.00%)
summary +   1400 in 00:00:30 =   46.9/s Avg:   344 Min:   165 Max:  2075 Err:     0 (0.00%) Active: 40 Started: 40 Finished: 0
summary =  13003 in 00:05:00 =   43.4/s Avg:   361 Min:   151 Max:  2729 Err:     0 (0.00%)
summary +   1300 in 00:00:28 =   46.0/s Avg:   352 Min:   159 Max:  1486 Err:     0 (0.00%) Active: 40 Started: 40 Finished: 0
summary =  14303 in 00:05:28 =   43.6/s Avg:   360 Min:   151 Max:  2729 Err:     0 (0.00%)
summary +   1400 in 00:00:31 =   45.6/s Avg:   339 Min:   163 Max:  2042 Err:     0 (0.00%) Active: 40 Started: 40 Finished: 0
summary =  15703 in 00:05:58 =   43.8/s Avg:   358 Min:   151 Max:  2729 Err:     0 (0.00%)
summary +    494 in 00:00:07 =   69.0/s Avg:   350 Min:   171 Max:  1499 Err:     0 (0.00%) Active: 0 Started: 40 Finished: 40
summary =  16197 in 00:06:06 =   44.3/s Avg:   358 Min:   151 Max:  2729 Err:     0 (0.00%)
Tidying up remote @ Sat Jul 25 11:37:09 UTC 2020 (1595677029361)
... end of run

View the test data in grafana:
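
If the grafana panels stay empty, a quick way to confirm that samples actually reached influxdb is to query it directly; a sketch assuming the 7dgroup namespace:

# List the measurements the backend listener has written into the jmeter database
$ influxdb_pod=$(kubectl get po -n 7dgroup | grep influxdb-jmeter | awk '{print $1}')
$ kubectl exec -ti -n 7dgroup $influxdb_pod -- influx -database jmeter -execute 'SHOW MEASUREMENTS'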

Deployment manifests

start_test.sh (runs a Jmeter test script automatically, without manually logging in to the Jmeter master shell; it asks for the location of the Jmeter test script, copies it to the Jmeter master pod, and starts the test against the Jmeter slaves):

#!/usr/bin/env bash
#Script created to launch Jmeter tests directly from the current terminal without accessing the jmeter master pod.
#It requires that you supply the path to the jmx file
#After execution, the test script jmx file may be deleted from the pod itself but not locally.

working_dir="`pwd`"

#Get namespace variable
tenant=`awk '{print $NF}' "$working_dir/tenant_export"`

jmx="$1"
[ -n "$jmx" ] || read -p 'Enter path to the jmx file ' jmx

if [ ! -f "$jmx" ]; then
    echo "Test script file was not found in PATH"
    echo "Kindly check and input the correct file path"
    exit
fi

test_name="$(basename "$jmx")"

#Get master pod details
master_pod=`kubectl get po -n $tenant | grep jmeter-master | awk '{print $1}'`
kubectl cp "$jmx" -n $tenant "$master_pod:/$test_name"

## Start the Jmeter load test
kubectl exec -ti -n $tenant $master_pod -- /bin/bash /load_test "$test_name"

jmeter_stop.sh (stops a running test):

#!/usr/bin/env bash
#Script written to stop a running jmeter master test
#Kindly ensure you have the necessary kubeconfig

working_dir=`pwd`

#Get namespace variable
tenant=`awk '{print $NF}' $working_dir/tenant_export`

master_pod=`kubectl get po -n $tenant | grep jmeter-master | awk '{print $1}'`
kubectl -n $tenant exec -it $master_pod -- bash -c "./jmeter/apache-jmeter-5.0/bin/stoptest.sh"

Summary

Problems with traditional Jmeter:

When the required concurrency exceeds what a single node can generate, a multi-node environment is complex to configure and maintain;

With the default configuration, multiple tests cannot run in parallel; extra processes have to be started with modified configuration;

It is hard to elastically scale test resources in a cloud environment.

What Kubernetes-Jmeter changes:

Load-generator nodes are installed with a single command;

Multiple projects and multiple tests can share one pool of test resources in parallel (within the allowed maximum concurrency; Kubernetes also provides RBAC, namespaces and other management features, so multiple users can share one cluster under resource limits), improving resource utilization;

Hooking into the Kubernetes HPA, load-generator nodes can be started and released automatically based on concurrency, as sketched below.
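
A minimal sketch of that last point (the thresholds are illustrative, and a metrics server must be running in the cluster for the HPA to act):

# Autoscale the slave Deployment between 2 and 10 replicas at 70% average CPU
$ kubectl autoscale deployment jmeter-slaves -n 7dgroup --min=2 --max=10 --cpu-percent=70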

Source code:

      https://github.com/zuozewei/blog-example/tree/master/Kubernetes/k8s-jmeter-cluster

References:

[1] https://github.com/kubernauts/jmeter-kubernetes
