
[bitnami/kafka] feat: Kafka 4.0.0 (#32516)

Juan Ariza Toledano, 5 months ago
Commit 8bea6f9e21

35 files changed: 1,793 insertions, 2,142 deletions
1. .vib/kafka/runtime-parameters.yaml (+6 -5)
2. bitnami/kafka/CHANGELOG.md (+5 -1)
3. bitnami/kafka/Chart.lock (+2 -5)
4. bitnami/kafka/Chart.yaml (+3 -8)
5. bitnami/kafka/README.md (+275 -320)
6. bitnami/kafka/templates/NOTES.txt (+15 -15)
7. bitnami/kafka/templates/_helpers.tpl (+236 -465)
8. bitnami/kafka/templates/_init_containers.tpl (+512 -0)
9. bitnami/kafka/templates/broker/config-secrets.yaml (+4 -6)
10. bitnami/kafka/templates/broker/configmap.yaml (+32 -35)
11. bitnami/kafka/templates/broker/hpa.yaml (+3 -4)
12. bitnami/kafka/templates/broker/pdb.yaml (+1 -1)
13. bitnami/kafka/templates/broker/statefulset.yaml (+39 -142)
14. bitnami/kafka/templates/broker/svc-external-access.yaml (+3 -3)
15. bitnami/kafka/templates/broker/svc-headless.yaml (+1 -1)
16. bitnami/kafka/templates/broker/vpa.yaml (+2 -2)
17. bitnami/kafka/templates/ca-cert.yaml (+53 -0)
18. bitnami/kafka/templates/cert.yaml (+56 -0)
19. bitnami/kafka/templates/controller-eligible/config-secrets.yaml (+4 -6)
20. bitnami/kafka/templates/controller-eligible/configmap.yaml (+35 -35)
21. bitnami/kafka/templates/controller-eligible/hpa.yaml (+3 -4)
22. bitnami/kafka/templates/controller-eligible/pdb.yaml (+2 -2)
23. bitnami/kafka/templates/controller-eligible/statefulset.yaml (+42 -133)
24. bitnami/kafka/templates/controller-eligible/svc-external-access.yaml (+4 -4)
25. bitnami/kafka/templates/controller-eligible/svc-headless.yaml (+2 -7)
26. bitnami/kafka/templates/controller-eligible/vpa.yaml (+3 -4)
27. bitnami/kafka/templates/log4j2-configmap.yaml (+4 -4)
28. bitnami/kafka/templates/metrics/jmx-configmap.yaml (+1 -1)
29. bitnami/kafka/templates/networkpolicy.yaml (+1 -1)
30. bitnami/kafka/templates/provisioning/job.yaml (+76 -73)
31. bitnami/kafka/templates/scripts-configmap.yaml (+0 -400)
32. bitnami/kafka/templates/secrets.yaml (+18 -19)
33. bitnami/kafka/templates/svc.yaml (+1 -1)
34. bitnami/kafka/templates/tls-secret.yaml (+26 -54)
35. bitnami/kafka/values.yaml (+323 -381)

+ 6 - 5
.vib/kafka/runtime-parameters.yaml

@@ -1,15 +1,16 @@
 externalAccess:
   enabled: true
-  autoDiscovery:
-    enabled: true
-    containerSecurityContext:
-      enabled: true
-      runAsUser: 1002
   controller:
     service:
       ports:
         external: 80
       type: LoadBalancer
+defaultInitContainers:
+  autoDiscovery:
+    enabled: true
+    containerSecurityContext:
+      enabled: true
+      runAsUser: 1002
 rbac:
   create: true
 listeners:

+ 5 - 1
bitnami/kafka/CHANGELOG.md

@@ -1,8 +1,12 @@
 # Changelog
 
+## 32.0.0 (2025-03-25)
+
+* [bitnami/kafka] feat: Kafka 4.0.0 ([#32516](https://github.com/bitnami/charts/pull/32516))
+
 ## 31.5.0 (2025-03-06)
 
-* [bitnami/kafka] IpFamilies and IpFamilyPolicy configurables ([#31456](https://github.com/bitnami/charts/pull/31456))
+* [bitnami/kafka] IpFamilies and IpFamilyPolicy configurables (#31456) ([30daf36](https://github.com/bitnami/charts/commit/30daf368a955addaa59136ed6b18f8702124f72a)), closes [#31456](https://github.com/bitnami/charts/issues/31456) [#31389](https://github.com/bitnami/charts/issues/31389)
 
 ## <small>31.4.1 (2025-03-04)</small>
 

+ 2 - 5
bitnami/kafka/Chart.lock

@@ -1,9 +1,6 @@
 dependencies:
-- name: zookeeper
-  repository: oci://registry-1.docker.io/bitnamicharts
-  version: 13.7.4
 - name: common
   repository: oci://registry-1.docker.io/bitnamicharts
   version: 2.30.0
-digest: sha256:616e1a226cc640527e8790abc23b0044546c2a2deeaee9fc8c1d4e3043f404be
-generated: "2025-03-04T10:26:08.93190282Z"
+digest: sha256:46afdf79eae69065904d430f03f7e5b79a148afed20aa45ee83ba88adc036169
+generated: "2025-03-19T16:25:26.437709+01:00"

+ 3 - 8
bitnami/kafka/Chart.yaml

@@ -9,18 +9,14 @@ annotations:
     - name: jmx-exporter
       image: docker.io/bitnami/jmx-exporter:1.1.0-debian-12-r7
     - name: kafka
-      image: docker.io/bitnami/kafka:3.9.0-debian-12-r12
+      image: docker.io/bitnami/kafka:4.0.0-debian-12-r0
     - name: kubectl
       image: docker.io/bitnami/kubectl:1.32.2-debian-12-r3
     - name: os-shell
       image: docker.io/bitnami/os-shell:12-debian-12-r39
 apiVersion: v2
-appVersion: 3.9.0
+appVersion: 4.0.0
 dependencies:
-- condition: zookeeper.enabled
-  name: zookeeper
-  repository: oci://registry-1.docker.io/bitnamicharts
-  version: 13.x.x
 - name: common
   repository: oci://registry-1.docker.io/bitnamicharts
   tags:
@@ -31,7 +27,6 @@ home: https://bitnami.com
 icon: https://dyltqmyl993wv.cloudfront.net/assets/stacks/kafka/img/kafka-stack-220x234.png
 keywords:
 - kafka
-- zookeeper
 - streaming
 - producer
 - consumer
@@ -41,4 +36,4 @@ maintainers:
 name: kafka
 sources:
 - https://github.com/bitnami/charts/tree/main/bitnami/kafka
-version: 31.5.0
+version: 32.0.0

+ 275 - 320
bitnami/kafka/README.md

@@ -44,49 +44,20 @@ These commands deploy Kafka on the Kubernetes cluster in the default configurati
 
 ## Configuration and installation details
 
-### Resource requests and limits
-
-Bitnami charts allow setting resource requests and limits for all containers inside the chart deployment. These are inside the `resources` value (check parameter table). Setting requests is essential for production workloads and these should be adapted to your specific use case.
-
-To make this process easier, the chart contains the `resourcesPreset` values, which automatically sets the `resources` section according to different presets. Check these presets in [the bitnami/common chart](https://github.com/bitnami/charts/blob/main/bitnami/common/templates/_resources.tpl#L15). However, in production workloads using `resourcesPreset` is discouraged as it may not fully adapt to your specific needs. Find more information on container resource management in the [official Kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/).
-
-### Prometheus metrics
-
-This chart can be integrated with Prometheus by setting `metrics.jmx.enabled` to `true`. This will deploy a sidecar container with [jmx_exporter](https://github.com/prometheus/jmx_exporter) in all pods and a `metrics` service, which can be configured under the `metrics.service` section. This `metrics` service will have the necessary annotations to be automatically scraped by Prometheus.
-
-#### Prometheus requirements
-
-It is necessary to have a working installation of Prometheus or Prometheus Operator for the integration to work. Install the [Bitnami Prometheus helm chart](https://github.com/bitnami/charts/tree/main/bitnami/prometheus) or the [Bitnami Kube Prometheus helm chart](https://github.com/bitnami/charts/tree/main/bitnami/kube-prometheus) to easily have a working Prometheus in your cluster.
-
-#### Integration with Prometheus Operator
-
-The chart can deploy `ServiceMonitor` objects for integration with Prometheus Operator installations. To do so, set the value `metrics.serviceMonitor.enabled=true`. Ensure that the Prometheus Operator `CustomResourceDefinitions` are installed in the cluster or it will fail with the following error:
-
-```text
-no matches for kind "ServiceMonitor" in version "monitoring.coreos.com/v1"
-```
-
-Install the [Bitnami Kube Prometheus helm chart](https://github.com/bitnami/charts/tree/main/bitnami/kube-prometheus) for having the necessary CRDs and the Prometheus Operator.
-
-### [Rolling VS Immutable tags](https://techdocs.broadcom.com/us/en/vmware-tanzu/application-catalog/tanzu-application-catalog/services/tac-doc/apps-tutorials-understand-rolling-tags-containers-index.html)
-
-It is strongly recommended to use immutable tags in a production environment. This ensures your deployment does not change automatically if the same tag is updated with a different image.
-
-Bitnami will release a new chart updating its containers if a new version of the main container, significant changes, or critical vulnerabilities exist.
-
 ### Listeners configuration
 
-This chart allows you to automatically configure Kafka with 3 listeners:
+This chart allows you to automatically configure Kafka with 4 listeners:
 
-- One for inter-broker communications.
-- A second one for communications with clients within the K8s cluster.
-- (optional) a third listener for communications with clients outside the K8s cluster. Check [this section](#accessing-kafka-brokers-from-outside-the-cluster) for more information.
+- One for controller communications.
+- A second one for inter-broker communications.
+- A third one for communications with clients within the K8s cluster.
+- (optional) a fourth listener for communications with clients outside the K8s cluster. Check [this section](#accessing-kafka-brokers-from-outside-the-cluster) for more information.
 
 For more complex configurations, set the `listeners`, `advertisedListeners` and `listenerSecurityProtocolMap` parameters as needed.
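
As a sketch, that kind of configuration expressed in a values file might look like the snippet below. The listener names and container ports follow the chart defaults; the protocols are illustrative assumptions, not the chart's defaults:

```yaml
listeners:
  client:
    name: CLIENT
    containerPort: 9092
    protocol: SASL_TLS   # SASL authentication, TLS encryption
  controller:
    name: CONTROLLER
    containerPort: 9093
    protocol: TLS
  interbroker:
    name: INTERNAL
    containerPort: 9094
    protocol: TLS
```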
 
-### Enable security for Kafka and Zookeeper
+### Enable security for Kafka
 
-You can configure different authentication protocols for each listener you configure in Kafka. For instance, you can use `sasl_tls` authentication for client communications, while using `tls` for inter-broker communications. This table shows the available protocols and the security they provide:
+You can configure different authentication protocols for each listener you configure in Kafka. For instance, you can use `sasl_tls` authentication for client communications, while using `tls` for controller and inter-broker communications. This table shows the available protocols and the security they provide:
 
 | Method    | Authentication               | Encryption via TLS |
 |-----------|------------------------------|--------------------|
@@ -96,19 +67,19 @@ You can configure different authentication protocols for each listener you confi
 | sasl      | Yes (via SASL)               | No                 |
 | sasl_tls  | Yes (via SASL)               | Yes                |
 
-Configure the authentication protocols for client and inter-broker communications by setting the *auth.clientProtocol* and *auth.interBrokerProtocol* parameters to the desired ones, respectively.
+Configure the authentication protocols for client, controller and inter-broker communications by setting the `listeners.client.protocol`, `listeners.controller.protocol` and `listeners.interbroker.protocol` parameters to the desired ones, respectively.
 
 If you enabled SASL authentication on any listener, you can set the SASL credentials using the parameters below:
 
-- `auth.sasl.jaas.clientUsers`/`auth.sasl.jaas.clientPasswords`: when enabling SASL authentication for communications with clients.
-- `auth.sasl.jaas.interBrokerUser`/`auth.sasl.jaas.interBrokerPassword`:  when enabling SASL authentication for inter-broker communications.
-- `auth.jaas.zookeeperUser`/`auth.jaas.zookeeperPassword`: In the case that the Zookeeper chart is deployed with SASL authentication enabled.
+- `sasl.client.users`/`sasl.client.passwords`: when enabling SASL authentication for communications with clients.
+- `sasl.interbroker.user`/`sasl.interbroker.password`: when enabling SASL authentication for inter-broker communications.
+- `sasl.controller.user`/`sasl.controller.password`: when enabling SASL authentication for controller communications.
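
For instance, a minimal values sketch setting those credentials (every user name and password below is a placeholder):

```yaml
sasl:
  client:
    users:
      - user1
    passwords: "client-password1"   # comma-separated list, one entry per user
  interbroker:
    user: inter_broker_user
    password: "interbroker-password"
  controller:
    user: controller_user
    password: "controller-password"
```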
 
-In order to configure TLS authentication/encryption, you **can** create a secret per Kafka broker you have in the cluster containing the Java Key Stores (JKS) files: the truststore (`kafka.truststore.jks`) and the keystore (`kafka.keystore.jks`). Then, you need pass the secret names with the `tls.existingSecret` parameter when deploying the chart.
+In order to configure TLS authentication/encryption, you **can** create a secret per Kafka node you have in the cluster containing the Java Key Stores (JKS) files: the truststore (`kafka.truststore.jks`) and the keystore (`kafka.keystore.jks`). Then, you need to pass the secret names with the `tls.existingSecret` parameter when deploying the chart.
 
-> **Note**: If the JKS files are password protected (recommended), you will need to provide the password to get access to the keystores. To do so, use the `tls.password` parameter to provide your password.
+> **Note**: If the JKS files are password protected (recommended), you will need to provide the password to get access to the keystores. To do so, use the `tls.keystorePassword` and `tls.truststorePassword` parameters to provide your passwords.
 
-For instance, to configure TLS authentication on a Kafka cluster with 2 Kafka brokers use the commands below to create the secrets:
+For instance, to configure TLS authentication on a Kafka cluster with 2 Kafka nodes, use the commands below to create the secrets:
 
 ```console
 kubectl create secret generic kafka-jks-0 --from-file=kafka.truststore.jks=./kafka.truststore.jks --from-file=kafka.keystore.jks=./kafka-0.keystore.jks
@@ -117,66 +88,37 @@ kubectl create secret generic kafka-jks-1 --from-file=kafka.truststore.jks=./kaf
 
 > **Note**: the command above assumes you already created the truststore and keystore files. This [script](https://raw.githubusercontent.com/confluentinc/confluent-platform-security-tools/master/kafka-generate-ssl.sh) can help you with the JKS files generation.
 
-If, for some reason (like using Cert-Manager) you can not use the default JKS secret scheme, you can use the additional parameters:
+If, for some reason (like using CertManager), you cannot use the default JKS secret scheme, you can use the additional parameters:
 
-- `tls.jksTruststoreSecret` to define additional secret, where the `kafka.truststore.jks` is being kept. The truststore password **must** be the same as in `tls.password`
-- `tls.jksTruststore` to overwrite the default value of the truststore key (`kafka.truststore.jks`).
+- `tls.jksTruststoreSecret` to define an additional secret where the `kafka.truststore.jks` is kept. The truststore password **must** be the same as in `tls.truststorePassword`
+- `tls.jksTruststoreKey` to overwrite the default value of the truststore key (`kafka.truststore.jks`).
 
-> **Note**: If you are using cert-manager, particularly when an ACME issuer is used, the `ca.crt` field is not put in the `Secret` that cert-manager creates. To handle this, the `tls.pemChainIncluded` property can be set to `true` and the initContainer created by this Chart will attempt to extract the intermediate certs from the `tls.crt` field of the secret (which is a PEM chain)
-> **Note**: The truststore/keystore from above **must** be protected with the same password as in `tls.password`
+> **Note**: If you are using CertManager, particularly when an ACME issuer is used, the `ca.crt` field is not put in the `Secret` that CertManager creates. To handle this, the `tls.pemChainIncluded` property can be set to `true` and the initContainer created by this Chart will attempt to extract the intermediate certs from the `tls.crt` field of the secret (which is a PEM chain).
+> **Note**: The truststore/keystore from above **must** be protected with the same passwords set in the `tls.keystorePassword` and `tls.truststorePassword` parameters.
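
Expressed as a values file rather than individual parameters, the TLS settings from this section could be sketched as follows (the secret name and passwords match the examples in this section):

```yaml
tls:
  type: JKS
  existingSecret: kafka-jks          # holds kafka.truststore.jks and kafka.keystore.jks
  keystorePassword: "jksPassword"
  truststorePassword: "jksPassword"
```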
 
 You can deploy the chart with authentication using the following parameters:
 
 ```console
 replicaCount=2
-listeners.client.client.protocol=SASL
-listeners.client.interbroker.protocol=TLS
+listeners.client.protocol=SASL
+listeners.interbroker.protocol=TLS
 tls.existingSecret=kafka-jks
-tls.password=jksPassword
+tls.keystorePassword=jksPassword
+tls.truststorePassword=jksPassword
 sasl.client.users[0]=brokerUser
 sasl.client.passwords[0]=brokerPassword
-sasl.zookeeper.user=zookeeperUser
-sasl.zookeeper.password=zookeeperPassword
-zookeeper.auth.enabled=true
-zookeeper.auth.serverUsers=zookeeperUser
-zookeeper.auth.serverPasswords=zookeeperPassword
-zookeeper.auth.clientUser=zookeeperUser
-zookeeper.auth.clientPassword=zookeeperPassword
-```
-
-You can deploy the chart with AclAuthorizer using the following parameters:
 
-```console
-replicaCount=2
-listeners.client.protocol=SASL
-listeners.interbroker.protocol=SASL_TLS
-tls.existingSecret=kafka-jks-0
-tls.password=jksPassword
-sasl.client.users[0]=brokerUser
-sasl.client.passwords[0]=brokerPassword
-sasl.zookeeper.user=zookeeperUser
-sasl.zookeeper.password=zookeeperPassword
-zookeeper.auth.enabled=true
-zookeeper.auth.serverUsers=zookeeperUser
-zookeeper.auth.serverPasswords=zookeeperPassword
-zookeeper.auth.clientUser=zookeeperUser
-zookeeper.auth.clientPassword=zookeeperPassword
-authorizerClassName=kafka.security.authorizer.AclAuthorizer
-allowEveryoneIfNoAclFound=false
-superUsers=User:admin
 ```
 
-If you are using Kafka ACLs, you might encounter in kafka-authorizer.log the following event: `[...] Principal = User:ANONYMOUS is Allowed Operation [...]`.
-
 By setting the following parameters: `listeners.client.protocol=SSL` and `listeners.client.sslClientAuth=required`, Kafka will require the clients to authenticate to Kafka brokers via certificate.
 
-As result, we will be able to see in kafka-authorizer.log the events specific Subject: `[...] Principal = User:CN=kafka,OU=...,O=...,L=...,C=..,ST=... is [...]`.
+As a result, we will be able to see in `kafka-authorizer.log` events with the specific Subject: `[...] Principal = User:CN=kafka,OU=...,O=...,L=...,C=..,ST=... is [...]`.
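
In values form, that mutual TLS requirement is a short sketch:

```yaml
listeners:
  client:
    protocol: SSL
    sslClientAuth: required   # clients must present a valid certificate
```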
 
 ### Update credentials
 
 The Bitnami Kafka chart, when upgrading, reuses the secret previously rendered by the chart or the one specified in `sasl.existingSecret`. To update credentials, use one of the following:
 
-- Run `helm upgrade` specifying new credentials in the `sasl` section as explained in the [authentication section](#enable-security-for-kafka-and-zookeeper).
+- Run `helm upgrade` specifying new credentials in the `sasl` section as explained in the [authentication section](#enable-security-for-kafka).
 - Run `helm upgrade` specifying a new secret in `sasl.existingSecret`
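
A hedged example of the first option, assuming a release named `kafka` installed from the Bitnami OCI registry and placeholder credentials:

```console
helm upgrade kafka oci://registry-1.docker.io/bitnamicharts/kafka \
  --reuse-values \
  --set sasl.client.users[0]=user1 \
  --set sasl.client.passwords[0]=newPassword
```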
 
 ### Accessing Kafka brokers from outside the cluster
@@ -197,7 +139,7 @@ externalAccess.broker.service.type=LoadBalancer
 externalAccess.controller.service.type=LoadBalancer
 externalAccess.broker.service.ports.external=9094
 externalAccess.controller.service.containerPorts.external=9094
-externalAccess.autoDiscovery.enabled=true
+defaultInitContainers.autoDiscovery.enabled=true
 serviceAccount.create=true
 rbac.create=true
 ```
@@ -232,7 +174,7 @@ You have two alternatives to use NodePort services:
   externalAccess.enabled=true
   externalAccess.controller.service.type=NodePort
   externalAccess.broker.service.type=NodePort
-  externalAccess.autoDiscovery.enabled=true
+  defaultInitContainers.autoDiscovery.enabled=true
   serviceAccount.create=true
   rbac.create=true
   ```
@@ -299,28 +241,41 @@ externalAccess:
         external-dns.alpha.kubernetes.io/hostname: "{{ .targetPod }}.example.com"
 ```
 
+### Resource requests and limits
+
+Bitnami charts allow setting resource requests and limits for all containers inside the chart deployment. These are inside the `resources` values (check the parameters table). Setting requests is essential for production workloads and these should be adapted to your specific use case.
+
+To make this process easier, the chart contains the `resourcesPreset` values, which automatically sets the `resources` section according to different presets. Check these presets in [the bitnami/common chart](https://github.com/bitnami/charts/blob/main/bitnami/common/templates/_resources.tpl#L15). However, in production workloads using `resourcesPreset` is discouraged as it may not fully adapt to your specific needs. Find more information on container resource management in the [official Kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/).
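
For example, a minimal sketch sizing the controller-eligible nodes explicitly (the figures are placeholders to adapt to your workload; the same structure applies under the `broker` section):

```yaml
controller:
  resources:
    requests:
      cpu: 500m
      memory: 1Gi
    limits:
      cpu: "2"
      memory: 4Gi
```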
+
+### Prometheus metrics
+
 ### Enable metrics
 
-The chart can optionally start two metrics exporters:
+This chart can be integrated with Prometheus by setting `metrics.jmx.enabled` to `true`. This will deploy a sidecar container with [jmx_exporter](https://github.com/prometheus/jmx_exporter) in all pods and a `metrics` service, which can be configured under the `metrics.jmx.service` section. This service will have the necessary annotations to be automatically scraped by Prometheus.
 
-- JMX exporter, to expose JMX metrics. By default, it uses port 5556.
-- Zookeeper exporter, to expose Zookeeper metrics. By default, it uses port 9141.
+#### Prometheus requirements
 
-To expose JMX metrics to Prometheus, use the parameter below:
+It is necessary to have a working installation of Prometheus or Prometheus Operator for the integration to work. Install the [Bitnami Prometheus helm chart](https://github.com/bitnami/charts/tree/main/bitnami/prometheus) or the [Bitnami Kube Prometheus helm chart](https://github.com/bitnami/charts/tree/main/bitnami/kube-prometheus) to easily have a working Prometheus in your cluster.
 
-```text
-metrics.jmx.enabled: true
-```
+#### Integration with Prometheus Operator
 
-- To enable Zookeeper chart metrics, use the parameter below:
+The chart can deploy `ServiceMonitor` objects for integration with Prometheus Operator installations. To do so, set the value `metrics.serviceMonitor.enabled=true`. Ensure that the Prometheus Operator `CustomResourceDefinitions` are installed in the cluster or it will fail with the following error:
 
 ```text
-zookeeper.metrics.enabled: true
+no matches for kind "ServiceMonitor" in version "monitoring.coreos.com/v1"
 ```
 
+Install the [Bitnami Kube Prometheus helm chart](https://github.com/bitnami/charts/tree/main/bitnami/kube-prometheus) to get the necessary CRDs and the Prometheus Operator.
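
Combining both integrations, a minimal metrics sketch could look like this (the namespace and scrape interval are illustrative):

```yaml
metrics:
  jmx:
    enabled: true    # deploys the jmx_exporter sidecar and the metrics service
  serviceMonitor:
    enabled: true    # requires the Prometheus Operator CRDs
    namespace: monitoring
    interval: 30s
```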
+
+### [Rolling VS Immutable tags](https://techdocs.broadcom.com/us/en/vmware-tanzu/application-catalog/tanzu-application-catalog/services/tac-doc/apps-tutorials-understand-rolling-tags-containers-index.html)
+
+It is strongly recommended to use immutable tags in a production environment. This ensures your deployment does not change automatically if the same tag is updated with a different image.
+
+Bitnami will release a new chart updating its containers if a new version of the main container, significant changes, or critical vulnerabilities exist.
+
 ### Sidecars
 
-If you have a need for additional containers to run within the same pod as Kafka (e.g. an additional metrics or logging exporter), you can do so via the `sidecars` config parameter. Simply define your container according to the Kubernetes container spec.
+If you need additional containers to run within the same pod as Kafka (e.g. an additional metrics or logging exporter), you can do so via the `sidecars` parameters. Simply define your container according to the Kubernetes container spec.
 
 ```yaml
 sidecars:
@@ -343,8 +298,6 @@ As an alternative, you can use of the preset configurations for pod affinity, po
 There are cases where you may want to deploy extra objects, such as Kafka Connect. To cover this case, the chart allows adding the full specification of other objects using the `extraDeploy` parameter. The following example would create a deployment including a Kafka Connect deployment so you can connect Kafka with MongoDB&reg;:
 
 ```yaml
-## Extra objects to deploy (value evaluated as a template)
-##
 extraDeploy:
   - |
     apiVersion: apps/v1
@@ -418,23 +371,22 @@ RUN mkdir -p /opt/bitnami/kafka/plugins && \
 CMD /opt/bitnami/kafka/bin/connect-standalone.sh /opt/bitnami/kafka/config/connect-standalone.properties /opt/bitnami/kafka/config/mongo.properties
 ```
 
-### Backup and restore
-
-To back up and restore Helm chart deployments on Kubernetes, you need to back up the persistent volumes from the source deployment and attach them to a new deployment using [Velero](https://velero.io/), a Kubernetes backup/restore tool. Find the instructions for using Velero in [this guide](https://techdocs.broadcom.com/us/en/vmware-tanzu/application-catalog/tanzu-application-catalog/services/tac-doc/apps-tutorials-backup-restore-deployments-velero-index.html).
-
-## Persistence
+### Persistence
 
 The [Bitnami Kafka](https://github.com/bitnami/containers/tree/main/bitnami/kafka) image stores the Kafka data at the `/bitnami/kafka` path of the container. Persistent Volume Claims are used to keep the data across deployments. This is known to work in GCE, AWS, and minikube.
 
-### Adjust permissions of persistent volume mountpoint
+#### Adjust permissions of persistent volume mountpoint
 
 As the image runs as non-root by default, it is necessary to adjust the ownership of the persistent volume so that the container can write data into it.
 
-By default, the chart is configured to use Kubernetes Security Context to automatically change the ownership of the volume. However, this feature does not work in all Kubernetes distributions.
-As an alternative, this chart supports using an initContainer to change the ownership of the volume before mounting it in the final destination.
+By default, the chart is configured to use Kubernetes Security Context to automatically change the ownership of the volume. However, this feature does not work in all Kubernetes distributions. As an alternative, this chart supports using an initContainer to change the ownership of the volume before mounting it in the final destination.
 
 You can enable this initContainer by setting `volumePermissions.enabled` to `true`.
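
A one-line values sketch for that toggle:

```yaml
volumePermissions:
  enabled: true   # runs an initContainer that adjusts ownership of the data volume
```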
 
+#### Backup and restore
+
+To back up and restore Helm chart deployments on Kubernetes, you need to back up the persistent volumes from the source deployment and attach them to a new deployment using [Velero](https://velero.io/), a Kubernetes backup/restore tool. Find the instructions for using Velero in [this guide](https://techdocs.broadcom.com/us/en/vmware-tanzu/application-catalog/tanzu-application-catalog/services/tac-doc/apps-tutorials-backup-restore-deployments-velero-index.html).
+
 ## Parameters
 
 ### Global parameters
@@ -444,7 +396,6 @@ You can enable this initContainer by setting `volumePermissions.enabled` to `tru
 | `global.imageRegistry`                                | Global Docker image registry                                                                                                                                                                                                                                                                                                                                        | `""`    |
 | `global.imagePullSecrets`                             | Global Docker registry secret names as an array                                                                                                                                                                                                                                                                                                                     | `[]`    |
 | `global.defaultStorageClass`                          | Global default StorageClass for Persistent Volume(s)                                                                                                                                                                                                                                                                                                                | `""`    |
-| `global.storageClass`                                 | DEPRECATED: use global.defaultStorageClass instead                                                                                                                                                                                                                                                                                                                  | `""`    |
 | `global.security.allowInsecureImages`                 | Allows skipping image verification                                                                                                                                                                                                                                                                                                                                  | `false` |
 | `global.compatibility.openshift.adaptSecurityContext` | Adapt the securityContext sections of the deployment to make them compatible with Openshift restricted-v2 SCC: remove runAsUser, runAsGroup and fsGroup and let the platform use their allowed default IDs. Possible values: auto (apply if the detected running cluster is Openshift), force (perform the adaptation always), disabled (do not perform adaptation) | `auto`  |
 
@@ -456,16 +407,18 @@ You can enable this initContainer by setting `volumePermissions.enabled` to `tru
 | `apiVersions`             | Override Kubernetes API versions reported by .Capabilities                              | `[]`            |
 | `nameOverride`            | String to partially override common.names.fullname                                      | `""`            |
 | `fullnameOverride`        | String to fully override common.names.fullname                                          | `""`            |
+| `namespaceOverride`       | String to fully override common.names.namespace                                         | `""`            |
 | `clusterDomain`           | Default Kubernetes cluster domain                                                       | `cluster.local` |
 | `commonLabels`            | Labels to add to all deployed objects                                                   | `{}`            |
 | `commonAnnotations`       | Annotations to add to all deployed objects                                              | `{}`            |
 | `extraDeploy`             | Array of extra objects to deploy with the release                                       | `[]`            |
-| `serviceBindings.enabled` | Create secret for service binding (Experimental)                                        | `false`         |
+| `usePasswordFiles`        | Mount credentials as files instead of using environment variables                       | `true`          |
 | `diagnosticMode.enabled`  | Enable diagnostic mode (all probes will be disabled and the command will be overridden) | `false`         |
-| `diagnosticMode.command`  | Command to override all containers in the statefulset                                   | `["sleep"]`     |
-| `diagnosticMode.args`     | Args to override all containers in the statefulset                                      | `["infinity"]`  |
+| `diagnosticMode.command`  | Command to override all containers in the chart release                                 | `["sleep"]`     |
+| `diagnosticMode.args`     | Args to override all containers in the chart release                                    | `["infinity"]`  |
+| `serviceBindings.enabled` | Create secret for service binding (Experimental)                                        | `false`         |
 
-### Kafka parameters
+### Kafka common parameters
 
 | Name                                  | Description                                                                                                                                                                                                | Value                                                 |
 | ------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------- |
@@ -475,18 +428,19 @@ You can enable this initContainer by setting `volumePermissions.enabled` to `tru
 | `image.pullPolicy`                    | Kafka image pull policy                                                                                                                                                                                    | `IfNotPresent`                                        |
 | `image.pullSecrets`                   | Specify docker-registry secret names as an array                                                                                                                                                           | `[]`                                                  |
 | `image.debug`                         | Specify if debug values should be set                                                                                                                                                                      | `false`                                               |
-| `extraInit`                           | Additional content for the kafka init script, rendered as a template.                                                                                                                                      | `""`                                                  |
-| `config`                              | Configuration file for Kafka, rendered as a template. Auto-generated based on chart values when not specified.                                                                                             | `""`                                                  |
-| `existingConfigmap`                   | ConfigMap with Kafka Configuration                                                                                                                                                                         | `""`                                                  |
-| `extraConfig`                         | Additional configuration to be appended at the end of the generated Kafka configuration file.                                                                                                              | `""`                                                  |
-| `extraConfigYaml`                     | Additional configuration in yaml format to be appended at the end of the generated Kafka configuration file.                                                                                               | `{}`                                                  |
-| `secretConfig`                        | Additional configuration to be appended at the end of the generated Kafka configuration file.                                                                                                              | `""`                                                  |
-| `existingSecretConfig`                | Secret with additonal configuration that will be appended to the end of the generated Kafka configuration file                                                                                             | `""`                                                  |
-| `log4j`                               | An optional log4j.properties file to overwrite the default of the Kafka brokers                                                                                                                            | `""`                                                  |
-| `existingLog4jConfigMap`              | The name of an existing ConfigMap containing a log4j.properties file                                                                                                                                       | `""`                                                  |
+| `clusterId`                           | Kafka Kraft cluster ID (ignored if existingKraftSecret is set). A random cluster ID will be generated the 1st time Kraft is initialized if not set.                                                        | `""`                                                  |
+| `existingKraftSecret`                 | Name of the secret containing the Kafka KRaft Cluster ID and one directory ID per controller replica                                                                                                       | `""`                                                  |
+| `config`                              | Specify content for Kafka configuration (auto-generated based on other parameters otherwise)                                                                                                               | `{}`                                                  |
+| `overrideConfiguration`               | Kafka common configuration override. Values defined here takes precedence over the ones defined at `config`                                                                                                | `{}`                                                  |
+| `existingConfigmap`                   | Name of an existing ConfigMap with the Kafka configuration                                                                                                                                                 | `""`                                                  |
+| `secretConfig`                        | Additional configuration to be appended at the end of the generated Kafka configuration (store in a secret)                                                                                                | `""`                                                  |
+| `existingSecretConfig`                | Secret with additional configuration that will be appended to the end of the generated Kafka configuration                                                                                                 | `""`                                                  |
+| `log4j2`                              | Specify content for Kafka log4j2 configuration (default one is used otherwise)                                                                                                                             | `""`                                                  |
+| `existingLog4j2ConfigMap`             | The name of an existing ConfigMap containing the log4j2.yaml file                                                                                                                                          | `""`                                                  |
 | `heapOpts`                            | Kafka Java Heap configuration                                                                                                                                                                              | `-XX:InitialRAMPercentage=75 -XX:MaxRAMPercentage=75` |
-| `brokerRackAssignment`                | Set Broker Assignment for multi tenant environment Allowed values: `aws-az`, `azure`                                                                                                                       | `""`                                                  |
-| `brokerRackAssignmentApiVersion`      | Set Broker Assignment API version when brokerRackAssignment set to : `azure`                                                                                                                               | `2023-11-15`                                          |
+| `brokerRackAwareness.enabled`         | Enable Kafka Rack Awareness                                                                                                                                                                                | `false`                                               |
+| `brokerRackAwareness.cloudProvider`   | Cloud provider to use to set Broker Rack Awareness. Allowed values: `aws-az`, `azure`                                                                                                                      | `""`                                                  |
+| `brokerRackAwareness.azureApiVersion` | Metadata API version to use when brokerRackAwareness.cloudProvider is set to `azure`                                                                                                                       | `2023-11-15`                                          |
 | `interBrokerProtocolVersion`          | Override the setting 'inter.broker.protocol.version' during the ZK migration.                                                                                                                              | `""`                                                  |
 | `listeners.client.name`               | Name for the Kafka client listener                                                                                                                                                                         | `CLIENT`                                              |
 | `listeners.client.containerPort`      | Port for the Kafka client listener                                                                                                                                                                         | `9092`                                                |
@@ -530,50 +484,94 @@ You can enable this initContainer by setting `volumePermissions.enabled` to `tru
 | `sasl.controller.clientSecret`      | Client Secret for controller communications when SASL is enabled with mechanism OAUTHBEARER. If not set and SASL is enabled for the inter-broker listener, a random secret will be generated. | `""`                                |
 | `sasl.client.users`                 | Comma-separated list of usernames for client communications when SASL is enabled                                                                                                              | `["user1"]`                         |
 | `sasl.client.passwords`             | Comma-separated list of passwords for client communications when SASL is enabled, must match the number of client.users                                                                       | `""`                                |
-| `sasl.zookeeper.user`               | Username for zookeeper communications when SASL is enabled.                                                                                                                                   | `""`                                |
-| `sasl.zookeeper.password`           | Password for zookeeper communications when SASL is enabled.                                                                                                                                   | `""`                                |
-| `sasl.existingSecret`               | Name of the existing secret containing credentials for clientUsers, interBrokerUser, controllerUser and zookeeperUser                                                                         | `""`                                |
+| `sasl.existingSecret`               | Name of the existing secret containing credentials for client.users, interbroker.user and controller.user                                                                                     | `""`                                |
 
 ### Kafka TLS parameters
 
-| Name                                         | Description                                                                                                                             | Value                      |
-| -------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------- | -------------------------- |
-| `tls.type`                                   | Format to use for TLS certificates. Allowed types: `JKS` and `PEM`                                                                      | `JKS`                      |
-| `tls.pemChainIncluded`                       | Flag to denote that the Certificate Authority (CA) certificates are bundled with the endpoint cert.                                     | `false`                    |
-| `tls.existingSecret`                         | Name of the existing secret containing the TLS certificates for the Kafka nodes.                                                        | `""`                       |
-| `tls.autoGenerated`                          | Generate automatically self-signed TLS certificates for Kafka brokers. Currently only supported if `tls.type` is `PEM`                  | `false`                    |
-| `tls.customAltNames`                         | Optionally specify extra list of additional subject alternative names (SANs) for the automatically generated TLS certificates.          | `[]`                       |
-| `tls.passwordsSecret`                        | Name of the secret containing the password to access the JKS files or PEM key when they are password-protected. (`key`: `password`)     | `""`                       |
-| `tls.passwordsSecretKeystoreKey`             | The secret key from the tls.passwordsSecret containing the password for the Keystore.                                                   | `keystore-password`        |
-| `tls.passwordsSecretTruststoreKey`           | The secret key from the tls.passwordsSecret containing the password for the Truststore.                                                 | `truststore-password`      |
-| `tls.passwordsSecretPemPasswordKey`          | The secret key from the tls.passwordsSecret containing the password for the PEM key inside 'tls.passwordsSecret'.                       | `""`                       |
-| `tls.keystorePassword`                       | Password to access the JKS keystore when it is password-protected. Ignored when 'tls.passwordsSecret' is provided.                      | `""`                       |
-| `tls.truststorePassword`                     | Password to access the JKS truststore when it is password-protected. Ignored when 'tls.passwordsSecret' is provided.                    | `""`                       |
-| `tls.keyPassword`                            | Password to access the PEM key when it is password-protected.                                                                           | `""`                       |
-| `tls.jksKeystoreKey`                         | The secret key from the `tls.existingSecret` containing the keystore                                                                    | `""`                       |
-| `tls.jksTruststoreSecret`                    | Name of the existing secret containing your truststore if truststore not existing or different from the one in the `tls.existingSecret` | `""`                       |
-| `tls.jksTruststoreKey`                       | The secret key from the `tls.existingSecret` or `tls.jksTruststoreSecret` containing the truststore                                     | `""`                       |
-| `tls.endpointIdentificationAlgorithm`        | The endpoint identification algorithm to validate server hostname using server certificate                                              | `https`                    |
-| `tls.sslClientAuth`                          | Sets the default value for the ssl.client.auth Kafka setting.                                                                           | `required`                 |
-| `tls.zookeeper.enabled`                      | Enable TLS for Zookeeper client connections.                                                                                            | `false`                    |
-| `tls.zookeeper.verifyHostname`               | Hostname validation.                                                                                                                    | `true`                     |
-| `tls.zookeeper.existingSecret`               | Name of the existing secret containing the TLS certificates for ZooKeeper client communications.                                        | `""`                       |
-| `tls.zookeeper.existingSecretKeystoreKey`    | The secret key from the  tls.zookeeper.existingSecret containing the Keystore.                                                          | `zookeeper.keystore.jks`   |
-| `tls.zookeeper.existingSecretTruststoreKey`  | The secret key from the tls.zookeeper.existingSecret containing the Truststore.                                                         | `zookeeper.truststore.jks` |
-| `tls.zookeeper.passwordsSecret`              | Existing secret containing Keystore and Truststore passwords.                                                                           | `""`                       |
-| `tls.zookeeper.passwordsSecretKeystoreKey`   | The secret key from the tls.zookeeper.passwordsSecret containing the password for the Keystore.                                         | `keystore-password`        |
-| `tls.zookeeper.passwordsSecretTruststoreKey` | The secret key from the tls.zookeeper.passwordsSecret containing the password for the Truststore.                                       | `truststore-password`      |
-| `tls.zookeeper.keystorePassword`             | Password to access the JKS keystore when it is password-protected. Ignored when 'tls.passwordsSecret' is provided.                      | `""`                       |
-| `tls.zookeeper.truststorePassword`           | Password to access the JKS truststore when it is password-protected. Ignored when 'tls.passwordsSecret' is provided.                    | `""`                       |
-| `extraEnvVars`                               | Extra environment variables to add to Kafka pods                                                                                        | `[]`                       |
-| `extraEnvVarsCM`                             | ConfigMap with extra environment variables                                                                                              | `""`                       |
-| `extraEnvVarsSecret`                         | Secret with extra environment variables                                                                                                 | `""`                       |
-| `extraVolumes`                               | Optionally specify extra list of additional volumes for the Kafka pod(s)                                                                | `[]`                       |
-| `extraVolumeMounts`                          | Optionally specify extra list of additional volumeMounts for the Kafka container(s)                                                     | `[]`                       |
-| `sidecars`                                   | Add additional sidecar containers to the Kafka pod(s)                                                                                   | `[]`                       |
-| `initContainers`                             | Add additional Add init containers to the Kafka pod(s)                                                                                  | `[]`                       |
-| `dnsPolicy`                                  | Specifies the DNS policy for the zookeeper pods                                                                                         | `""`                       |
-| `dnsConfig`                                  | allows users more control on the DNS settings for a Pod. Required if `dnsPolicy` is set to `None`                                       | `{}`                       |
+| Name                                                                                        | Description                                                                                                                                                                                                                                                                                                                | Value                      |
+| ------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------- |
+| `tls.type`                                                                                  | Format to use for TLS certificates. Allowed types: `JKS` and `PEM`                                                                                                                                                                                                                                                         | `JKS`                      |
+| `tls.pemChainIncluded`                                                                      | Flag to denote that the Certificate Authority (CA) certificates are bundled with the endpoint cert.                                                                                                                                                                                                                        | `false`                    |
+| `tls.autoGenerated.enabled`                                                                 | Enable automatic generation of TLS certificates (only supported if `tls.type` is `PEM`)                                                                                                                                                                                                                                    | `true`                     |
+| `tls.autoGenerated.engine`                                                                  | Mechanism to generate the certificates (allowed values: helm, cert-manager)                                                                                                                                                                                                                                                | `helm`                     |
+| `tls.autoGenerated.customAltNames`                                                          | List of additional subject alternative names (SANs) for the automatically generated TLS certificates.                                                                                                                                                                                                                      | `[]`                       |
+| `tls.autoGenerated.certManager.existingIssuer`                                              | The name of an existing Issuer to use for generating the certificates (only for `cert-manager` engine)                                                                                                                                                                                                                     | `""`                       |
+| `tls.autoGenerated.certManager.existingIssuerKind`                                          | Existing Issuer kind, defaults to Issuer (only for `cert-manager` engine)                                                                                                                                                                                                                                                  | `""`                       |
+| `tls.autoGenerated.certManager.keyAlgorithm`                                                | Key algorithm for the certificates (only for `cert-manager` engine)                                                                                                                                                                                                                                                        | `RSA`                      |
+| `tls.autoGenerated.certManager.keySize`                                                     | Key size for the certificates (only for `cert-manager` engine)                                                                                                                                                                                                                                                             | `2048`                     |
+| `tls.autoGenerated.certManager.duration`                                                    | Duration for the certificates (only for `cert-manager` engine)                                                                                                                                                                                                                                                             | `2160h`                    |
+| `tls.autoGenerated.certManager.renewBefore`                                                 | Renewal period for the certificates (only for `cert-manager` engine)                                                                                                                                                                                                                                                       | `360h`                     |
+| `tls.existingSecret`                                                                        | Name of the existing secret containing the TLS certificates for the Kafka nodes.                                                                                                                                                                                                                                           | `""`                       |
+| `tls.passwordsSecret`                                                                       | Name of the secret containing the password to access the JKS files or PEM key when they are password-protected. (`key`: `password`)                                                                                                                                                                                        | `""`                       |
+| `tls.passwordsSecretKeystoreKey`                                                            | The secret key from the tls.passwordsSecret containing the password for the Keystore.                                                                                                                                                                                                                                      | `keystore-password`        |
+| `tls.passwordsSecretTruststoreKey`                                                          | The secret key from the tls.passwordsSecret containing the password for the Truststore.                                                                                                                                                                                                                                    | `truststore-password`      |
+| `tls.passwordsSecretPemPasswordKey`                                                         | The secret key from the tls.passwordsSecret containing the password for the PEM key.                                                                                                                                                                                                                                       | `""`                       |
+| `tls.keystorePassword`                                                                      | Password to access the JKS keystore when it is password-protected. Ignored when 'tls.passwordsSecret' is provided.                                                                                                                                                                                                         | `""`                       |
+| `tls.truststorePassword`                                                                    | Password to access the JKS truststore when it is password-protected. Ignored when 'tls.passwordsSecret' is provided.                                                                                                                                                                                                       | `""`                       |
+| `tls.keyPassword`                                                                           | Password to access the PEM key when it is password-protected.                                                                                                                                                                                                                                                              | `""`                       |
+| `tls.jksKeystoreKey`                                                                        | The secret key from the `tls.existingSecret` containing the keystore                                                                                                                                                                                                                                                       | `""`                       |
+| `tls.jksTruststoreSecret`                                                                   | Name of an existing secret containing your truststore, if it is not included in `tls.existingSecret` or differs from it                                                                                                                                                                                                    | `""`                       |
+| `tls.jksTruststoreKey`                                                                      | The secret key from the `tls.existingSecret` or `tls.jksTruststoreSecret` containing the truststore                                                                                                                                                                                                                        | `""`                       |
+| `tls.endpointIdentificationAlgorithm`                                                       | The endpoint identification algorithm used to validate the server hostname against the server certificate                                                                                                                                                                                                                  | `https`                    |
+| `tls.sslClientAuth`                                                                         | Sets the default value for the ssl.client.auth Kafka setting.                                                                                                                                                                                                                                                              | `required`                 |
+| `extraEnvVars`                                                                              | Extra environment variables to add to Kafka pods                                                                                                                                                                                                                                                                           | `[]`                       |
+| `extraEnvVarsCM`                                                                            | ConfigMap with extra environment variables                                                                                                                                                                                                                                                                                 | `""`                       |
+| `extraEnvVarsSecret`                                                                        | Secret with extra environment variables                                                                                                                                                                                                                                                                                    | `""`                       |
+| `extraVolumes`                                                                              | Optionally specify an extra list of additional volumes for the Kafka pod(s)                                                                                                                                                                                                                                                | `[]`                       |
+| `extraVolumeMounts`                                                                         | Optionally specify an extra list of additional volumeMounts for the Kafka container(s)                                                                                                                                                                                                                                     | `[]`                       |
+| `sidecars`                                                                                  | Add additional sidecar containers to the Kafka pod(s)                                                                                                                                                                                                                                                                      | `[]`                       |
+| `initContainers`                                                                            | Add additional init containers to the Kafka pod(s)                                                                                                                                                                                                                                                                        | `[]`                       |
+| `dnsPolicy`                                                                                 | Specifies the DNS policy for the Kafka pods                                                                                                                                                                                                                                                                                | `""`                       |
+| `dnsConfig`                                                                                 | Allows users more control over the DNS settings for a Pod. Required if `dnsPolicy` is set to `None`                                                                                                                                                                                                                        | `{}`                       |
+| `defaultInitContainers.volumePermissions.enabled`                                           | Enable init container that changes the owner and group of the persistent volume                                                                                                                                                                                                                                            | `false`                    |
+| `defaultInitContainers.volumePermissions.image.registry`                                    | "volume-permissions" init-containers' image registry                                                                                                                                                                                                                                                                       | `REGISTRY_NAME`            |
+| `defaultInitContainers.volumePermissions.image.repository`                                  | "volume-permissions" init-containers' image repository                                                                                                                                                                                                                                                                     | `REPOSITORY_NAME/os-shell` |
+| `defaultInitContainers.volumePermissions.image.digest`                                      | "volume-permissions" init-containers' image digest in the format sha256:aa.... Please note this parameter, if set, will override the tag                                                                                                                                                                                   | `""`                       |
+| `defaultInitContainers.volumePermissions.image.pullPolicy`                                  | "volume-permissions" init-containers' image pull policy                                                                                                                                                                                                                                                                    | `IfNotPresent`             |
+| `defaultInitContainers.volumePermissions.image.pullSecrets`                                 | "volume-permissions" init-containers' image pull secrets                                                                                                                                                                                                                                                                   | `[]`                       |
+| `defaultInitContainers.volumePermissions.containerSecurityContext.enabled`                  | Enable "volume-permissions" init-containers' Security Context                                                                                                                                                                                                                                                              | `true`                     |
+| `defaultInitContainers.volumePermissions.containerSecurityContext.seLinuxOptions`           | Set SELinux options in "volume-permissions" init-containers                                                                                                                                                                                                                                                                | `{}`                       |
+| `defaultInitContainers.volumePermissions.containerSecurityContext.runAsUser`                | Set runAsUser in "volume-permissions" init-containers' Security Context                                                                                                                                                                                                                                                    | `0`                        |
+| `defaultInitContainers.volumePermissions.containerSecurityContext.privileged`               | Set privileged in "volume-permissions" init-containers' Security Context                                                                                                                                                                                                                                                   | `false`                    |
+| `defaultInitContainers.volumePermissions.containerSecurityContext.allowPrivilegeEscalation` | Set allowPrivilegeEscalation in "volume-permissions" init-containers' Security Context                                                                                                                                                                                                                                     | `false`                    |
+| `defaultInitContainers.volumePermissions.containerSecurityContext.capabilities.add`         | List of capabilities to be added in "volume-permissions" init-containers                                                                                                                                                                                                                                                   | `[]`                       |
+| `defaultInitContainers.volumePermissions.containerSecurityContext.capabilities.drop`        | List of capabilities to be dropped in "volume-permissions" init-containers                                                                                                                                                                                                                                                 | `["ALL"]`                  |
+| `defaultInitContainers.volumePermissions.containerSecurityContext.seccompProfile.type`      | Set seccomp profile in "volume-permissions" init-containers                                                                                                                                                                                                                                                                | `RuntimeDefault`           |
+| `defaultInitContainers.volumePermissions.resourcesPreset`                                   | Set Kafka "volume-permissions" init container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if defaultInitContainers.volumePermissions.resources is set (defaultInitContainers.volumePermissions.resources is recommended for production). | `nano`                     |
+| `defaultInitContainers.volumePermissions.resources`                                         | Set Kafka "volume-permissions" init container requests and limits for different resources like CPU or memory (essential for production workloads)                                                                                                                                                                          | `{}`                       |
+| `defaultInitContainers.prepareConfig.containerSecurityContext.enabled`                      | Enable "prepare-config" init-containers' Security Context                                                                                                                                                                                                                                                                  | `true`                     |
+| `defaultInitContainers.prepareConfig.containerSecurityContext.seLinuxOptions`               | Set SELinux options in "prepare-config" init-containers                                                                                                                                                                                                                                                                    | `{}`                       |
+| `defaultInitContainers.prepareConfig.containerSecurityContext.runAsUser`                    | Set runAsUser in "prepare-config" init-containers' Security Context                                                                                                                                                                                                                                                        | `1001`                     |
+| `defaultInitContainers.prepareConfig.containerSecurityContext.runAsGroup`                   | Set runAsGroup in "prepare-config" init-containers' Security Context                                                                                                                                                                                                                                                       | `1001`                     |
+| `defaultInitContainers.prepareConfig.containerSecurityContext.runAsNonRoot`                 | Set runAsNonRoot in "prepare-config" init-containers' Security Context                                                                                                                                                                                                                                                     | `true`                     |
+| `defaultInitContainers.prepareConfig.containerSecurityContext.readOnlyRootFilesystem`       | Set readOnlyRootFilesystem in "prepare-config" init-containers' Security Context                                                                                                                                                                                                                                           | `true`                     |
+| `defaultInitContainers.prepareConfig.containerSecurityContext.privileged`                   | Set privileged in "prepare-config" init-containers' Security Context                                                                                                                                                                                                                                                       | `false`                    |
+| `defaultInitContainers.prepareConfig.containerSecurityContext.allowPrivilegeEscalation`     | Set allowPrivilegeEscalation in "prepare-config" init-containers' Security Context                                                                                                                                                                                                                                         | `false`                    |
+| `defaultInitContainers.prepareConfig.containerSecurityContext.capabilities.add`             | List of capabilities to be added in "prepare-config" init-containers                                                                                                                                                                                                                                                       | `[]`                       |
+| `defaultInitContainers.prepareConfig.containerSecurityContext.capabilities.drop`            | List of capabilities to be dropped in "prepare-config" init-containers                                                                                                                                                                                                                                                     | `["ALL"]`                  |
+| `defaultInitContainers.prepareConfig.containerSecurityContext.seccompProfile.type`          | Set seccomp profile in "prepare-config" init-containers                                                                                                                                                                                                                                                                    | `RuntimeDefault`           |
+| `defaultInitContainers.prepareConfig.resourcesPreset`                                       | Set Kafka "prepare-config" init container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if defaultInitContainers.prepareConfig.resources is set (defaultInitContainers.prepareConfig.resources is recommended for production).      | `nano`                     |
+| `defaultInitContainers.prepareConfig.resources`                                             | Set Kafka "prepare-config" init container requests and limits for different resources like CPU or memory (essential for production workloads)                                                                                                                                                                              | `{}`                       |
+| `defaultInitContainers.prepareConfig.extraInit`                                             | Additional content for the "prepare-config" init script, rendered as a template.                                                                                                                                                                                                                                           | `""`                       |
+| `defaultInitContainers.autoDiscovery.enabled`                                               | Enable init container that auto-detects external IPs/ports by querying the K8s API                                                                                                                                                                                                                                         | `false`                    |
+| `defaultInitContainers.autoDiscovery.image.registry`                                        | "auto-discovery" init-containers' image registry                                                                                                                                                                                                                                                                           | `REGISTRY_NAME`            |
+| `defaultInitContainers.autoDiscovery.image.repository`                                      | "auto-discovery" init-containers' image repository                                                                                                                                                                                                                                                                         | `REPOSITORY_NAME/os-shell` |
+| `defaultInitContainers.autoDiscovery.image.digest`                                          | "auto-discovery" init-containers' image digest in the format sha256:aa.... Please note this parameter, if set, will override the tag                                                                                                                                                                                       | `""`                       |
+| `defaultInitContainers.autoDiscovery.image.pullPolicy`                                      | "auto-discovery" init-containers' image pull policy                                                                                                                                                                                                                                                                        | `IfNotPresent`             |
+| `defaultInitContainers.autoDiscovery.image.pullSecrets`                                     | "auto-discovery" init-containers' image pull secrets                                                                                                                                                                                                                                                                       | `[]`                       |
+| `defaultInitContainers.autoDiscovery.containerSecurityContext.enabled`                      | Enable "auto-discovery" init-containers' Security Context                                                                                                                                                                                                                                                                  | `true`                     |
+| `defaultInitContainers.autoDiscovery.containerSecurityContext.seLinuxOptions`               | Set SELinux options in "auto-discovery" init-containers                                                                                                                                                                                                                                                                    | `{}`                       |
+| `defaultInitContainers.autoDiscovery.containerSecurityContext.runAsUser`                    | Set runAsUser in "auto-discovery" init-containers' Security Context                                                                                                                                                                                                                                                        | `1001`                     |
+| `defaultInitContainers.autoDiscovery.containerSecurityContext.runAsGroup`                   | Set runAsGroup in "auto-discovery" init-containers' Security Context                                                                                                                                                                                                                                                       | `1001`                     |
+| `defaultInitContainers.autoDiscovery.containerSecurityContext.runAsNonRoot`                 | Set runAsNonRoot in "auto-discovery" init-containers' Security Context                                                                                                                                                                                                                                                     | `true`                     |
+| `defaultInitContainers.autoDiscovery.containerSecurityContext.readOnlyRootFilesystem`       | Set readOnlyRootFilesystem in "auto-discovery" init-containers' Security Context                                                                                                                                                                                                                                           | `true`                     |
+| `defaultInitContainers.autoDiscovery.containerSecurityContext.privileged`                   | Set privileged in "auto-discovery" init-containers' Security Context                                                                                                                                                                                                                                                       | `false`                    |
+| `defaultInitContainers.autoDiscovery.containerSecurityContext.allowPrivilegeEscalation`     | Set allowPrivilegeEscalation in "auto-discovery" init-containers' Security Context                                                                                                                                                                                                                                         | `false`                    |
+| `defaultInitContainers.autoDiscovery.containerSecurityContext.capabilities.add`             | List of capabilities to be added in "auto-discovery" init-containers                                                                                                                                                                                                                                                       | `[]`                       |
+| `defaultInitContainers.autoDiscovery.containerSecurityContext.capabilities.drop`            | List of capabilities to be dropped in "auto-discovery" init-containers                                                                                                                                                                                                                                                     | `["ALL"]`                  |
+| `defaultInitContainers.autoDiscovery.containerSecurityContext.seccompProfile.type`          | Set seccomp profile in "auto-discovery" init-containers                                                                                                                                                                                                                                                                    | `RuntimeDefault`           |
+| `defaultInitContainers.autoDiscovery.resourcesPreset`                                       | Set Kafka "auto-discovery" init container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if defaultInitContainers.autoDiscovery.resources is set (defaultInitContainers.autoDiscovery.resources is recommended for production).      | `nano`                     |
+| `defaultInitContainers.autoDiscovery.resources`                                             | Set Kafka "auto-discovery" init container requests and limits for different resources like CPU or memory (essential for production workloads)                                                                                                                                                                              | `{}`                       |
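
As an illustration of how the TLS and default init-container parameters above fit together, the following values file is a minimal sketch (the extra SAN `kafka.example.com` and the file name `values-example.yaml` are hypothetical; auto-discovery is only useful together with `externalAccess` services of type `LoadBalancer` and chart-managed RBAC):

```yaml
# values-example.yaml: illustrative override combining the parameters above
tls:
  autoGenerated:
    enabled: true         # Helm generates PEM certificates (requires tls.type=PEM)
    engine: helm          # set to "cert-manager" to delegate issuance instead
    customAltNames:
      - kafka.example.com # hypothetical extra SAN for external clients
defaultInitContainers:
  volumePermissions:
    enabled: true         # chown the data volume before Kafka starts
  autoDiscovery:
    enabled: true         # resolve external LoadBalancer IPs/ports via the K8s API
rbac:
  create: true            # auto-discovery queries the Kubernetes API
```

Something like `helm install my-kafka -f values-example.yaml oci://REGISTRY_NAME/REPOSITORY_NAME/kafka` (with the registry placeholders used throughout this README) would then render these values into the generated Kafka configuration.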
 
 ### Controller-eligible statefulset parameters
 
@@ -581,14 +579,13 @@ You can enable this initContainer by setting `volumePermissions.enabled` to `tru
 | -------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------- |
 | `controller.replicaCount`                                      | Number of Kafka controller-eligible nodes                                                                                                                                                                                               | `3`                   |
 | `controller.controllerOnly`                                    | If set to true, controller nodes will be deployed as dedicated controllers, instead of controller+broker processes.                                                                                                                     | `false`               |
+| `controller.quorumBootstrapServers`                            | Override the Kafka controller quorum bootstrap servers of the Kafka KRaft cluster. If not set, it will be automatically configured to use all controller-eligible nodes.                                                                | `""`                  |
 | `controller.minId`                                             | Minimal node.id values for controller-eligible nodes. Do not change after first initialization.                                                                                                                                         | `0`                   |
-| `controller.zookeeperMigrationMode`                            | Set to true to deploy cluster controller quorum                                                                                                                                                                                         | `false`               |
-| `controller.config`                                            | Configuration file for Kafka controller-eligible nodes, rendered as a template. Auto-generated based on chart values when not specified.                                                                                                | `""`                  |
-| `controller.existingConfigmap`                                 | ConfigMap with Kafka Configuration for controller-eligible nodes.                                                                                                                                                                       | `""`                  |
-| `controller.extraConfig`                                       | Additional configuration to be appended at the end of the generated Kafka controller-eligible nodes configuration file.                                                                                                                 | `""`                  |
-| `controller.extraConfigYaml`                                   | Additional configuration in yaml format to be appended at the end of the generated Kafka controller-eligible nodes configuration file.                                                                                                  | `{}`                  |
-| `controller.secretConfig`                                      | Additional configuration to be appended at the end of the generated Kafka controller-eligible nodes configuration file.                                                                                                                 | `""`                  |
-| `controller.existingSecretConfig`                              | Secret with additonal configuration that will be appended to the end of the generated Kafka controller-eligible nodes configuration file                                                                                                | `""`                  |
+| `controller.config`                                            | Specify content for Kafka configuration for Kafka controller-eligible nodes (auto-generated based on other parameters otherwise)                                                                                                        | `{}`                  |
+| `controller.overrideConfiguration`                             | Kafka configuration override for Kafka controller-eligible nodes. Values defined here take precedence over the ones defined at `controller.config`                                                                                      | `{}`                  |
+| `controller.existingConfigmap`                                 | Name of an existing ConfigMap with the Kafka configuration for Kafka controller-eligible nodes                                                                                                                                          | `""`                  |
+| `controller.secretConfig`                                      | Additional configuration to be appended at the end of the generated Kafka configuration for Kafka controller-eligible nodes (store in a secret)                                                                                         | `""`                  |
+| `controller.existingSecretConfig`                              | Secret with additional configuration that will be appended to the end of the generated Kafka configuration for Kafka controller-eligible nodes                                                                                          | `""`                  |
 | `controller.heapOpts`                                          | Kafka Java Heap size for controller-eligible nodes                                                                                                                                                                                      | `-Xmx1024m -Xms1024m` |
 | `controller.command`                                           | Override Kafka container command                                                                                                                                                                                                        | `[]`                  |
 | `controller.args`                                              | Override Kafka container arguments                                                                                                                                                                                                      | `[]`                  |
@@ -618,8 +615,6 @@ You can enable this initContainer by setting `volumePermissions.enabled` to `tru
 | `controller.customReadinessProbe`                              | Custom readinessProbe that overrides the default one                                                                                                                                                                                    | `{}`                  |
 | `controller.customStartupProbe`                                | Custom startupProbe that overrides the default one                                                                                                                                                                                      | `{}`                  |
 | `controller.lifecycleHooks`                                    | lifecycleHooks for the Kafka container to automate configuration before or after startup                                                                                                                                                | `{}`                  |
-| `controller.initContainerResources.limits`                     | The resources limits for the init container                                                                                                                                                                                             | `{}`                  |
-| `controller.initContainerResources.requests`                   | The requested resources for the init container                                                                                                                                                                                          | `{}`                  |
 | `controller.resourcesPreset`                                   | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if controller.resources is set (controller.resources is recommended for production). | `small`               |
 | `controller.resources`                                         | Set container requests and limits for different resources like CPU or memory (essential for production workloads)                                                                                                                       | `{}`                  |
 | `controller.podSecurityContext.enabled`                        | Enable security context for the pods                                                                                                                                                                                                    | `true`                |
@@ -676,6 +671,7 @@ You can enable this initContainer by setting `volumePermissions.enabled` to `tru
 | `controller.autoscaling.vpa.minAllowed`              | VPA Min allowed resources for the pod                                                                                                                                  | `{}`                      |
 | `controller.autoscaling.vpa.updatePolicy.updateMode` | Autoscaling update policy. Specifies whether recommended updates are applied when a Pod is started and whether they are applied during the life of a Pod               | `Auto`                    |
 | `controller.autoscaling.hpa.enabled`                 | Enable HPA for Kafka Controller                                                                                                                                        | `false`                   |
+| `controller.autoscaling.hpa.annotations`             | Annotations for HPA resource                                                                                                                                           | `{}`                      |
 | `controller.autoscaling.hpa.minReplicas`             | Minimum number of Kafka Controller replicas                                                                                                                            | `""`                      |
 | `controller.autoscaling.hpa.maxReplicas`             | Maximum number of Kafka Controller replicas                                                                                                                            | `""`                      |
 | `controller.autoscaling.hpa.targetCPU`               | Target CPU utilization percentage                                                                                                                                      | `""`                      |
@@ -683,7 +679,7 @@ You can enable this initContainer by setting `volumePermissions.enabled` to `tru
 | `controller.pdb.create`                              | Deploy a pdb object for the Kafka pod                                                                                                                                  | `true`                    |
 | `controller.pdb.minAvailable`                        | Minimum number/percentage of available Kafka replicas                                                                                                                  | `""`                      |
 | `controller.pdb.maxUnavailable`                      | Maximum number/percentage of unavailable Kafka replicas                                                                                                                | `""`                      |
-| `controller.persistence.enabled`                     | Enable Kafka data persistence using PVC, note that ZooKeeper persistence is unaffected                                                                                 | `true`                    |
+| `controller.persistence.enabled`                     | Enable Kafka data persistence using PVC                                                                                                                                | `true`                    |
 | `controller.persistence.existingClaim`               | A manually managed Persistent Volume and Claim                                                                                                                         | `""`                      |
 | `controller.persistence.storageClass`                | PVC Storage Class for Kafka data volume                                                                                                                                | `""`                      |
 | `controller.persistence.accessModes`                 | Persistent Volume Access Modes                                                                                                                                         | `["ReadWriteOnce"]`       |
@@ -692,7 +688,7 @@ You can enable this initContainer by setting `volumePermissions.enabled` to `tru
 | `controller.persistence.labels`                      | Labels for the PVC                                                                                                                                                     | `{}`                      |
 | `controller.persistence.selector`                    | Selector to match an existing Persistent Volume for Kafka data PVC. If set, the PVC can't have a PV dynamically provisioned for it                                     | `{}`                      |
 | `controller.persistence.mountPath`                   | Mount path of the Kafka data volume                                                                                                                                    | `/bitnami/kafka`          |
-| `controller.logPersistence.enabled`                  | Enable Kafka logs persistence using PVC, note that ZooKeeper persistence is unaffected                                                                                 | `false`                   |
+| `controller.logPersistence.enabled`                  | Enable Kafka logs persistence using PVC                                                                                                                                | `false`                   |
 | `controller.logPersistence.existingClaim`            | A manually managed Persistent Volume and Claim                                                                                                                         | `""`                      |
 | `controller.logPersistence.storageClass`             | PVC Storage Class for Kafka logs volume                                                                                                                                | `""`                      |
 | `controller.logPersistence.accessModes`              | Persistent Volume Access Modes                                                                                                                                         | `["ReadWriteOnce"]`       |
@@ -707,13 +703,11 @@ You can enable this initContainer by setting `volumePermissions.enabled` to `tru
 | ---------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------- |
 | `broker.replicaCount`                                      | Number of Kafka broker-only nodes                                                                                                                                                                                               | `0`                   |
 | `broker.minId`                                             | Minimal node.id values for broker-only nodes. Do not change after first initialization.                                                                                                                                         | `100`                 |
-| `broker.zookeeperMigrationMode`                            | Set to true to deploy cluster controller quorum                                                                                                                                                                                 | `false`               |
-| `broker.config`                                            | Configuration file for Kafka broker-only nodes, rendered as a template. Auto-generated based on chart values when not specified.                                                                                                | `""`                  |
-| `broker.existingConfigmap`                                 | ConfigMap with Kafka Configuration for broker-only nodes.                                                                                                                                                                       | `""`                  |
-| `broker.extraConfig`                                       | Additional configuration to be appended at the end of the generated Kafka broker-only nodes configuration file.                                                                                                                 | `""`                  |
-| `broker.extraConfigYaml`                                   | Additional configuration in yaml format to be appended at the end of the generated Kafka broker-only nodes configuration file.                                                                                                  | `{}`                  |
-| `broker.secretConfig`                                      | Additional configuration to be appended at the end of the generated Kafka broker-only nodes configuration file.                                                                                                                 | `""`                  |
-| `broker.existingSecretConfig`                              | Secret with additonal configuration that will be appended to the end of the generated Kafka broker-only nodes configuration file                                                                                                | `""`                  |
+| `broker.config`                                            | Specify content for Kafka configuration for Kafka broker-only nodes (auto-generated based on other parameters otherwise)                                                                                                        | `{}`                  |
+| `broker.overrideConfiguration`                             | Kafka configuration override for Kafka broker-only nodes. Values defined here take precedence over the ones defined at `broker.config`                                                                                          | `{}`                  |
+| `broker.existingConfigmap`                                 | Name of an existing ConfigMap with the Kafka configuration for Kafka broker-only nodes                                                                                                                                          | `""`                  |
+| `broker.secretConfig`                                      | Additional configuration to be appended at the end of the generated Kafka configuration for Kafka broker-only nodes (store in a secret)                                                                                         | `""`                  |
+| `broker.existingSecretConfig`                              | Secret with additional configuration that will be appended to the end of the generated Kafka configuration for Kafka broker-only nodes                                                                                          | `""`                  |
 | `broker.heapOpts`                                          | Kafka Java Heap size for broker-only nodes                                                                                                                                                                                      | `-Xmx1024m -Xms1024m` |
 | `broker.command`                                           | Override Kafka container command                                                                                                                                                                                                | `[]`                  |
 | `broker.args`                                              | Override Kafka container arguments                                                                                                                                                                                              | `[]`                  |
@@ -743,8 +737,6 @@ You can enable this initContainer by setting `volumePermissions.enabled` to `tru
 | `broker.customReadinessProbe`                              | Custom readinessProbe that overrides the default one                                                                                                                                                                            | `{}`                  |
 | `broker.customStartupProbe`                                | Custom startupProbe that overrides the default one                                                                                                                                                                              | `{}`                  |
 | `broker.lifecycleHooks`                                    | lifecycleHooks for the Kafka container to automate configuration before or after startup                                                                                                                                        | `{}`                  |
-| `broker.initContainerResources.limits`                     | The resources limits for the container                                                                                                                                                                                          | `{}`                  |
-| `broker.initContainerResources.requests`                   | The requested resources for the container                                                                                                                                                                                       | `{}`                  |
 | `broker.resourcesPreset`                                   | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if broker.resources is set (broker.resources is recommended for production). | `small`               |
 | `broker.resources`                                         | Set container requests and limits for different resources like CPU or memory (essential for production workloads)                                                                                                               | `{}`                  |
 | `broker.podSecurityContext.enabled`                        | Enable security context for the pods                                                                                                                                                                                            | `true`                |
@@ -803,11 +795,12 @@ You can enable this initContainer by setting `volumePermissions.enabled` to `tru
 | `broker.autoscaling.vpa.minAllowed`              | VPA Min allowed resources for the pod                                                                                                                                  | `{}`                      |
 | `broker.autoscaling.vpa.updatePolicy.updateMode` | Autoscaling update policy. Specifies whether recommended updates are applied when a Pod is started and whether they are applied during the life of a Pod               | `Auto`                    |
 | `broker.autoscaling.hpa.enabled`                 | Enable HPA for Kafka Broker                                                                                                                                            | `false`                   |
+| `broker.autoscaling.hpa.annotations`             | Annotations for HPA resource                                                                                                                                           | `{}`                      |
 | `broker.autoscaling.hpa.minReplicas`             | Minimum number of Kafka Broker replicas                                                                                                                                | `""`                      |
 | `broker.autoscaling.hpa.maxReplicas`             | Maximum number of Kafka Broker replicas                                                                                                                                | `""`                      |
 | `broker.autoscaling.hpa.targetCPU`               | Target CPU utilization percentage                                                                                                                                      | `""`                      |
 | `broker.autoscaling.hpa.targetMemory`            | Target Memory utilization percentage                                                                                                                                   | `""`                      |
-| `broker.persistence.enabled`                     | Enable Kafka data persistence using PVC, note that ZooKeeper persistence is unaffected                                                                                 | `true`                    |
+| `broker.persistence.enabled`                     | Enable Kafka data persistence using PVC                                                                                                                                | `true`                    |
 | `broker.persistence.existingClaim`               | A manually managed Persistent Volume and Claim                                                                                                                         | `""`                      |
 | `broker.persistence.storageClass`                | PVC Storage Class for Kafka data volume                                                                                                                                | `""`                      |
 | `broker.persistence.accessModes`                 | Persistent Volume Access Modes                                                                                                                                         | `["ReadWriteOnce"]`       |
@@ -816,7 +809,7 @@ You can enable this initContainer by setting `volumePermissions.enabled` to `tru
 | `broker.persistence.labels`                      | Labels for the PVC                                                                                                                                                     | `{}`                      |
 | `broker.persistence.selector`                    | Selector to match an existing Persistent Volume for Kafka data PVC. If set, the PVC can't have a PV dynamically provisioned for it                                     | `{}`                      |
 | `broker.persistence.mountPath`                   | Mount path of the Kafka data volume                                                                                                                                    | `/bitnami/kafka`          |
-| `broker.logPersistence.enabled`                  | Enable Kafka logs persistence using PVC, note that ZooKeeper persistence is unaffected                                                                                 | `false`                   |
+| `broker.logPersistence.enabled`                  | Enable Kafka logs persistence using PVC                                                                                                                                | `false`                   |
 | `broker.logPersistence.existingClaim`            | A manually managed Persistent Volume and Claim                                                                                                                         | `""`                      |
 | `broker.logPersistence.storageClass`             | PVC Storage Class for Kafka logs volume                                                                                                                                | `""`                      |
 | `broker.logPersistence.accessModes`              | Persistent Volume Access Modes                                                                                                                                         | `["ReadWriteOnce"]`       |
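To tie these broker parameters together, the following is a minimal, illustrative `values.yaml` sketch (not a verified configuration): it sets explicit `broker.resources` instead of relying on `broker.resourcesPreset`, enables the broker HPA, and sizes the data PVC. All concrete numbers are placeholder assumptions, and `broker.persistence.size` is the chart's usual sizing parameter for the data volume.

```yaml
# Illustrative sketch only: resource figures, HPA bounds and PVC size
# below are placeholders, not recommendations.
broker:
  resourcesPreset: none   # ignored anyway once broker.resources is set
  resources:
    requests:
      cpu: 500m
      memory: 1Gi
    limits:
      cpu: "1"
      memory: 2Gi
  autoscaling:
    hpa:
      enabled: true
      minReplicas: "3"
      maxReplicas: "6"
      targetCPU: "60"     # target CPU utilization percentage
  persistence:
    enabled: true
    size: 16Gi
```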
@@ -827,112 +820,80 @@ You can enable this initContainer by setting `volumePermissions.enabled` to `tru
 
 ### Traffic Exposure parameters
 
-| Name                                                                             | Description                                                                                                                                                                                                                                                                 | Value                     |
-| -------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------- |
-| `service.type`                                                                   | Kubernetes Service type                                                                                                                                                                                                                                                     | `ClusterIP`               |
-| `service.ports.client`                                                           | Kafka svc port for client connections                                                                                                                                                                                                                                       | `9092`                    |
-| `service.ports.controller`                                                       | Kafka svc port for controller connections. It is used if "kraft.enabled: true"                                                                                                                                                                                              | `9093`                    |
-| `service.ports.interbroker`                                                      | Kafka svc port for inter-broker connections                                                                                                                                                                                                                                 | `9094`                    |
-| `service.ports.external`                                                         | Kafka svc port for external connections                                                                                                                                                                                                                                     | `9095`                    |
-| `service.extraPorts`                                                             | Extra ports to expose in the Kafka service (normally used with the `sidecar` value)                                                                                                                                                                                         | `[]`                      |
-| `service.nodePorts.client`                                                       | Node port for the Kafka client connections                                                                                                                                                                                                                                  | `""`                      |
-| `service.nodePorts.external`                                                     | Node port for the Kafka external connections                                                                                                                                                                                                                                | `""`                      |
-| `service.sessionAffinity`                                                        | Control where client requests go, to the same pod or round-robin                                                                                                                                                                                                            | `None`                    |
-| `service.sessionAffinityConfig`                                                  | Additional settings for the sessionAffinity                                                                                                                                                                                                                                 | `{}`                      |
-| `service.clusterIP`                                                              | Kafka service Cluster IP                                                                                                                                                                                                                                                    | `""`                      |
-| `service.loadBalancerIP`                                                         | Kafka service Load Balancer IP                                                                                                                                                                                                                                              | `""`                      |
-| `service.loadBalancerClass`                                                      | Kafka service Load Balancer Class                                                                                                                                                                                                                                           | `""`                      |
-| `service.loadBalancerSourceRanges`                                               | Kafka service Load Balancer sources                                                                                                                                                                                                                                         | `[]`                      |
-| `service.allocateLoadBalancerNodePorts`                                          | Whether to allocate node ports when service type is LoadBalancer                                                                                                                                                                                                            | `true`                    |
-| `service.externalTrafficPolicy`                                                  | Kafka service external traffic policy                                                                                                                                                                                                                                       | `Cluster`                 |
-| `service.annotations`                                                            | Additional custom annotations for Kafka service                                                                                                                                                                                                                             | `{}`                      |
-| `service.headless.controller.annotations`                                        | Annotations for the controller-eligible headless service.                                                                                                                                                                                                                   | `{}`                      |
-| `service.headless.controller.labels`                                             | Labels for the controller-eligible headless service.                                                                                                                                                                                                                        | `{}`                      |
-| `service.headless.broker.annotations`                                            | Annotations for the broker-only headless service.                                                                                                                                                                                                                           | `{}`                      |
-| `service.headless.broker.labels`                                                 | Labels for the broker-only headless service.                                                                                                                                                                                                                                | `{}`                      |
-| `service.headless.ipFamilies`                                                    | IP families for the headless service                                                                                                                                                                                                                                        | `[]`                      |
-| `service.headless.ipFamilyPolicy`                                                | IP family policy for the headless service                                                                                                                                                                                                                                   | `""`                      |
-| `externalAccess.enabled`                                                         | Enable Kubernetes external cluster access to Kafka brokers                                                                                                                                                                                                                  | `false`                   |
-| `externalAccess.autoDiscovery.enabled`                                           | Enable using an init container to auto-detect external IPs/ports by querying the K8s API                                                                                                                                                                                    | `false`                   |
-| `externalAccess.autoDiscovery.image.registry`                                    | Init container auto-discovery image registry                                                                                                                                                                                                                                | `REGISTRY_NAME`           |
-| `externalAccess.autoDiscovery.image.repository`                                  | Init container auto-discovery image repository                                                                                                                                                                                                                              | `REPOSITORY_NAME/kubectl` |
-| `externalAccess.autoDiscovery.image.digest`                                      | Kubectl image digest in the way sha256:aa.... Please note this parameter, if set, will override the tag                                                                                                                                                                     | `""`                      |
-| `externalAccess.autoDiscovery.image.pullPolicy`                                  | Init container auto-discovery image pull policy                                                                                                                                                                                                                             | `IfNotPresent`            |
-| `externalAccess.autoDiscovery.image.pullSecrets`                                 | Init container auto-discovery image pull secrets                                                                                                                                                                                                                            | `[]`                      |
-| `externalAccess.autoDiscovery.resourcesPreset`                                   | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if externalAccess.autoDiscovery.resources is set (externalAccess.autoDiscovery.resources is recommended for production). | `nano`                    |
-| `externalAccess.autoDiscovery.resources`                                         | Set container requests and limits for different resources like CPU or memory (essential for production workloads)                                                                                                                                                           | `{}`                      |
-| `externalAccess.autoDiscovery.containerSecurityContext.enabled`                  | Enable Kafka auto-discovery containers' Security Context                                                                                                                                                                                                                    | `true`                    |
-| `externalAccess.autoDiscovery.containerSecurityContext.seLinuxOptions`           | Set SELinux options in container                                                                                                                                                                                                                                            | `{}`                      |
-| `externalAccess.autoDiscovery.containerSecurityContext.runAsUser`                | Set containers' Security Context runAsUser                                                                                                                                                                                                                                  | `1001`                    |
-| `externalAccess.autoDiscovery.containerSecurityContext.runAsGroup`               | Set containers' Security Context runAsGroup                                                                                                                                                                                                                                 | `1001`                    |
-| `externalAccess.autoDiscovery.containerSecurityContext.runAsNonRoot`             | Set Kafka auto-discovery containers' Security Context runAsNonRoot                                                                                                                                                                                                          | `true`                    |
-| `externalAccess.autoDiscovery.containerSecurityContext.allowPrivilegeEscalation` | Set Kafka auto-discovery containers' Security Context allowPrivilegeEscalation                                                                                                                                                                                              | `false`                   |
-| `externalAccess.autoDiscovery.containerSecurityContext.readOnlyRootFilesystem`   | Set Kafka auto-discovery containers' Security Context readOnlyRootFilesystem                                                                                                                                                                                                | `true`                    |
-| `externalAccess.autoDiscovery.containerSecurityContext.capabilities.drop`        | Set Kafka auto-discovery containers' Security Context capabilities to be dropped                                                                                                                                                                                            | `["ALL"]`                 |
-| `externalAccess.autoDiscovery.containerSecurityContext.seccompProfile.type`      | Set Kafka auto-discovery seccomp profile type                                                                                                                                                                                                                               | `RuntimeDefault`          |
-| `externalAccess.controller.forceExpose`                                          | If set to true, force exposing controller-eligible nodes although they are configured as controller-only nodes                                                                                                                                                              | `false`                   |
-| `externalAccess.controller.service.type`                                         | Kubernetes Service type for external access. It can be NodePort, LoadBalancer or ClusterIP                                                                                                                                                                                  | `LoadBalancer`            |
-| `externalAccess.controller.service.ports.external`                               | Kafka port used for external access when service type is LoadBalancer                                                                                                                                                                                                       | `9094`                    |
-| `externalAccess.controller.service.loadBalancerClass`                            | Kubernetes Service Load Balancer class for external access when service type is LoadBalancer                                                                                                                                                                                | `""`                      |
-| `externalAccess.controller.service.loadBalancerIPs`                              | Array of load balancer IPs for each Kafka broker. Length must be the same as replicaCount                                                                                                                                                                                   | `[]`                      |
-| `externalAccess.controller.service.loadBalancerNames`                            | Array of load balancer Names for each Kafka broker. Length must be the same as replicaCount                                                                                                                                                                                 | `[]`                      |
-| `externalAccess.controller.service.loadBalancerAnnotations`                      | Array of load balancer annotations for each Kafka broker. Length must be the same as replicaCount                                                                                                                                                                           | `[]`                      |
-| `externalAccess.controller.service.loadBalancerSourceRanges`                     | Address(es) that are allowed when service is LoadBalancer                                                                                                                                                                                                                   | `[]`                      |
-| `externalAccess.controller.service.allocateLoadBalancerNodePorts`                | Whether to allocate node ports when service type is LoadBalancer                                                                                                                                                                                                            | `true`                    |
-| `externalAccess.controller.service.nodePorts`                                    | Array of node ports used for each Kafka broker. Length must be the same as replicaCount                                                                                                                                                                                     | `[]`                      |
-| `externalAccess.controller.service.externalIPs`                                  | Use distinct service host IPs to configure Kafka external listener when service type is NodePort. Length must be the same as replicaCount                                                                                                                                   | `[]`                      |
-| `externalAccess.controller.service.useHostIPs`                                   | Use service host IPs to configure Kafka external listener when service type is NodePort                                                                                                                                                                                     | `false`                   |
-| `externalAccess.controller.service.usePodIPs`                                    | using the MY_POD_IP address for external access.                                                                                                                                                                                                                            | `false`                   |
-| `externalAccess.controller.service.domain`                                       | Domain or external ip used to configure Kafka external listener when service type is NodePort or ClusterIP                                                                                                                                                                  | `""`                      |
-| `externalAccess.controller.service.publishNotReadyAddresses`                     | Indicates that any agent which deals with endpoints for this Service should disregard any indications of ready/not-ready                                                                                                                                                    | `false`                   |
-| `externalAccess.controller.service.labels`                                       | Service labels for external access                                                                                                                                                                                                                                          | `{}`                      |
-| `externalAccess.controller.service.annotations`                                  | Service annotations for external access                                                                                                                                                                                                                                     | `{}`                      |
-| `externalAccess.controller.service.extraPorts`                                   | Extra ports to expose in the Kafka external service                                                                                                                                                                                                                         | `[]`                      |
-| `externalAccess.controller.service.ipFamilies`                                   | IP families for the external controller service                                                                                                                                                                                                                             | `[]`                      |
-| `externalAccess.controller.service.ipFamilyPolicy`                               | IP family policy for the external controller service                                                                                                                                                                                                                        | `""`                      |
-| `externalAccess.broker.service.type`                                             | Kubernetes Service type for external access. It can be NodePort, LoadBalancer or ClusterIP                                                                                                                                                                                  | `LoadBalancer`            |
-| `externalAccess.broker.service.ports.external`                                   | Kafka port used for external access when service type is LoadBalancer                                                                                                                                                                                                       | `9094`                    |
-| `externalAccess.broker.service.loadBalancerClass`                                | Kubernetes Service Load Balancer class for external access when service type is LoadBalancer                                                                                                                                                                                | `""`                      |
-| `externalAccess.broker.service.loadBalancerIPs`                                  | Array of load balancer IPs for each Kafka broker. Length must be the same as replicaCount                                                                                                                                                                                   | `[]`                      |
-| `externalAccess.broker.service.loadBalancerNames`                                | Array of load balancer Names for each Kafka broker. Length must be the same as replicaCount                                                                                                                                                                                 | `[]`                      |
-| `externalAccess.broker.service.loadBalancerAnnotations`                          | Array of load balancer annotations for each Kafka broker. Length must be the same as replicaCount                                                                                                                                                                           | `[]`                      |
-| `externalAccess.broker.service.loadBalancerSourceRanges`                         | Address(es) that are allowed when service is LoadBalancer                                                                                                                                                                                                                   | `[]`                      |
-| `externalAccess.broker.service.allocateLoadBalancerNodePorts`                    | Whether to allocate node ports when service type is LoadBalancer                                                                                                                                                                                                            | `true`                    |
-| `externalAccess.broker.service.nodePorts`                                        | Array of node ports used for each Kafka broker. Length must be the same as replicaCount                                                                                                                                                                                     | `[]`                      |
-| `externalAccess.broker.service.externalIPs`                                      | Use distinct service host IPs to configure Kafka external listener when service type is NodePort. Length must be the same as replicaCount                                                                                                                                   | `[]`                      |
-| `externalAccess.broker.service.useHostIPs`                                       | Use service host IPs to configure Kafka external listener when service type is NodePort                                                                                                                                                                                     | `false`                   |
-| `externalAccess.broker.service.usePodIPs`                                        | using the MY_POD_IP address for external access.                                                                                                                                                                                                                            | `false`                   |
-| `externalAccess.broker.service.domain`                                           | Domain or external ip used to configure Kafka external listener when service type is NodePort or ClusterIP                                                                                                                                                                  | `""`                      |
-| `externalAccess.broker.service.publishNotReadyAddresses`                         | Indicates that any agent which deals with endpoints for this Service should disregard any indications of ready/not-ready                                                                                                                                                    | `false`                   |
-| `externalAccess.broker.service.labels`                                           | Service labels for external access                                                                                                                                                                                                                                          | `{}`                      |
-| `externalAccess.broker.service.annotations`                                      | Service annotations for external access                                                                                                                                                                                                                                     | `{}`                      |
-| `externalAccess.broker.service.extraPorts`                                       | Extra ports to expose in the Kafka external service                                                                                                                                                                                                                         | `[]`                      |
-| `externalAccess.broker.service.ipFamilies`                                       | IP families for the external broker service                                                                                                                                                                                                                                 | `[]`                      |
-| `externalAccess.broker.service.ipFamilyPolicy`                                   | IP family policy for the external broker service                                                                                                                                                                                                                            | `""`                      |
-| `networkPolicy.enabled`                                                          | Specifies whether a NetworkPolicy should be created                                                                                                                                                                                                                         | `true`                    |
-| `networkPolicy.allowExternal`                                                    | Don't require client label for connections                                                                                                                                                                                                                                  | `true`                    |
-| `networkPolicy.allowExternalEgress`                                              | Allow the pod to access any range of port and all destinations.                                                                                                                                                                                                             | `true`                    |
-| `networkPolicy.addExternalClientAccess`                                          | Allow access from pods with client label set to "true". Ignored if `networkPolicy.allowExternal` is true.                                                                                                                                                                   | `true`                    |
-| `networkPolicy.extraIngress`                                                     | Add extra ingress rules to the NetworkPolicy                                                                                                                                                                                                                                | `[]`                      |
-| `networkPolicy.extraEgress`                                                      | Add extra ingress rules to the NetworkPolicy                                                                                                                                                                                                                                | `[]`                      |
-| `networkPolicy.ingressPodMatchLabels`                                            | Labels to match to allow traffic from other pods. Ignored if `networkPolicy.allowExternal` is true.                                                                                                                                                                         | `{}`                      |
-| `networkPolicy.ingressNSMatchLabels`                                             | Labels to match to allow traffic from other namespaces. Ignored if `networkPolicy.allowExternal` is true.                                                                                                                                                                   | `{}`                      |
-| `networkPolicy.ingressNSPodMatchLabels`                                          | Pod labels to match to allow traffic from other namespaces. Ignored if `networkPolicy.allowExternal` is true.                                                                                                                                                               | `{}`                      |
-
-### Volume Permissions parameters
-
-| Name                                                        | Description                                                                                                                                                                                                                                           | Value                      |
-| ----------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------- |
-| `volumePermissions.enabled`                                 | Enable init container that changes the owner and group of the persistent volume                                                                                                                                                                       | `false`                    |
-| `volumePermissions.image.registry`                          | Init container volume-permissions image registry                                                                                                                                                                                                      | `REGISTRY_NAME`            |
-| `volumePermissions.image.repository`                        | Init container volume-permissions image repository                                                                                                                                                                                                    | `REPOSITORY_NAME/os-shell` |
-| `volumePermissions.image.digest`                            | Init container volume-permissions image digest in the way sha256:aa.... Please note this parameter, if set, will override the tag                                                                                                                     | `""`                       |
-| `volumePermissions.image.pullPolicy`                        | Init container volume-permissions image pull policy                                                                                                                                                                                                   | `IfNotPresent`             |
-| `volumePermissions.image.pullSecrets`                       | Init container volume-permissions image pull secrets                                                                                                                                                                                                  | `[]`                       |
-| `volumePermissions.resourcesPreset`                         | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if volumePermissions.resources is set (volumePermissions.resources is recommended for production). | `nano`                     |
-| `volumePermissions.resources`                               | Set container requests and limits for different resources like CPU or memory (essential for production workloads)                                                                                                                                     | `{}`                       |
-| `volumePermissions.containerSecurityContext.seLinuxOptions` | Set SELinux options in container                                                                                                                                                                                                                      | `{}`                       |
-| `volumePermissions.containerSecurityContext.runAsUser`      | User ID for the init container                                                                                                                                                                                                                        | `0`                        |
+| Name                                                              | Description                                                                                                                               | Value          |
+| ----------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------- | -------------- |
+| `service.type`                                                    | Kubernetes Service type                                                                                                                   | `ClusterIP`    |
+| `service.ports.client`                                            | Kafka svc port for client connections                                                                                                     | `9092`         |
+| `service.ports.controller`                                        | Kafka svc port for controller connections                                                                                                 | `9093`         |
+| `service.ports.interbroker`                                       | Kafka svc port for inter-broker connections                                                                                               | `9094`         |
+| `service.ports.external`                                          | Kafka svc port for external connections                                                                                                   | `9095`         |
+| `service.extraPorts`                                              | Extra ports to expose in the Kafka service (normally used with the `sidecar` value)                                                       | `[]`           |
+| `service.nodePorts.client`                                        | Node port for the Kafka client connections                                                                                                | `""`           |
+| `service.nodePorts.external`                                      | Node port for the Kafka external connections                                                                                              | `""`           |
+| `service.sessionAffinity`                                         | Control where client requests go, to the same pod or round-robin                                                                          | `None`         |
+| `service.sessionAffinityConfig`                                   | Additional settings for the sessionAffinity                                                                                               | `{}`           |
+| `service.clusterIP`                                               | Kafka service Cluster IP                                                                                                                  | `""`           |
+| `service.loadBalancerIP`                                          | Kafka service Load Balancer IP                                                                                                            | `""`           |
+| `service.loadBalancerClass`                                       | Kafka service Load Balancer Class                                                                                                         | `""`           |
+| `service.loadBalancerSourceRanges`                                | Kafka service Load Balancer sources                                                                                                       | `[]`           |
+| `service.allocateLoadBalancerNodePorts`                           | Whether to allocate node ports when service type is LoadBalancer                                                                          | `true`         |
+| `service.externalTrafficPolicy`                                   | Kafka service external traffic policy                                                                                                     | `Cluster`      |
+| `service.annotations`                                             | Additional custom annotations for Kafka service                                                                                           | `{}`           |
+| `service.headless.controller.annotations`                         | Annotations for the controller-eligible headless service.                                                                                 | `{}`           |
+| `service.headless.controller.labels`                              | Labels for the controller-eligible headless service.                                                                                      | `{}`           |
+| `service.headless.broker.annotations`                             | Annotations for the broker-only headless service.                                                                                         | `{}`           |
+| `service.headless.broker.labels`                                  | Labels for the broker-only headless service.                                                                                              | `{}`           |
+| `service.headless.ipFamilies`                                     | IP families for the headless service                                                                                                      | `[]`           |
+| `service.headless.ipFamilyPolicy`                                 | IP family policy for the headless service                                                                                                 | `""`           |
+| `externalAccess.enabled`                                          | Enable Kubernetes external cluster access to Kafka brokers                                                                                | `false`        |
+| `externalAccess.controller.forceExpose`                           | If set to true, force exposing controller-eligible nodes even if they are configured as controller-only nodes                             | `false`        |
+| `externalAccess.controller.service.type`                          | Kubernetes Service type for external access. It can be NodePort, LoadBalancer or ClusterIP                                                | `LoadBalancer` |
+| `externalAccess.controller.service.ports.external`                | Kafka port used for external access when service type is LoadBalancer                                                                     | `9094`         |
+| `externalAccess.controller.service.loadBalancerClass`             | Kubernetes Service Load Balancer class for external access when service type is LoadBalancer                                              | `""`           |
+| `externalAccess.controller.service.loadBalancerIPs`               | Array of load balancer IPs for each Kafka broker. Length must be the same as replicaCount                                                 | `[]`           |
+| `externalAccess.controller.service.loadBalancerNames`             | Array of load balancer Names for each Kafka broker. Length must be the same as replicaCount                                               | `[]`           |
+| `externalAccess.controller.service.loadBalancerAnnotations`       | Array of load balancer annotations for each Kafka broker. Length must be the same as replicaCount                                         | `[]`           |
+| `externalAccess.controller.service.loadBalancerSourceRanges`      | Address(es) that are allowed when service is LoadBalancer                                                                                 | `[]`           |
+| `externalAccess.controller.service.allocateLoadBalancerNodePorts` | Whether to allocate node ports when service type is LoadBalancer                                                                          | `true`         |
+| `externalAccess.controller.service.nodePorts`                     | Array of node ports used for each Kafka broker. Length must be the same as replicaCount                                                   | `[]`           |
+| `externalAccess.controller.service.externalIPs`                   | Use distinct service host IPs to configure Kafka external listener when service type is NodePort. Length must be the same as replicaCount | `[]`           |
+| `externalAccess.controller.service.useHostIPs`                    | Use service host IPs to configure Kafka external listener when service type is NodePort                                                   | `false`        |
+| `externalAccess.controller.service.usePodIPs`                     | Use the MY_POD_IP address for external access                                                                                             | `false`        |
+| `externalAccess.controller.service.domain`                        | Domain or external IP used to configure Kafka external listener when service type is NodePort or ClusterIP                                | `""`           |
+| `externalAccess.controller.service.publishNotReadyAddresses`      | Indicates that any agent which deals with endpoints for this Service should disregard any indications of ready/not-ready                  | `false`        |
+| `externalAccess.controller.service.labels`                        | Service labels for external access                                                                                                        | `{}`           |
+| `externalAccess.controller.service.annotations`                   | Service annotations for external access                                                                                                   | `{}`           |
+| `externalAccess.controller.service.extraPorts`                    | Extra ports to expose in the Kafka external service                                                                                       | `[]`           |
+| `externalAccess.controller.service.ipFamilies`                    | IP families for the external controller service                                                                                           | `[]`           |
+| `externalAccess.controller.service.ipFamilyPolicy`                | IP family policy for the external controller service                                                                                      | `""`           |
+| `externalAccess.broker.service.type`                              | Kubernetes Service type for external access. It can be NodePort, LoadBalancer or ClusterIP                                                | `LoadBalancer` |
+| `externalAccess.broker.service.ports.external`                    | Kafka port used for external access when service type is LoadBalancer                                                                     | `9094`         |
+| `externalAccess.broker.service.loadBalancerClass`                 | Kubernetes Service Load Balancer class for external access when service type is LoadBalancer                                              | `""`           |
+| `externalAccess.broker.service.loadBalancerIPs`                   | Array of load balancer IPs for each Kafka broker. Length must be the same as replicaCount                                                 | `[]`           |
+| `externalAccess.broker.service.loadBalancerNames`                 | Array of load balancer Names for each Kafka broker. Length must be the same as replicaCount                                               | `[]`           |
+| `externalAccess.broker.service.loadBalancerAnnotations`           | Array of load balancer annotations for each Kafka broker. Length must be the same as replicaCount                                         | `[]`           |
+| `externalAccess.broker.service.loadBalancerSourceRanges`          | Address(es) that are allowed when service is LoadBalancer                                                                                 | `[]`           |
+| `externalAccess.broker.service.allocateLoadBalancerNodePorts`     | Whether to allocate node ports when service type is LoadBalancer                                                                          | `true`         |
+| `externalAccess.broker.service.nodePorts`                         | Array of node ports used for each Kafka broker. Length must be the same as replicaCount                                                   | `[]`           |
+| `externalAccess.broker.service.externalIPs`                       | Use distinct service host IPs to configure Kafka external listener when service type is NodePort. Length must be the same as replicaCount | `[]`           |
+| `externalAccess.broker.service.useHostIPs`                        | Use service host IPs to configure Kafka external listener when service type is NodePort                                                   | `false`        |
+| `externalAccess.broker.service.usePodIPs`                         | Use the MY_POD_IP address for external access                                                                                             | `false`        |
+| `externalAccess.broker.service.domain`                            | Domain or external IP used to configure Kafka external listener when service type is NodePort or ClusterIP                                | `""`           |
+| `externalAccess.broker.service.publishNotReadyAddresses`          | Indicates that any agent which deals with endpoints for this Service should disregard any indications of ready/not-ready                  | `false`        |
+| `externalAccess.broker.service.labels`                            | Service labels for external access                                                                                                        | `{}`           |
+| `externalAccess.broker.service.annotations`                       | Service annotations for external access                                                                                                   | `{}`           |
+| `externalAccess.broker.service.extraPorts`                        | Extra ports to expose in the Kafka external service                                                                                       | `[]`           |
+| `externalAccess.broker.service.ipFamilies`                        | IP families for the external broker service                                                                                               | `[]`           |
+| `externalAccess.broker.service.ipFamilyPolicy`                    | IP family policy for the external broker service                                                                                          | `""`           |
+| `networkPolicy.enabled`                                           | Specifies whether a NetworkPolicy should be created                                                                                       | `true`         |
+| `networkPolicy.allowExternal`                                     | Don't require client label for connections                                                                                                | `true`         |
+| `networkPolicy.allowExternalEgress`                               | Allow the pod to access any range of ports and all destinations.                                                                          | `true`         |
+| `networkPolicy.addExternalClientAccess`                           | Allow access from pods with client label set to "true". Ignored if `networkPolicy.allowExternal` is true.                                 | `true`         |
+| `networkPolicy.extraIngress`                                      | Add extra ingress rules to the NetworkPolicy                                                                                              | `[]`           |
+| `networkPolicy.extraEgress`                                       | Add extra egress rules to the NetworkPolicy                                                                                               | `[]`           |
+| `networkPolicy.ingressPodMatchLabels`                             | Labels to match to allow traffic from other pods. Ignored if `networkPolicy.allowExternal` is true.                                       | `{}`           |
+| `networkPolicy.ingressNSMatchLabels`                              | Labels to match to allow traffic from other namespaces. Ignored if `networkPolicy.allowExternal` is true.                                 | `{}`           |
+| `networkPolicy.ingressNSPodMatchLabels`                           | Pod labels to match to allow traffic from other namespaces. Ignored if `networkPolicy.allowExternal` is true.                             | `{}`           |
 
 ### Other Parameters
 
@@ -1068,42 +1029,15 @@ You can enable this initContainer by setting `volumePermissions.enabled` to `tru
 | `provisioning.waitForKafka`                                      | If true use an init container to wait until kafka is ready before starting provisioning                                                                                                                                                     | `true`                |
 | `provisioning.useHelmHooks`                                      | Flag to indicate usage of helm hooks                                                                                                                                                                                                        | `true`                |
 
-### KRaft chart parameters
-
-| Name                            | Description                                                                                                                                                                            | Value  |
-| ------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------ |
-| `kraft.enabled`                 | Switch to enable or disable the KRaft mode for Kafka                                                                                                                                   | `true` |
-| `kraft.existingClusterIdSecret` | Name of the secret containing the cluster ID for the Kafka KRaft cluster. This is incompatible with the clusterId parameter. If both are set, the existingClusterIdSecret will be used | `""`   |
-| `kraft.clusterId`               | Kafka Kraft cluster ID. If not set, a random cluster ID will be generated the first time Kraft is initialized.                                                                         | `""`   |
-| `kraft.controllerQuorumVoters`  | Override the Kafka controller quorum voters of the Kafka Kraft cluster. If not set, it will be automatically configured to use all controller-elegible nodes.                          | `""`   |
-
-### ZooKeeper chart parameters
-
-| Name                                    | Description                                                                                                                                                             | Value               |
-| --------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------- |
-| `zookeeperChrootPath`                   | Path which puts data under some path in the global ZooKeeper namespace                                                                                                  | `""`                |
-| `zookeeper.enabled`                     | Switch to enable or disable the ZooKeeper helm chart. Must be false if you use KRaft mode.                                                                              | `false`             |
-| `zookeeper.replicaCount`                | Number of ZooKeeper nodes                                                                                                                                               | `1`                 |
-| `zookeeper.auth.client.enabled`         | Enable ZooKeeper auth                                                                                                                                                   | `false`             |
-| `zookeeper.auth.client.clientUser`      | User that will use ZooKeeper client (zkCli.sh) to authenticate. Must exist in the serverUsers comma-separated list.                                                     | `""`                |
-| `zookeeper.auth.client.clientPassword`  | Password that will use ZooKeeper client (zkCli.sh) to authenticate. Must exist in the serverPasswords comma-separated list.                                             | `""`                |
-| `zookeeper.auth.client.serverUsers`     | Comma, semicolon or whitespace separated list of user to be created. Specify them as a string, for example: "user1,user2,admin"                                         | `""`                |
-| `zookeeper.auth.client.serverPasswords` | Comma, semicolon or whitespace separated list of passwords to assign to users when created. Specify them as a string, for example: "pass4user1, pass4user2, pass4admin" | `""`                |
-| `zookeeper.persistence.enabled`         | Enable persistence on ZooKeeper using PVC(s)                                                                                                                            | `true`              |
-| `zookeeper.persistence.storageClass`    | Persistent Volume storage class                                                                                                                                         | `""`                |
-| `zookeeper.persistence.accessModes`     | Persistent Volume access modes                                                                                                                                          | `["ReadWriteOnce"]` |
-| `zookeeper.persistence.size`            | Persistent Volume size                                                                                                                                                  | `8Gi`               |
-| `externalZookeeper.servers`             | List of external zookeeper servers to use. Typically used in combination with 'zookeeperChrootPath'. Must be empty if you use KRaft mode.                               | `[]`                |
-
 ```console
 helm install my-release \
-  --set replicaCount=3 \
+  --set controller.replicaCount=3 \
   oci://REGISTRY_NAME/REPOSITORY_NAME/kafka
 ```
 
 > Note: You need to substitute the placeholders `REGISTRY_NAME` and `REPOSITORY_NAME` with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use `REGISTRY_NAME=registry-1.docker.io` and `REPOSITORY_NAME=bitnamicharts`.
 
-The above command deploys Kafka with 3 brokers (replicas).
+The above command deploys Kafka with 3 controller-eligible nodes.
 
 Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. For example,
 
@@ -1120,6 +1054,27 @@ Find more information about how to deal with common errors related to Bitnami's
 
 ## Upgrading
 
+### To 32.0.0
+
+This major release bumps the Kafka major version to the `4.y.z` series. This version marks a significant milestone: Kafka now operates entirely without Apache ZooKeeper, running in KRaft mode by default. As a consequence, **ZooKeeper is no longer a chart dependency and every related parameter has been removed.** Upgrading from the `31.y.z` chart series is not supported unless KRaft mode was already enabled.
+
+Also, some KRaft-related parameters have been renamed or removed:
+
+- `kraft.enabled` has been removed. Kafka now operates in KRaft mode by default.
+- `kraft.controllerQuorumVoters` has been renamed to `controller.quorumVoters`.
+- `kraft.clusterId` and `kraft.existingClusterIdSecret` have been renamed to `clusterId` and `existingKraftSecret`, respectively.
+
+Other notable changes:
+
+- `log4j` and `existingLog4jConfig` parameters have been renamed to `log4j2` and `existingLog4j2ConfigMap`, respectively.
+- `controller.quorumVoters` has been removed in favor of `controller.quorumBootstrapServers`.
+- `brokerRackAssignment` and `brokerRackAssignmentApiVersion` are deprecated in favor of `brokerRackAwareness.*` parameters.
+- `tls.autoGenerated` boolean is now an object with extended configuration options.
+- `volumePermissions` parameters have been moved under the `defaultInitContainers` section.
+- `externalAccess.autoDiscovery` parameters have been moved under the `defaultInitContainers` section.
+- `controller.initContainerResources` and `broker.initContainerResources` have been removed. Use `defaultInitContainers.prepareConfig.resources` instead.
+- `extraInit` has been renamed to `defaultInitContainers.prepareConfig.extraInit`.
+
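+As a quick migration sketch (illustrative values only), a `31.y.z` configuration such as:
+
+```yaml
+kraft:
+  clusterId: "my-cluster-id"
+tls:
+  autoGenerated: true
+volumePermissions:
+  enabled: true
+```
+
+would now be expressed as:
+
+```yaml
+clusterId: "my-cluster-id"
+tls:
+  autoGenerated:
+    enabled: true
+defaultInitContainers:
+  volumePermissions:
+    enabled: true
+```
+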
 ### To 31.1.0
 
 This version introduces image verification for security purposes. To disable it, set `global.security.allowInsecureImages` to `true`. More details at [GitHub issue](https://github.com/bitnami/charts/issues/30850).
@@ -1568,4 +1523,4 @@ Unless required by applicable law or agreed to in writing, software
 distributed under the License is distributed on an "AS IS" BASIS,
 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 See the License for the specific language governing permissions and
-limitations under the License.
+limitations under the License.

+ 15 - 15
bitnami/kafka/templates/NOTES.txt

@@ -52,9 +52,11 @@ Kafka can be accessed by consumers via port {{ $clientPort }} on the following D
 Each Kafka broker can be accessed by producers via port {{ $clientPort }} on the following DNS name(s) from within your cluster:
 
 {{- $brokerList := list }}
+{{- if not .Values.controller.controllerOnly }}
 {{- range $i := until (int .Values.controller.replicaCount) }}
 {{- $brokerList = append $brokerList (printf "%s-controller-%d.%s-controller-headless.%s.svc.%s:%d" $fullname $i $fullname $releaseNamespace $clusterDomain $clientPort) }}
 {{- end }}
+{{- end }}
 {{- range $i := until (int .Values.broker.replicaCount) }}
 {{- $brokerList = append $brokerList (printf "%s-broker-%d.%s-broker-headless.%s.svc.%s:%d" $fullname $i $fullname $releaseNamespace $clusterDomain $clientPort) }}
 {{- end }}
@@ -164,11 +166,9 @@ To create a pod that you can use as a Kafka client run the following commands:
             --from-beginning
 
 {{- if .Values.externalAccess.enabled }}
-{{- if or (not .Values.kraft.enabled) (not .Values.controller.controllerOnly) .Values.externalAccess.controller.forceExpose }}
+{{- if or (not .Values.controller.controllerOnly) .Values.externalAccess.controller.forceExpose }}
 
-{{- if not .Values.kraft.enabled }}
-To connect to your Kafka nodes from outside the cluster, follow these instructions:
-{{- else if and .Values.controller.controllerOnly .Values.externalAccess.controller.forceExpose }}
+{{- if and .Values.controller.controllerOnly .Values.externalAccess.controller.forceExpose }}
 To connect to your Kafka controller-only nodes from outside the cluster, follow these instructions:
 {{- else }}
 To connect to your Kafka controller+broker nodes from outside the cluster, follow these instructions:
@@ -183,7 +183,7 @@ To connect to your Kafka controller+broker nodes from outside the cluster, follo
 
         1. Obtain the pod name:
 
-        kubectl get pods --namespace {{ include "common.names.namespace" . }} -l "app.kubernetes.io/name={{ template "kafka.name" . }},app.kubernetes.io/instance={{ .Release.Name }},app.kubernetes.io/component=kafka"
+        kubectl get pods --namespace {{ include "common.names.namespace" . }} -l "app.kubernetes.io/instance={{ .Release.Name }},app.kubernetes.io/component=kafka"
 
         2. Obtain pod configuration:
 
@@ -192,16 +192,16 @@ To connect to your Kafka controller+broker nodes from outside the cluster, follo
     {{- end }}
     Kafka brokers port: You will have a different node port for each Kafka broker. You can get the list of configured node ports using the command below:
 
-        echo "$(kubectl get svc --namespace {{ include "common.names.namespace" . }} -l "app.kubernetes.io/name={{ template "kafka.name" . }},app.kubernetes.io/instance={{ .Release.Name }},app.kubernetes.io/component=kafka,pod" -o jsonpath='{.items[*].spec.ports[0].nodePort}' | tr ' ' '\n')"
+        echo "$(kubectl get svc --namespace {{ include "common.names.namespace" . }} -l "app.kubernetes.io/instance={{ .Release.Name }},app.kubernetes.io/component=kafka,pod" -o jsonpath='{.items[*].spec.ports[0].nodePort}' | tr ' ' '\n')"
 
 {{- else if eq "LoadBalancer" .Values.externalAccess.controller.service.type }}
     NOTE: It may take a few minutes for the LoadBalancer IPs to be available.
 
-        Watch the status with: 'kubectl get svc --namespace {{ include "common.names.namespace" . }} -l "app.kubernetes.io/name={{ template "kafka.name" . }},app.kubernetes.io/instance={{ .Release.Name }},app.kubernetes.io/component=kafka,pod" -w'
+        Watch the status with: 'kubectl get svc --namespace {{ include "common.names.namespace" . }} -l "app.kubernetes.io/instance={{ .Release.Name }},app.kubernetes.io/component=kafka,pod" -w'
 
     Kafka Brokers domain: You will have a different external IP for each Kafka broker. You can get the list of external IPs using the command below:
 
-        echo "$(kubectl get svc --namespace {{ include "common.names.namespace" . }} -l "app.kubernetes.io/name={{ template "kafka.name" . }},app.kubernetes.io/instance={{ .Release.Name }},app.kubernetes.io/component=kafka,pod" -o jsonpath='{.items[*].status.loadBalancer.ingress[0].ip}' | tr ' ' '\n')"
+        echo "$(kubectl get svc --namespace {{ include "common.names.namespace" . }} -l "app.kubernetes.io/instance={{ .Release.Name }},app.kubernetes.io/component=kafka,pod" -o jsonpath='{.items[*].status.loadBalancer.ingress[0].ip}' | tr ' ' '\n')"
 
     Kafka Brokers port: {{ .Values.externalAccess.controller.service.ports.external }}
 
@@ -226,7 +226,7 @@ To connect to your Kafka broker nodes from outside the cluster, follow these ins
 
         1. Obtain the pod name:
 
-        kubectl get pods --namespace {{ include "common.names.namespace" . }} -l "app.kubernetes.io/name={{ template "kafka.name" . }},app.kubernetes.io/instance={{ .Release.Name }},app.kubernetes.io/component=kafka"
+        kubectl get pods --namespace {{ include "common.names.namespace" . }} -l "app.kubernetes.io/instance={{ .Release.Name }},app.kubernetes.io/component=kafka"
 
         2. Obtain pod configuration:
 
@@ -235,16 +235,16 @@ To connect to your Kafka broker nodes from outside the cluster, follow these ins
     {{- end }}
     Kafka brokers port: You will have a different node port for each Kafka broker. You can get the list of configured node ports using the command below:
 
-        echo "$(kubectl get svc --namespace {{ include "common.names.namespace" . }} -l "app.kubernetes.io/name={{ template "kafka.name" . }},app.kubernetes.io/instance={{ .Release.Name }},app.kubernetes.io/component=kafka,pod" -o jsonpath='{.items[*].spec.ports[0].nodePort}' | tr ' ' '\n')"
+        echo "$(kubectl get svc --namespace {{ include "common.names.namespace" . }} -l "app.kubernetes.io/instance={{ .Release.Name }},app.kubernetes.io/component=kafka,pod" -o jsonpath='{.items[*].spec.ports[0].nodePort}' | tr ' ' '\n')"
 
 {{- else if eq "LoadBalancer" .Values.externalAccess.broker.service.type }}
     NOTE: It may take a few minutes for the LoadBalancer IPs to be available.
 
-        Watch the status with: 'kubectl get svc --namespace {{ include "common.names.namespace" . }} -l "app.kubernetes.io/name={{ template "kafka.name" . }},app.kubernetes.io/instance={{ .Release.Name }},app.kubernetes.io/component=kafka,pod" -w'
+        Watch the status with: 'kubectl get svc --namespace {{ include "common.names.namespace" . }} -l "app.kubernetes.io/instance={{ .Release.Name }},app.kubernetes.io/component=kafka,pod" -w'
 
     Kafka Brokers domain: You will have a different external IP for each Kafka broker. You can get the list of external IPs using the command below:
 
-        echo "$(kubectl get svc --namespace {{ include "common.names.namespace" . }} -l "app.kubernetes.io/name={{ template "kafka.name" . }},app.kubernetes.io/instance={{ .Release.Name }},app.kubernetes.io/component=kafka,pod" -o jsonpath='{.items[*].status.loadBalancer.ingress[0].ip}' | tr ' ' '\n')"
+        echo "$(kubectl get svc --namespace {{ include "common.names.namespace" . }} -l "app.kubernetes.io/instance={{ .Release.Name }},app.kubernetes.io/component=kafka,pod" -o jsonpath='{.items[*].status.loadBalancer.ingress[0].ip}' | tr ' ' '\n')"
 
     Kafka Brokers port: {{ .Values.externalAccess.broker.service.ports.external }}
 
@@ -331,8 +331,8 @@ ssl.endpoint.identification.algorithm=
 {{- end }}
 {{- end }}
 
+{{- include "common.warnings.resources" (dict "sections" (list "broker" "controller" "metrics.jmx" "provisioning" "defaultInitContainers.volumePermissions" "defaultInitContainers.prepareConfig" "defaultInitContainers.autoDiscovery") "context" $) }}
+{{- include "common.warnings.modifiedImages" (dict "images" (list .Values.image .Values.defaultInitContainers.volumePermissions.image .Values.defaultInitContainers.autoDiscovery.image .Values.metrics.jmx.image) "context" $) }}
+{{- include "common.errors.insecureImages" (dict "images" (list .Values.image .Values.defaultInitContainers.volumePermissions.image .Values.defaultInitContainers.autoDiscovery.image .Values.metrics.jmx.image) "context" $) }}
 {{- include "kafka.checkRollingTags" . }}
 {{- include "kafka.validateValues" . }}
-{{- include "common.warnings.resources" (dict "sections" (list "broker" "controller" "externalAccess.autoDiscovery" "metrics.jmx" "provisioning" "volumePermissions") "context" $) }}
-{{- include "common.warnings.modifiedImages" (dict "images" (list .Values.image .Values.externalAccess.autoDiscovery.image .Values.volumePermissions.image .Values.metrics.jmx.image) "context" $) }}
-{{- include "common.errors.insecureImages" (dict "images" (list .Values.image .Values.externalAccess.autoDiscovery.image .Values.volumePermissions.image .Values.metrics.jmx.image) "context" $) }}

File diff too large to display
+ 236 - 465
bitnami/kafka/templates/_helpers.tpl


+ 512 - 0
bitnami/kafka/templates/_init_containers.tpl

@@ -0,0 +1,512 @@
+{{/*
+Copyright Broadcom, Inc. All Rights Reserved.
+SPDX-License-Identifier: APACHE-2.0
+*/}}
+
+{{/* vim: set filetype=mustache: */}}
+
+{{/*
+Returns an init-container that changes the owner and group of the persistent volume(s) mountpoint(s) to 'runAsUser:fsGroup' on each node
+*/}}
+{{- define "kafka.defaultInitContainers.volumePermissions" -}}
+{{- $roleValues := index .context.Values .role -}}
+- name: volume-permissions
+  image: {{ include "kafka.volumePermissions.image" .context }}
+  imagePullPolicy: {{ .context.Values.defaultInitContainers.volumePermissions.image.pullPolicy | quote }}
+  {{- if .context.Values.defaultInitContainers.volumePermissions.containerSecurityContext.enabled }}
+  securityContext: {{- include "common.compatibility.renderSecurityContext" (dict "secContext" .context.Values.defaultInitContainers.volumePermissions.containerSecurityContext "context" .context) | nindent 4 }}
+  {{- end }}
+  {{- if .context.Values.defaultInitContainers.volumePermissions.resources }}
+  resources: {{- toYaml .context.Values.defaultInitContainers.volumePermissions.resources | nindent 4 }}
+  {{- else if ne .context.Values.defaultInitContainers.volumePermissions.resourcesPreset "none" }}
+  resources: {{- include "common.resources.preset" (dict "type" .context.Values.defaultInitContainers.volumePermissions.resourcesPreset) | nindent 4 }}
+  {{- end }}
+  command:
+    - /bin/bash
+  args:
+    - -ec
+    - |
+      mkdir -p {{ $roleValues.persistence.mountPath }} {{ $roleValues.logPersistence.mountPath }}
+      {{- if eq ( toString ( .context.Values.defaultInitContainers.volumePermissions.containerSecurityContext.runAsUser )) "auto" }}
+      find {{ $roleValues.persistence.mountPath }} -mindepth 1 -maxdepth 1 -not -name ".snapshot" -not -name "lost+found" |  xargs -r chown -R $(id -u):$(id -G | cut -d " " -f2)
+      find {{ $roleValues.logPersistence.mountPath }} -mindepth 1 -maxdepth 1 -not -name ".snapshot" -not -name "lost+found" |  xargs -r chown -R $(id -u):$(id -G | cut -d " " -f2)
+      {{- else }}
+      find {{ $roleValues.persistence.mountPath }} -mindepth 1 -maxdepth 1 -not -name ".snapshot" -not -name "lost+found" |  xargs -r chown -R {{ $roleValues.containerSecurityContext.runAsUser }}:{{ $roleValues.podSecurityContext.fsGroup }}
+      find {{ $roleValues.logPersistence.mountPath }} -mindepth 1 -maxdepth 1 -not -name ".snapshot" -not -name "lost+found" |  xargs -r chown -R {{ $roleValues.containerSecurityContext.runAsUser }}:{{ $roleValues.podSecurityContext.fsGroup }}
+      {{- end }}
+  volumeMounts:
+    - name: data
+      mountPath: {{ $roleValues.persistence.mountPath }}
+    - name: logs
+      mountPath: {{ $roleValues.logPersistence.mountPath }}
+{{- end -}}
+
+{{/*
+Returns an init-container that auto-discovers the external access details
+*/}}
+{{- define "kafka.defaultInitContainers.autoDiscovery" -}}
+{{- $externalAccess := index .context.Values.externalAccess .role }}
+- name: auto-discovery
+  image: {{ include "kafka.autoDiscovery.image" .context }}
+  imagePullPolicy: {{ .context.Values.defaultInitContainers.autoDiscovery.image.pullPolicy | quote }}
+  {{- if .context.Values.defaultInitContainers.autoDiscovery.containerSecurityContext.enabled }}
+  securityContext: {{- include "common.compatibility.renderSecurityContext" (dict "secContext" .context.Values.defaultInitContainers.autoDiscovery.containerSecurityContext "context" .context) | nindent 4 }}
+  {{- end }}
+  {{- if .context.Values.defaultInitContainers.autoDiscovery.resources }}
+  resources: {{- toYaml .context.Values.defaultInitContainers.autoDiscovery.resources | nindent 4 }}
+  {{- else if ne .context.Values.defaultInitContainers.autoDiscovery.resourcesPreset "none" }}
+  resources: {{- include "common.resources.preset" (dict "type" .context.Values.defaultInitContainers.autoDiscovery.resourcesPreset) | nindent 4 }}
+  {{- end }}
+  command:
+    - /bin/bash
+  args:
+    - -ec
+    - |
+      SVC_NAME="${MY_POD_NAME}-external"
+      AUTODISCOVERY_SERVICE_TYPE="${AUTODISCOVERY_SERVICE_TYPE:-}"
+
+      # Auxiliary functions
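+      # retry_while <cmd> [retries=12] [sleep_time=5]: re-runs <cmd> until it succeeds or the retries are exhausted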
+      retry_while() {
+          local -r cmd="${1:?cmd is missing}"
+          local -r retries="${2:-12}"
+          local -r sleep_time="${3:-5}"
+          local return_value=1
+          read -r -a command <<< "$cmd"
+          for ((i = 1 ; i <= retries ; i+=1 )); do
+              "${command[@]}" && return_value=0 && break
+              sleep "$sleep_time"
+          done
+          return $return_value
+      }
+      k8s_svc_lb_ip() {
+          local namespace=${1:?namespace is missing}
+          local service=${2:?service is missing}
+          local service_ip=$(kubectl get svc "$service" -n "$namespace" -o jsonpath="{.status.loadBalancer.ingress[0].ip}")
+          local service_hostname=$(kubectl get svc "$service" -n "$namespace" -o jsonpath="{.status.loadBalancer.ingress[0].hostname}")
+          if [[ -n ${service_ip} ]]; then
+              echo "${service_ip}"
+          else
+              echo "${service_hostname}"
+          fi
+      }
+      k8s_svc_lb_ip_ready() {
+          local namespace=${1:?namespace is missing}
+          local service=${2:?service is missing}
+          [[ -n "$(k8s_svc_lb_ip "$namespace" "$service")" ]]
+      }
+      k8s_svc_node_port() {
+          local namespace=${1:?namespace is missing}
+          local service=${2:?service is missing}
+          local index=${3:-0}
+          local node_port="$(kubectl get svc "$service" -n "$namespace" -o jsonpath="{.spec.ports[$index].nodePort}")"
+          echo "$node_port"
+      }
+
+      if [[ "$AUTODISCOVERY_SERVICE_TYPE" = "LoadBalancer" ]]; then
+          # Wait until LoadBalancer IP is ready
+          retry_while "k8s_svc_lb_ip_ready $MY_POD_NAMESPACE $SVC_NAME" || exit 1
+          # Obtain LoadBalancer external IP
+          k8s_svc_lb_ip "$MY_POD_NAMESPACE" "$SVC_NAME" | tee "/shared/external-host.txt"
+      elif [[ "$AUTODISCOVERY_SERVICE_TYPE" = "NodePort" ]]; then
+          k8s_svc_node_port "$MY_POD_NAMESPACE" "$SVC_NAME" | tee "/shared/external-port.txt"
+      else
+          echo "Unsupported autodiscovery service type: '$AUTODISCOVERY_SERVICE_TYPE'"
+          exit 1
+      fi
+
+  env:
+    - name: MY_POD_NAME
+      valueFrom:
+        fieldRef:
+          fieldPath: metadata.name
+    - name: MY_POD_NAMESPACE
+      valueFrom:
+        fieldRef:
+          fieldPath: metadata.namespace
+    - name: AUTODISCOVERY_SERVICE_TYPE
+      value: {{ $externalAccess.service.type | quote }}
+  volumeMounts:
+    - name: init-shared
+      mountPath: /shared
+{{- end -}}
+
+{{/*
+Returns an init-container that prepares the Kafka configuration files for the main containers to use
+*/}}
+{{- define "kafka.defaultInitContainers.prepareConfig" -}}
+{{- $roleValues := index .context.Values .role -}}
+{{- $externalAccessEnabled := or (and (eq .role "broker") .context.Values.externalAccess.enabled) (and (eq .role "controller") .context.Values.externalAccess.enabled (or .context.Values.externalAccess.controller.forceExpose (not .context.Values.controller.controllerOnly))) }}
+- name: prepare-config
+  image: {{ include "kafka.image" .context }}
+  imagePullPolicy: {{ .context.Values.image.pullPolicy }}
+  {{- if .context.Values.defaultInitContainers.prepareConfig.containerSecurityContext.enabled }}
+  securityContext: {{- include "common.compatibility.renderSecurityContext" (dict "secContext" .context.Values.defaultInitContainers.prepareConfig.containerSecurityContext "context" .context) | nindent 4 }}
+  {{- end }}
+  {{- if .context.Values.defaultInitContainers.prepareConfig.resources }}
+  resources: {{- toYaml .context.Values.defaultInitContainers.prepareConfig.resources | nindent 4 }}
+  {{- else if ne .context.Values.defaultInitContainers.prepareConfig.resourcesPreset "none" }}
+  resources: {{- include "common.resources.preset" (dict "type" .context.Values.defaultInitContainers.prepareConfig.resourcesPreset) | nindent 4 }}
+  {{- end }}
+  command:
+    - /bin/bash
+  args:
+    - -ec
+    - |
+      . /opt/bitnami/scripts/libkafka.sh
+
+      {{- if $externalAccessEnabled }}
+      configure_external_access() {
+          local host port
+          # Configure external hostname
+          if [[ -f "/shared/external-host.txt" ]]; then
+              host=$(cat "/shared/external-host.txt")
+          elif [[ -n "${EXTERNAL_ACCESS_HOST:-}" ]]; then
+              host="$EXTERNAL_ACCESS_HOST"
+          elif [[ -n "${EXTERNAL_ACCESS_HOSTS_LIST:-}" ]]; then
+              read -r -a hosts <<< "$(tr ',' ' ' <<<"${EXTERNAL_ACCESS_HOSTS_LIST}")"
+              host="${hosts[$POD_ID]}"
+          elif is_boolean_yes "$EXTERNAL_ACCESS_HOST_USE_PUBLIC_IP"; then
+              host=$(curl -s https://ipinfo.io/ip)
+          else
+              error "External access hostname not provided"
+          fi
+          # Configure external port
+          if [[ -f "/shared/external-port.txt" ]]; then
+              port=$(cat "/shared/external-port.txt")
+          elif [[ -n "${EXTERNAL_ACCESS_PORT:-}" ]]; then
+              port="$EXTERNAL_ACCESS_PORT"
+              if is_boolean_yes "${EXTERNAL_ACCESS_PORT_AUTOINCREMENT:-}"; then
+                  port="$((port + POD_ID))"
+              fi
+          elif [[ -n "${EXTERNAL_ACCESS_PORTS_LIST:-}" ]]; then
+              read -r -a ports <<<"$(tr ',' ' ' <<<"${EXTERNAL_ACCESS_PORTS_LIST}")"
+              port="${ports[$POD_ID]}"
+          else
+              error "External access port not provided"
+          fi
+          # Configure Kafka advertised listeners
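+          # e.g. appends ",EXTERNAL://203.0.113.10:9094" when the external listener is named EXTERNAL (host/port are illustrative)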
+          sed -i -E "s|^(advertised\.listeners=\S+)$|\1,${EXTERNAL_ACCESS_LISTENER_NAME}://${host}:${port}|" "$KAFKA_CONF_FILE"
+      }
+      {{- end }}
+      {{- if include "kafka.sslEnabled" .context }}
+      configure_kafka_tls() {
+          # Remove previously existing keystores and certificates, if any
+          rm -f /certs/kafka.keystore.jks /certs/kafka.truststore.jks
+          rm -f /certs/tls.crt /certs/tls.key /certs/ca.crt
+          find /certs -name "xx*" -exec rm {} \;
+          if [[ "${KAFKA_TLS_TYPE}" = "PEM" ]]; then
+              # Copy PEM certificate and key
+              if [[ -f "/mounted-certs/kafka-${POD_ROLE}-${POD_ID}.crt" && "/mounted-certs/kafka-${POD_ROLE}-${POD_ID}.key" ]]; then
+                  cp "/mounted-certs/kafka-${POD_ROLE}-${POD_ID}.crt" /certs/tls.crt
+                  # Copy the PEM key, ensuring it uses the PKCS#8 PEM format
+                  openssl pkcs8 -topk8 -nocrypt -passin pass:"${KAFKA_TLS_PEM_KEY_PASSWORD:-}" -in "/mounted-certs/kafka-${POD_ROLE}-${POD_ID}.key" > /certs/tls.key
+              elif [[ -f /mounted-certs/tls.crt && -f /mounted-certs/tls.key ]]; then
+                  cp "/mounted-certs/tls.crt" /certs/tls.crt
+                  # Copy the PEM key, ensuring it uses the PKCS#8 PEM format
+                  openssl pkcs8 -topk8 -passin pass:"${KAFKA_TLS_PEM_KEY_PASSWORD:-}" -nocrypt -in "/mounted-certs/tls.key" > /certs/tls.key
+              else
+                  error "PEM key and cert files not found"
+              fi
+      {{- if not .context.Values.tls.pemChainIncluded }}
+              # Copy CA certificate
+              if [[ -f /mounted-certs/ca.crt ]]; then
+                  cp /mounted-certs/ca.crt /certs/ca.crt
+              else
+                  error "CA certificate file not found"
+              fi
+      {{- else }}
+              # CA certificates are also included in the same certificate
+              # All public certs will be included in the truststore
+              cp /certs/tls.crt /certs/ca.crt
+      {{- end }}
+              # Create JKS keystore from PEM cert and key
+              openssl pkcs12 -export -in "/certs/tls.crt" \
+                  -passout pass:"$KAFKA_TLS_KEYSTORE_PASSWORD" \
+                  -inkey "/certs/tls.key" \
+                  -out "/certs/kafka.keystore.p12"
+              keytool -importkeystore -srckeystore "/certs/kafka.keystore.p12" \
+                  -srcstoretype PKCS12 \
+                  -srcstorepass "$KAFKA_TLS_KEYSTORE_PASSWORD" \
+                  -deststorepass "$KAFKA_TLS_KEYSTORE_PASSWORD" \
+                  -destkeystore "/certs/kafka.keystore.jks" \
+                  -noprompt
+              # Create JKS truststore from CA cert
+              keytool -keystore /certs/kafka.truststore.jks -alias CARoot -import -file /certs/ca.crt -storepass "$KAFKA_TLS_TRUSTSTORE_PASSWORD" -noprompt
+              # Remove extra files
+              rm -f "/certs/kafka.keystore.p12" "/certs/tls.crt" "/certs/tls.key" "/certs/ca.crt"
+          elif [[ "$KAFKA_TLS_TYPE" = "JKS" ]]; then
+              if [[ -f "/mounted-certs/kafka-${POD_ROLE}-${POD_ID}.keystore.jks" ]]; then
+                  cp "/mounted-certs/kafka-${POD_ROLE}-${POD_ID}.keystore.jks" /certs/kafka.keystore.jks
+              elif [[ -f "$KAFKA_TLS_KEYSTORE_FILE" ]]; then
+                  cp "$KAFKA_TLS_KEYSTORE_FILE" /certs/kafka.keystore.jks
+              else
+                  error "Keystore file not found"
+              fi
+              if [[ -f "$KAFKA_TLS_TRUSTSTORE_FILE" ]]; then
+                  cp "$KAFKA_TLS_TRUSTSTORE_FILE" /certs/kafka.truststore.jks
+              else
+                  error "Truststore file not found"
+              fi
+          else
+              error "Invalid type $KAFKA_TLS_TYPE"
+          fi
+          # Configure TLS password settings in Kafka configuration
+          [[ -n "${KAFKA_TLS_KEYSTORE_PASSWORD:-}" ]] && kafka_server_conf_set "ssl.keystore.password" "$KAFKA_TLS_KEYSTORE_PASSWORD"
+          [[ -n "${KAFKA_TLS_TRUSTSTORE_PASSWORD:-}" ]] && kafka_server_conf_set "ssl.truststore.password" "$KAFKA_TLS_TRUSTSTORE_PASSWORD"
+          [[ -n "${KAFKA_TLS_PEM_KEY_PASSWORD:-}" ]] && kafka_server_conf_set "ssl.key.password" "$KAFKA_TLS_PEM_KEY_PASSWORD"
+          # Avoid errors caused by previous checks
+          true
+      }
+      {{- end }}
+      {{- if include "kafka.saslEnabled" .context }}
+      configure_kafka_sasl() {
+          # Replace placeholders with passwords
+      {{- if regexFind "SASL" (upper .context.Values.listeners.interbroker.protocol) }}
+        {{- if include "kafka.saslUserPasswordsEnabled" .context }}
+          replace_in_file "$KAFKA_CONF_FILE" "interbroker-password-placeholder" "$KAFKA_INTER_BROKER_PASSWORD"
+        {{- end }}
+        {{- if include "kafka.saslClientSecretsEnabled" .context }}
+          replace_in_file "$KAFKA_CONF_FILE" "interbroker-client-secret-placeholder" "$KAFKA_INTER_BROKER_CLIENT_SECRET"
+        {{- end }}
+      {{- end }}
+      {{- if regexFind "SASL" (upper .context.Values.listeners.controller.protocol) }}
+        {{- if include "kafka.saslUserPasswordsEnabled" .context }}
+          replace_in_file "$KAFKA_CONF_FILE" "controller-password-placeholder" "$KAFKA_CONTROLLER_PASSWORD"
+        {{- end }}
+        {{- if include "kafka.saslClientSecretsEnabled" .context }}
+          replace_in_file "$KAFKA_CONF_FILE" "controller-client-secret-placeholder" "$KAFKA_CONTROLLER_CLIENT_SECRET"
+        {{- end }}
+      {{- end }}
+      {{- if include "kafka.client.saslEnabled" .context }}
+          read -r -a passwords <<< "$(tr ',;' ' ' <<<"${KAFKA_CLIENT_PASSWORDS:-}")"
+          for ((i = 0; i < ${#passwords[@]}; i++)); do
+              replace_in_file "$KAFKA_CONF_FILE" "password-placeholder-${i}\"" "${passwords[i]}\""
+          done
+      {{- end }}
+      }
+      {{- end }}
+      {{- if .context.Values.brokerRackAwareness.enabled }}
+      configure_kafka_broker_rack() {
+          local -r metadata_api_ip="169.254.169.254"
+          local broker_rack=""
+      {{- if eq .context.Values.brokerRackAwareness.cloudProvider "aws-az" }}
+          echo "Obtaining broker.rack for aws-az rack assignment"
+          ec2_metadata_token=$(curl -X PUT "http://${metadata_api_ip}/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 60")
+          broker_rack=$(curl -H "X-aws-ec2-metadata-token: $ec2_metadata_token" "http://${metadata_api_ip}/latest/meta-data/placement/availability-zone-id")
+      {{- else if eq .context.Values.brokerRackAwareness.cloudProvider "azure" }}
+          echo "Obtaining broker.rack for azure rack assignment"
+          location=$(curl -s -H Metadata:true --noproxy "*" "http://${metadata_api_ip}/metadata/instance/compute/location?api-version={{ .context.Values.brokerRackAwareness.azureApiVersion }}&format=text")
+          zone=$(curl -s -H Metadata:true --noproxy "*" "http://${metadata_api_ip}/metadata/instance/compute/zone?api-version={{ .context.Values.brokerRackAwareness.azureApiVersion }}&format=text")
+          broker_rack="${location}-${zone}"
+      {{- end }}
+          kafka_server_conf_set "broker.rack" "$broker_rack"
+      }
+      {{- end }}
+      {{- if and $externalAccessEnabled .context.Values.defaultInitContainers.autoDiscovery.enabled }}
+      # Wait for autodiscovery to finish
+      retry_while "test -f /shared/external-host.txt -o -f /shared/external-port.txt" || error "Timed out waiting for autodiscovery init-container"
+      {{- end }}
+
+      cp /configmaps/server.properties $KAFKA_CONF_FILE
+
+      # Get pod ID and role, last and second last fields in the pod name respectively
+      POD_ID="${MY_POD_NAME##*-}"
+      POD_ROLE="${MY_POD_NAME%-*}"; POD_ROLE="${POD_ROLE##*-}"
+
+      # Configure node.id
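+      # e.g. with KAFKA_MIN_ID=100, pod ordinal 2 gets node.id=102, unless meta.properties already records an id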
+      ID=$((POD_ID + KAFKA_MIN_ID))
+      [[ -f "/bitnami/kafka/data/meta.properties" ]] && ID="$(grep "node.id" /bitnami/kafka/data/meta.properties | awk -F '=' '{print $2}')"
+      kafka_server_conf_set "node.id" "$ID"
+      # Configure initial controllers
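+      # Each entry follows <node.id>@<host>:<port>:<directory.id>; entries are comma-joined into /shared/initial-controllers.txt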
+      if [[ "controller" =~ "$POD_ROLE" ]]; then
+          INITIAL_CONTROLLERS=()
+          for ((i = 0; i < {{ int .context.Values.controller.replicaCount }}; i++)); do
+              var="KAFKA_CONTROLLER_${i}_DIR_ID"; DIR_ID="${!var}"
+              [[ $i -eq $POD_ID ]] && [[ -f "/bitnami/kafka/data/meta.properties" ]] && DIR_ID="$(grep "directory.id" /bitnami/kafka/data/meta.properties | awk -F '=' '{print $2}')"
+              INITIAL_CONTROLLERS+=("${i}@${KAFKA_FULLNAME}-${POD_ROLE}-${i}.${KAFKA_CONTROLLER_SVC_NAME}.${MY_POD_NAMESPACE}.svc.${CLUSTER_DOMAIN}:${KAFKA_CONTROLLER_PORT}:${DIR_ID}")
+          done
+          echo "${INITIAL_CONTROLLERS[*]}" | awk -v OFS=',' '{$1=$1}1' > /shared/initial-controllers.txt
+      fi
+      {{- if not .context.Values.listeners.advertisedListeners }}
+      replace_in_file "$KAFKA_CONF_FILE" "advertised-address-placeholder" "${MY_POD_NAME}.${KAFKA_FULLNAME}-${POD_ROLE}-headless.${MY_POD_NAMESPACE}.svc.${CLUSTER_DOMAIN}"
+      {{- if $externalAccessEnabled }}
+      configure_external_access
+      {{- end }}
+      {{- end }}
+      {{- if include "kafka.sslEnabled" .context }}
+      configure_kafka_tls
+      {{- end }}
+      {{- if include "kafka.saslEnabled" .context }}
+      sasl_env_vars=(
+        KAFKA_CLIENT_PASSWORDS
+        KAFKA_INTER_BROKER_PASSWORD
+        KAFKA_INTER_BROKER_CLIENT_SECRET
+        KAFKA_CONTROLLER_PASSWORD
+        KAFKA_CONTROLLER_CLIENT_SECRET
+      )
+      for env_var in "${sasl_env_vars[@]}"; do
+          file_env_var="${env_var}_FILE"
+          if [[ -n "${!file_env_var:-}" ]]; then
+              if [[ -r "${!file_env_var:-}" ]]; then
+                  export "${env_var}=$(< "${!file_env_var}")"
+                  unset "${file_env_var}"
+              else
+                  warn "Skipping export of '${env_var}'. '${!file_env_var:-}' is not readable."
+              fi
+          fi
+      done
+      configure_kafka_sasl
+      {{- end }}
+      {{- if .context.Values.brokerRackAwareness.enabled }}
+      configure_kafka_broker_rack
+      {{- end }}
+      if [[ -f /secret-config/server-secret.properties ]]; then
+          cat /secret-config/server-secret.properties >> $KAFKA_CONF_FILE
+      fi
+
+      {{- include "common.tplvalues.render" ( dict "value" .context.Values.defaultInitContainers.prepareConfig.extraInit "context" .context ) | nindent 6 }}
+  env:
+    - name: BITNAMI_DEBUG
+      value: {{ ternary "true" "false" (or .context.Values.image.debug .context.Values.diagnosticMode.enabled) | quote }}
+    - name: MY_POD_NAME
+      valueFrom:
+        fieldRef:
+            fieldPath: metadata.name
+    - name: MY_POD_NAMESPACE
+      valueFrom:
+        fieldRef:
+          fieldPath: metadata.namespace
+    - name: KAFKA_FULLNAME
+      value: {{ include "common.names.fullname" .context | quote }}
+    - name: CLUSTER_DOMAIN
+      value: {{ .context.Values.clusterDomain | quote }}
+    - name: KAFKA_VOLUME_DIR
+      value: {{ $roleValues.persistence.mountPath | quote }}
+    - name: KAFKA_CONF_FILE
+      value: /config/server.properties
+    - name: KAFKA_MIN_ID
+      value: {{ $roleValues.minId | quote }}
+    - name: KAFKA_CONTROLLER_SVC_NAME
+      value: {{ printf "%s-headless" (include "kafka.controller.fullname" .context) | trunc 63 | trimSuffix "-" }}
+    - name: KAFKA_CONTROLLER_PORT
+      value: {{ .context.Values.listeners.controller.containerPort | quote }}
+    {{- $kraftSecret := default (printf "%s-kraft" (include "common.names.fullname" .context)) .context.Values.existingKraftSecret }}
+    {{- range $i := until (int .context.Values.controller.replicaCount) }}
+    - name: KAFKA_CONTROLLER_{{ $i }}_DIR_ID
+      valueFrom:
+        secretKeyRef:
+          name: {{ $kraftSecret }}
+          key: controller-{{ $i }}-id
+    {{- end }}
+    {{- if $externalAccessEnabled }}
+    - name: EXTERNAL_ACCESS_LISTENER_NAME
+      value: {{ upper .context.Values.listeners.external.name | quote }}
+    {{- $externalAccess := index .context.Values.externalAccess .role }}
+    {{- if eq $externalAccess.service.type "LoadBalancer" }}
+    {{- if not .context.Values.defaultInitContainers.autoDiscovery.enabled }}
+    - name: EXTERNAL_ACCESS_HOSTS_LIST
+      value: {{ join "," (default $externalAccess.service.loadBalancerIPs $externalAccess.service.loadBalancerNames) | quote }}
+    {{- end }}
+    - name: EXTERNAL_ACCESS_PORT
+      value: {{ $externalAccess.service.ports.external | quote }}
+    {{- else if eq $externalAccess.service.type "NodePort" }}
+    {{- if $externalAccess.service.domain }}
+    - name: EXTERNAL_ACCESS_HOST
+      value: {{ $externalAccess.service.domain | quote }}
+    {{- else if and $externalAccess.service.usePodIPs .context.Values.defaultInitContainers.autoDiscovery.enabled }}
+    - name: MY_POD_IP
+      valueFrom:
+        fieldRef:
+          fieldPath: status.podIP
+    - name: EXTERNAL_ACCESS_HOST
+      value: "$(MY_POD_IP)"
+    {{- else if or $externalAccess.service.useHostIPs .context.Values.defaultInitContainers.autoDiscovery.enabled }}
+    - name: HOST_IP
+      valueFrom:
+        fieldRef:
+          fieldPath: status.hostIP
+    - name: EXTERNAL_ACCESS_HOST
+      value: "$(HOST_IP)"
+    {{- else if and $externalAccess.service.externalIPs (not .context.Values.defaultInitContainers.autoDiscovery.enabled) }}
+    - name: EXTERNAL_ACCESS_HOSTS_LIST
+      value: {{ join "," $externalAccess.service.externalIPs }}
+    {{- else }}
+    - name: EXTERNAL_ACCESS_HOST_USE_PUBLIC_IP
+      value: "true"
+    {{- end }}
+    {{- if not .context.Values.defaultInitContainers.autoDiscovery.enabled }}
+    {{- if and $externalAccess.service.externalIPs (empty $externalAccess.service.nodePorts)}}
+    - name: EXTERNAL_ACCESS_PORT
+      value: {{ $externalAccess.service.ports.external | quote }}
+    {{- else }}
+    - name: EXTERNAL_ACCESS_PORTS_LIST
+      value: {{ join "," $externalAccess.service.nodePorts | quote }}
+    {{- end }}
+    {{- end }}
+    {{- else if eq $externalAccess.service.type "ClusterIP" }}
+    - name: EXTERNAL_ACCESS_HOST
+      value: {{ $externalAccess.service.domain | quote }}
+    - name: EXTERNAL_ACCESS_PORT
+      value: {{ $externalAccess.service.ports.external | quote }}
+    - name: EXTERNAL_ACCESS_PORT_AUTOINCREMENT
+      value: "true"
+    {{- end }}
+    {{- end }}
+    {{- if include "kafka.saslEnabled" .context }}
+    {{- include "kafka.saslEnv" .context | nindent 4 }}
+    {{- end }}
+    {{- if include "kafka.sslEnabled" .context }}
+    - name: KAFKA_TLS_TYPE
+      value: {{ ternary "PEM" "JKS" (or .context.Values.tls.autoGenerated.enabled (eq (upper .context.Values.tls.type) "PEM")) }}
+    {{- if eq (upper .context.Values.tls.type) "JKS" }}
+    - name: KAFKA_TLS_KEYSTORE_FILE
+      value: {{ printf "/mounted-certs/%s" ( default "kafka.keystore.jks" .context.Values.tls.jksKeystoreKey) | quote }}
+    - name: KAFKA_TLS_TRUSTSTORE_FILE
+      value: {{ printf "/mounted-certs/%s" ( default "kafka.truststore.jks" .context.Values.tls.jksTruststoreKey) | quote }}
+    {{- end }}
+    - name: KAFKA_TLS_KEYSTORE_PASSWORD
+      valueFrom:
+        secretKeyRef:
+          name: {{ include "kafka.tlsPasswordsSecretName" .context }}
+          key: {{ .context.Values.tls.passwordsSecretKeystoreKey | quote }}
+    - name: KAFKA_TLS_TRUSTSTORE_PASSWORD
+      valueFrom:
+        secretKeyRef:
+          name: {{ include "kafka.tlsPasswordsSecretName" .context }}
+          key: {{ .context.Values.tls.passwordsSecretTruststoreKey | quote }}
+    {{- if and (not .context.Values.tls.autoGenerated.enabled) (or .context.Values.tls.keyPassword (and .context.Values.tls.passwordsSecret .context.Values.tls.passwordsSecretPemPasswordKey)) }}
+    - name: KAFKA_TLS_PEM_KEY_PASSWORD
+      valueFrom:
+        secretKeyRef:
+          name: {{ include "kafka.tlsPasswordsSecretName" .context }}
+          key: {{ default "key-password" .context.Values.tls.passwordsSecretPemPasswordKey | quote }}
+    {{- end }}
+    {{- end }}
+  volumeMounts:
+    - name: data
+      mountPath: /bitnami/kafka
+    - name: kafka-config
+      mountPath: /config
+    - name: kafka-configmaps
+      mountPath: /configmaps
+    - name: kafka-secret-config
+      mountPath: /secret-config
+    - name: tmp
+      mountPath: /tmp
+    - name: init-shared
+      mountPath: /shared
+    {{- if include "kafka.sslEnabled" .context }}
+    - name: kafka-shared-certs
+      mountPath: /certs
+    {{- if and (include "kafka.sslEnabled" .context) (or .context.Values.tls.existingSecret .context.Values.tls.autoGenerated.enabled) }}
+    - name: kafka-certs
+      mountPath: /mounted-certs
+      readOnly: true
+    {{- end }}
+    {{- end }}
+    {{- if and .context.Values.usePasswordFiles (include "kafka.saslEnabled" .context) }}
+    - name: kafka-sasl
+      mountPath: /opt/bitnami/kafka/config/secrets
+      readOnly: true
+    {{- end }}
+{{- end -}}

+ 4 - 6
bitnami/kafka/templates/broker/config-secrets.yaml

@@ -5,16 +5,14 @@ SPDX-License-Identifier: APACHE-2.0
 
 {{- $replicaCount := int .Values.broker.replicaCount }}
 {{- if and (include "kafka.broker.createSecretConfig" .) (gt $replicaCount 0) }}
-{{- $secretName := printf "%s-broker-secret-configuration" (include "common.names.fullname" .) }}
 apiVersion: v1
 kind: Secret
 metadata:
-  name: {{ $secretName }}
+  name: {{ printf "%s-secret-configuration" (include "kafka.broker.fullname" .) }}
   namespace: {{ include "common.names.namespace" . | quote }}
-  labels: {{- include "common.labels.standard" . | nindent 4 }}
-    {{- if .Values.commonLabels }}
-    {{- include "common.tplvalues.render" ( dict "value" .Values.commonLabels "context" $ ) | nindent 4 }}
-    {{- end }}
+  labels: {{- include "common.labels.standard" ( dict "customLabels" .Values.commonLabels "context" $ ) | nindent 4 }}
+    app.kubernetes.io/component: broker
+    app.kubernetes.io/part-of: kafka
   {{- if .Values.commonAnnotations }}
   annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }}
   {{- end }}

+ 32 - 35
bitnami/kafka/templates/broker/configmap.yaml

@@ -3,12 +3,35 @@ Copyright Broadcom, Inc. All Rights Reserved.
 SPDX-License-Identifier: APACHE-2.0
 */}}
 
+{{/*
+Return the Kafka broker configuration.
+ref: https://kafka.apache.org/documentation/#configuration
+*/}}
+{{- define "kafka.broker.config" -}}
+{{- if or .Values.config .Values.broker.config }}
+{{- include "common.tplvalues.render" (dict "value" (coalesce .Values.broker.config .Values.config) "context" .) }}
+{{- else }}
+# Listeners configuration
+listeners: {{ include "kafka.listeners" (dict "isController" false "context" .) }}
+listener.security.protocol.map: {{ include "kafka.securityProtocolMap" . }}
+advertised.listeners: {{ include "kafka.advertisedListeners" . }}
+# Kafka data logs directory
+log.dir: {{ printf "%s/data" .Values.broker.persistence.mountPath }}
+# Kafka application logs directory
+logs.dir: {{ .Values.broker.logPersistence.mountPath }}
+# KRaft node role
+process.roles: broker
+# Common Kafka Configuration
+{{ include "kafka.commonConfig" . }}
+{{- end -}}
+{{- end -}}
+
 {{- $replicaCount := int .Values.broker.replicaCount }}
 {{- if and (include "kafka.broker.createConfigmap" .) (gt $replicaCount 0) }}
 apiVersion: v1
 kind: ConfigMap
 metadata:
-  name: {{ printf "%s-broker-configuration" (include "common.names.fullname" .) }}
+  name: {{ printf "%s-configuration" (include "kafka.broker.fullname" .) }}
   namespace: {{ include "common.names.namespace" . | quote }}
   labels: {{- include "common.labels.standard" ( dict "customLabels" .Values.commonLabels "context" $ ) | nindent 4 }}
     app.kubernetes.io/component: broker
@@ -17,40 +40,14 @@ metadata:
   annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }}
   {{- end }}
 data:
-  {{- if or .Values.config .Values.broker.config }}
-  server.properties: {{- include "common.tplvalues.render" ( dict "value" (coalesce .Values.broker.config .Values.config) "context" $ ) | nindent 4 }}
-  {{- else }}
+  {{- $configuration := include "kafka.broker.config" . | fromYaml -}}
+  {{- if or .Values.overrideConfiguration .Values.broker.overrideConfiguration }}
+  {{- $overrideConfiguration := include "common.tplvalues.render" (dict "value" .Values.overrideConfiguration "context" .) | fromYaml }}
+  {{- $brokerOverrideConfiguration := include "common.tplvalues.render" (dict "value" .Values.broker.overrideConfiguration "context" .) | fromYaml }}
+  {{- $configuration = mustMergeOverwrite $configuration $overrideConfiguration $brokerOverrideConfiguration }}
+  {{- end }}
   server.properties: |-
-    # Listeners configuration
-    listeners={{ include "kafka.listeners" ( dict "isController" false "context" $ ) }}
-    listener.security.protocol.map={{ include "kafka.securityProtocolMap" . }}
-    advertised.listeners={{ include "kafka.advertisedListeners" . }}
-    {{- if .Values.kraft.enabled }}
-    {{- if not .Values.broker.zookeeperMigrationMode }}
-    # KRaft node role
-    process.roles=broker
-    {{- end -}}
-    {{- include "kafka.kraftConfig" . | nindent 4 }}
-    {{- end }}
-    {{- if or .Values.zookeeper.enabled .Values.externalZookeeper.servers }}
-    # Zookeeper configuration
-    {{- include "kafka.zookeeperConfig" . | nindent 4 }}
-    {{- if .Values.broker.zookeeperMigrationMode }}
-    zookeeper.metadata.migration.enable=true
-    inter.broker.protocol.version={{ default (regexFind "^[0-9].[0-9]+" .Chart.AppVersion) .Values.interBrokerProtocolVersion }}
+    {{- range $key, $value := $configuration }}
+    {{ $key }}={{ include "common.tplvalues.render" (dict "value" $value "context" $) }}
     {{- end }}
-    {{- end }}
-    # Kafka data logs directory
-    log.dir={{ printf "%s/data" .Values.broker.persistence.mountPath }}
-    # Kafka application logs directory
-    logs.dir={{ .Values.broker.logPersistence.mountPath }}
-
-    # Common Kafka Configuration
-    {{- include "kafka.commonConfig" . | nindent 4 }}
-
-    # Custom Kafka Configuration
-    {{- include "common.tplvalues.render" ( dict "value" .Values.extraConfig "context" $ ) | nindent 4 }}
-    {{- include "common.tplvalues.render" ( dict "value" .Values.broker.extraConfig "context" $ ) | nindent 4 }}
-    {{- include "kafka.properties.render" (merge .Values.broker.extraConfigYaml .Values.extraConfigYaml ) | nindent 4 }}
-  {{- end }}
 {{- end }}

+ 3 - 4
bitnami/kafka/templates/broker/hpa.yaml

@@ -3,12 +3,11 @@ Copyright Broadcom, Inc. All Rights Reserved.
 SPDX-License-Identifier: APACHE-2.0
 */}}
 
-{{- $replicaCount := int .Values.broker.replicaCount }}
-{{- if and (gt $replicaCount 0) .Values.broker.autoscaling.hpa.enabled }}
+{{- if .Values.broker.autoscaling.hpa.enabled }}
 apiVersion: {{ include "common.capabilities.hpa.apiVersion" ( dict "context" $ ) }}
 kind: HorizontalPodAutoscaler
 metadata:
-  name: {{ printf "%s-broker" (include "common.names.fullname" .) }}
+  name: {{ template "kafka.broker.fullname" . }}
   namespace: {{ include "common.names.namespace" . | quote }}
   labels: {{- include "common.labels.standard" ( dict "customLabels" .Values.commonLabels "context" $ ) | nindent 4 }}
     app.kubernetes.io/component: broker
@@ -21,7 +20,7 @@ spec:
   scaleTargetRef:
     apiVersion: {{ template "common.capabilities.statefulset.apiVersion" . }}
     kind: StatefulSet
-    name: {{ printf "%s-broker" (include "common.names.fullname" .) }}
+    name: {{ template "kafka.broker.fullname" . }}
   minReplicas: {{ .Values.broker.autoscaling.hpa.minReplicas }}
   maxReplicas: {{ .Values.broker.autoscaling.hpa.maxReplicas }}
   metrics:

+ 1 - 1
bitnami/kafka/templates/broker/pdb.yaml

@@ -7,7 +7,7 @@ SPDX-License-Identifier: APACHE-2.0
 apiVersion: {{ include "common.capabilities.policy.apiVersion" . }}
 kind: PodDisruptionBudget
 metadata:
-  name: {{ printf "%s-broker" (include "common.names.fullname" .) }}
+  name: {{ template "kafka.broker.fullname" . }}
   namespace: {{ include "common.names.namespace" . | quote }}
   labels: {{- include "common.labels.standard" ( dict "customLabels" .Values.commonLabels "context" $ ) | nindent 4 }}
     app.kubernetes.io/component: broker

+ 39 - 142
bitnami/kafka/templates/broker/statefulset.yaml

@@ -4,11 +4,11 @@ SPDX-License-Identifier: APACHE-2.0
 */}}
 
 {{- $replicaCount := int .Values.broker.replicaCount }}
-{{- if or (gt $replicaCount 0) .Values.broker.autoscaling.enabled }}
+{{- if or (gt $replicaCount 0) .Values.broker.autoscaling.hpa.enabled }}
 apiVersion: {{ include "common.capabilities.statefulset.apiVersion" . }}
 kind: StatefulSet
 metadata:
-  name: {{ printf "%s-broker" (include "common.names.fullname" .) }}
+  name: {{ template "kafka.broker.fullname" . }}
   namespace: {{ include "common.names.namespace" . | quote }}
   labels: {{- include "common.labels.standard" ( dict "customLabels" .Values.commonLabels "context" $ ) | nindent 4 }}
     app.kubernetes.io/component: broker
@@ -18,7 +18,7 @@ metadata:
   {{- end }}
 spec:
   podManagementPolicy: {{ .Values.broker.podManagementPolicy }}
-  {{- if not .Values.broker.autoscaling.enabled }}
+  {{- if not .Values.broker.autoscaling.hpa.enabled }}
   replicas: {{ .Values.broker.replicaCount }}
   {{- end }}
   {{- $podLabels := include "common.tplvalues.merge" ( dict "values" ( list .Values.broker.podLabels .Values.commonLabels ) "context" . ) }}
@@ -26,9 +26,9 @@ spec:
     matchLabels: {{- include "common.labels.matchLabels" ( dict "customLabels" $podLabels "context" $ ) | nindent 6 }}
       app.kubernetes.io/component: broker
       app.kubernetes.io/part-of: kafka
-  serviceName: {{ printf "%s-broker-headless" (include "common.names.fullname" .) | trunc 63 | trimSuffix "-" }}
+  serviceName: {{ printf "%s-headless" (include "kafka.broker.fullname" .) | trunc 63 | trimSuffix "-" }}
   updateStrategy: {{- include "common.tplvalues.render" (dict "value" .Values.broker.updateStrategy "context" $ ) | nindent 4 }}
-  {{- if and .Values.broker.minReadySeconds (semverCompare ">= 1.23-0" (include "common.capabilities.kubeVersion" .)) }}
+  {{- if .Values.broker.minReadySeconds }}
   minReadySeconds: {{ .Values.broker.minReadySeconds }}
   {{- end }}
   template:
@@ -37,16 +37,16 @@ spec:
         app.kubernetes.io/component: broker
         app.kubernetes.io/part-of: kafka
       annotations:
-        {{- if (include "kafka.broker.createConfigmap" .) }}
+        {{- if include "kafka.broker.createConfigmap" . }}
         checksum/configuration: {{ include (print $.Template.BasePath "/broker/configmap.yaml") . | sha256sum }}
         {{- end }}
-        {{- if (include "kafka.createSaslSecret" .) }}
+        {{- if include "kafka.createSaslSecret" . }}
         checksum/passwords-secret: {{ include (print $.Template.BasePath "/secrets.yaml") . | sha256sum }}
         {{- end }}
-         {{- if (include "kafka.createTlsSecret" .) }}
+        {{- if include "kafka.createTlsSecret" . }}
         checksum/tls-secret: {{ include (print $.Template.BasePath "/tls-secret.yaml") . | sha256sum }}
         {{- end }}
-        {{- if (include "kafka.metrics.jmx.createConfigmap" .) }}
+        {{- if include "kafka.metrics.jmx.createConfigmap" . }}
         checksum/jmx-configuration: {{ include (print $.Template.BasePath "/metrics/jmx-configmap.yaml") . | sha256sum }}
         {{- end }}
         {{- if .Values.broker.podAnnotations }}
@@ -95,39 +95,13 @@ spec:
       serviceAccountName: {{ include "kafka.serviceAccountName" . }}
       enableServiceLinks: {{ .Values.broker.enableServiceLinks }}
       initContainers:
-        {{- if and .Values.volumePermissions.enabled .Values.broker.persistence.enabled }}
-        - name: volume-permissions
-          image: {{ include "kafka.volumePermissions.image" . }}
-          imagePullPolicy: {{ .Values.volumePermissions.image.pullPolicy | quote }}
-          command:
-            - /bin/bash
-          args:
-            - -ec
-            - |
-              mkdir -p "{{ .Values.broker.persistence.mountPath }}" "{{ .Values.broker.logPersistence.mountPath }}"
-              chown -R {{ .Values.broker.containerSecurityContext.runAsUser }}:{{ .Values.broker.podSecurityContext.fsGroup }} "{{ .Values.broker.persistence.mountPath }}" "{{ .Values.broker.logPersistence.mountPath }}"
-              find "{{ .Values.broker.persistence.mountPath }}" -mindepth 1 -maxdepth 1 -not -name ".snapshot" -not -name "lost+found" | xargs -r chown -R {{ .Values.broker.containerSecurityContext.runAsUser }}:{{ .Values.broker.podSecurityContext.fsGroup }}
-              find "{{ .Values.broker.logPersistence.mountPath }}" -mindepth 1 -maxdepth 1 -not -name ".snapshot" -not -name "lost+found" | xargs -r chown -R {{ .Values.broker.containerSecurityContext.runAsUser }}:{{ .Values.broker.podSecurityContext.fsGroup }}
-          {{- if eq ( toString ( .Values.volumePermissions.containerSecurityContext.runAsUser )) "auto" }}
-          securityContext: {{- omit .Values.volumePermissions.containerSecurityContext "runAsUser" | toYaml | nindent 12 }}
-          {{- else }}
-          securityContext: {{- .Values.volumePermissions.containerSecurityContext | toYaml | nindent 12 }}
-          {{- end }}
-          {{- if .Values.volumePermissions.resources }}
-          resources: {{- toYaml .Values.volumePermissions.resources | nindent 12 }}
-          {{- else if ne .Values.volumePermissions.resourcesPreset "none" }}
-          resources: {{- include "common.resources.preset" (dict "type" .Values.volumePermissions.resourcesPreset) | nindent 12 }}
-          {{- end }}
-          volumeMounts:
-            - name: data
-              mountPath: {{ .Values.broker.persistence.mountPath }}
-            - name: logs
-              mountPath: {{ .Values.broker.logPersistence.mountPath }}
+        {{- if and .Values.defaultInitContainers.volumePermissions.enabled .Values.broker.persistence.enabled }}
+        {{- include "kafka.defaultInitContainers.volumePermissions" (dict "context" . "role" "broker") | nindent 8 }}
         {{- end }}
-        {{- if and .Values.externalAccess.enabled .Values.externalAccess.autoDiscovery.enabled }}
-        {{- include "kafka.autoDiscoveryInitContainer" ( dict "role" "broker" "context" $) | nindent 8 }}
+        {{- if and .Values.externalAccess.enabled .Values.defaultInitContainers.autoDiscovery.enabled }}
+        {{- include "kafka.defaultInitContainers.autoDiscovery" (dict "context" . "role" "broker") | nindent 8 }}
         {{- end }}
-        {{- include "kafka.prepareKafkaInitContainer" ( dict "role" "broker" "context" $) | nindent 8 }}
+        {{- include "kafka.defaultInitContainers.prepareConfig" (dict "context" . "role" "broker") | nindent 8 }}
         {{- if .Values.broker.initContainers }}
         {{- include "common.tplvalues.render" ( dict "value" .Values.broker.initContainers "context" $ ) | nindent 8 }}
         {{- end }}
@@ -152,83 +126,11 @@ spec:
           args: {{- include "common.tplvalues.render" (dict "value" .Values.broker.args "context" $) | nindent 12 }}
           {{- end }}
           env:
-            - name: BITNAMI_DEBUG
-              value: {{ ternary "true" "false" (or .Values.image.debug .Values.diagnosticMode.enabled) | quote }}
             - name: KAFKA_HEAP_OPTS
               value: {{ coalesce .Values.broker.heapOpts .Values.heapOpts | quote }}
-            {{- if .Values.kraft.enabled }}
-            - name: KAFKA_KRAFT_CLUSTER_ID
-              valueFrom:
-                secretKeyRef:
-                  name: {{ default (printf "%s-kraft-cluster-id" (include "common.names.fullname" .)) .Values.kraft.existingClusterIdSecret }}
-                  key: kraft-cluster-id
-            {{- if .Values.broker.zookeeperMigrationMode }}
-            - name: KAFKA_SKIP_KRAFT_STORAGE_INIT
-              value: "true"
-            {{- end }}
-            {{- end }}
-            {{- if and (include "kafka.saslEnabled" .) (or (regexFind "SCRAM" (upper .Values.sasl.enabledMechanisms)) (regexFind "SCRAM" (upper .Values.sasl.controllerMechanism)) (regexFind "SCRAM" (upper .Values.sasl.interBrokerMechanism))) }}
-            {{- if or .Values.zookeeper.enabled .Values.externalZookeeper.servers }}
-            - name: KAFKA_ZOOKEEPER_BOOTSTRAP_SCRAM_USERS
-              value: "true"
-            {{- else }}
-            - name: KAFKA_KRAFT_BOOTSTRAP_SCRAM_USERS
-              value: "true"
-            {{- end }}
-            {{- if and (include "kafka.client.saslEnabled" . ) .Values.sasl.client.users (include "kafka.saslUserPasswordsEnabled" .) }}
-            - name: KAFKA_CLIENT_USERS
-              value: {{ join "," .Values.sasl.client.users | quote }}
-            - name: KAFKA_CLIENT_PASSWORDS
-              valueFrom:
-                secretKeyRef:
-                  name: {{ include "kafka.saslSecretName" . }}
-                  key: client-passwords
-            {{- end }}
-            {{- if regexFind "SASL" (upper .Values.listeners.interbroker.protocol) }}
-            {{- if (include "kafka.saslUserPasswordsEnabled" .) }}
-            - name: KAFKA_INTER_BROKER_USER
-              value: {{ .Values.sasl.interbroker.user | quote }}
-            - name: KAFKA_INTER_BROKER_PASSWORD
-              valueFrom:
-                secretKeyRef:
-                  name: {{ include "kafka.saslSecretName" . }}
-                  key: inter-broker-password
-            {{- end }}
-            {{- if (include "kafka.saslClientSecretsEnabled" .) }}
-            - name: KAFKA_INTER_BROKER_CLIENT_ID
-              value: {{ .Values.sasl.interbroker.clientId | quote }}
-            - name: KAFKA_INTER_BROKER_CLIENT_SECRET
-              valueFrom:
-                secretKeyRef:
-                  name: {{ include "kafka.saslSecretName" . }}
-                  key: inter-broker-client-secret
-            {{- end }}
-            {{- end }}
-            {{- if and .Values.kraft.enabled (regexFind "SASL" (upper .Values.listeners.controller.protocol)) }}
-            {{- if (include "kafka.saslUserPasswordsEnabled" .) }}
-            - name: KAFKA_CONTROLLER_USER
-              value: {{ .Values.sasl.controller.user | quote }}
-            - name: KAFKA_CONTROLLER_PASSWORD
-              valueFrom:
-                secretKeyRef:
-                  name: {{ include "kafka.saslSecretName" . }}
-                  key: controller-password
-            {{- end }}
-            {{- if (include "kafka.saslClientSecretsEnabled" .) }}
-            - name: KAFKA_CONTROLLER_CLIENT_ID
-              value: {{ .Values.sasl.controller.clientId | quote }}
-            - name: KAFKA_CONTROLLER_CLIENT_SECRET
-              valueFrom:
-                secretKeyRef:
-                  name: {{ include "kafka.saslSecretName" . }}
-                  key: controller-client-secret
-            {{- end }}
-            {{- end }}
-            {{- end }}
-            {{- if .Values.metrics.jmx.enabled }}
-            - name: JMX_PORT
-              value: {{ .Values.metrics.jmx.kafkaJmxPort | quote }}
-            {{- end }}
+            - name: KAFKA_CFG_PROCESS_ROLES
+              value: broker
+            {{- include "kafka.commonEnv" . | nindent 12 }}
             {{- if .Values.broker.extraEnvVars }}
             {{- include "common.tplvalues.render" ( dict "value" .Values.broker.extraEnvVars "context" $) | nindent 12 }}
             {{- end }}
@@ -311,23 +213,23 @@ spec:
             - name: kafka-config
               mountPath: /opt/bitnami/kafka/config/server.properties
               subPath: server.properties
-            {{- if .Values.sasl.zookeeper.user }}
-            - name: kafka-config
-              mountPath: /opt/bitnami/kafka/config/kafka_jaas.conf
-              subPath: kafka_jaas.conf
-            {{- end }}
             - name: tmp
               mountPath: /tmp
-            {{- if or .Values.log4j .Values.existingLog4jConfigMap }}
-            - name: log4j-config
-              mountPath: /opt/bitnami/kafka/config/log4j.properties
-              subPath: log4j.properties
+            {{- if or .Values.log4j2 .Values.existingLog4j2ConfigMap }}
+            - name: log4j2-config
+              mountPath: /opt/bitnami/kafka/config/log4j2.yaml
+              subPath: log4j2.yaml
             {{- end }}
-            {{- if or .Values.tls.zookeeper.enabled (include "kafka.sslEnabled" .) }}
+            {{- if include "kafka.sslEnabled" . }}
             - name: kafka-shared-certs
               mountPath: /opt/bitnami/kafka/config/certs
               readOnly: true
             {{- end }}
+            {{- if and .Values.usePasswordFiles (include "kafka.saslEnabled" .) (or (regexFind "SCRAM" (upper .Values.sasl.enabledMechanisms)) (regexFind "SCRAM" (upper .Values.sasl.controllerMechanism)) (regexFind "SCRAM" (upper .Values.sasl.interBrokerMechanism))) }}
+            - name: kafka-sasl
+              mountPath: /opt/bitnami/kafka/config/secrets
+              readOnly: true
+            {{- end }}
             {{- if .Values.extraVolumeMounts }}
             {{- include "common.tplvalues.render" (dict "value" .Values.extraVolumeMounts "context" $) | nindent 12 }}
             {{- end }}
@@ -399,28 +301,22 @@ spec:
           emptyDir: {}
         - name: tmp
           emptyDir: {}
-        - name: scripts
-          configMap:
-            name: {{ include "common.names.fullname" . }}-scripts
-            defaultMode: 493
-        {{- if and .Values.externalAccess.enabled .Values.externalAccess.autoDiscovery.enabled }}
-        - name: kafka-autodiscovery-shared
+        - name: init-shared
           emptyDir: {}
-        {{- end }}
-        {{- if or .Values.log4j .Values.existingLog4jConfigMap }}
-        - name: log4j-config
+        {{- if or .Values.log4j2 .Values.existingLog4j2ConfigMap }}
+        - name: log4j2-config
           configMap:
-            name: {{ include "kafka.log4j.configMapName" . }}
+            name: {{ include "kafka.log4j2.configMapName" . }}
         {{- end }}
         {{- if .Values.metrics.jmx.enabled }}
         - name: jmx-config
           configMap:
             name: {{ include "kafka.metrics.jmx.configmapName" . }}
         {{- end }}
-        {{- if or .Values.tls.zookeeper.enabled (include "kafka.sslEnabled" .) }}
+        {{- if include "kafka.sslEnabled" . }}
         - name: kafka-shared-certs
           emptyDir: {}
-        {{- if and (include "kafka.sslEnabled" .) (or .Values.tls.existingSecret .Values.tls.autoGenerated) }}
+        {{- if or .Values.tls.existingSecret .Values.tls.autoGenerated.enabled }}
         - name: kafka-certs
           projected:
             defaultMode: 256
@@ -432,12 +328,13 @@ spec:
                   name: {{ .Values.tls.jksTruststoreSecret }}
               {{- end }}
         {{- end }}
-        {{- if and .Values.tls.zookeeper.enabled .Values.tls.zookeeper.existingSecret }}
-        - name: kafka-zookeeper-cert
-          secret:
-            secretName: {{ .Values.tls.zookeeper.existingSecret }}
-            defaultMode: 256
         {{- end }}
+        {{- if and .Values.usePasswordFiles (include "kafka.saslEnabled" .) (or (regexFind "SCRAM" (upper .Values.sasl.enabledMechanisms)) (regexFind "SCRAM" (upper .Values.sasl.controllerMechanism)) (regexFind "SCRAM" (upper .Values.sasl.interBrokerMechanism))) }}
+        - name: kafka-sasl
+          projected:
+            sources:
+              - secret:
+                  name: {{ include "kafka.saslSecretName" . }}
         {{- end }}
         {{- if .Values.extraVolumes }}
         {{- include "common.tplvalues.render" (dict "value" .Values.extraVolumes "context" $) | nindent 8 }}

+ 3 - 3
bitnami/kafka/templates/broker/svc-external-access.yaml

@@ -4,15 +4,15 @@ SPDX-License-Identifier: APACHE-2.0
 */}}
 
 {{- if .Values.externalAccess.enabled }}
-{{- $fullname := include "common.names.fullname" . }}
+{{- $fullname := include "kafka.broker.fullname" . }}
 {{- $replicaCount := .Values.broker.replicaCount | int }}
 {{- range $i := until $replicaCount }}
-{{- $targetPod := printf "%s-broker-%d" (printf "%s" $fullname) $i }}
+{{- $targetPod := printf "%s-%d" $fullname $i }}
 {{- $_ := set $ "targetPod" $targetPod }}
 apiVersion: v1
 kind: Service
 metadata:
-  name: {{ printf "%s-broker-%d-external" (include "common.names.fullname" $) $i | trunc 63 | trimSuffix "-" }}
+  name: {{ printf "%s-%d-external" $fullname $i | trunc 63 | trimSuffix "-" }}
   namespace: {{ include "common.names.namespace" $ | quote }}
   {{- $labels := include "common.tplvalues.merge" ( dict "values" ( list $.Values.externalAccess.broker.service.labels $.Values.commonLabels ) "context" $ ) }}
   labels: {{- include "common.labels.standard" ( dict "customLabels" $labels "context" $ ) | nindent 4 }}

+ 1 - 1
bitnami/kafka/templates/broker/svc-headless.yaml

@@ -8,7 +8,7 @@ SPDX-License-Identifier: APACHE-2.0
 apiVersion: v1
 kind: Service
 metadata:
-  name: {{ printf "%s-broker-headless" (include "common.names.fullname" .) | trunc 63 | trimSuffix "-" }}
+  name: {{ printf "%s-headless" (include "kafka.broker.fullname" .) | trunc 63 | trimSuffix "-" }}
   namespace: {{ include "common.names.namespace" . | quote }}
   {{- $labels := include "common.tplvalues.merge" ( dict "values" ( list .Values.externalAccess.broker.service.labels .Values.commonLabels ) "context" . ) }}
   labels: {{- include "common.labels.standard" ( dict "customLabels" $labels "context" $ ) | nindent 4 }}

+ 2 - 2
bitnami/kafka/templates/broker/vpa.yaml

@@ -8,7 +8,7 @@ SPDX-License-Identifier: APACHE-2.0
 apiVersion: {{ include "common.capabilities.vpa.apiVersion" . }}
 kind: VerticalPodAutoscaler
 metadata:
-  name: {{ printf "%s-broker" (include "common.names.fullname" .) }}
+  name: {{ template "kafka.broker.fullname" . }}
   namespace: {{ include "common.names.namespace" . | quote }}
   labels: {{- include "common.labels.standard" ( dict "customLabels" .Values.commonLabels "context" $ ) | nindent 4 }}
     app.kubernetes.io/component: broker
@@ -36,7 +36,7 @@ spec:
   targetRef:
     apiVersion: {{ (include "common.capabilities.statefulset.apiVersion" .) }}
     kind: StatefulSet
-    name: {{ printf "%s-broker" (include "common.names.fullname" .) }}
+    name: {{ template "kafka.broker.fullname" . }}
   {{- if .Values.broker.autoscaling.vpa.updatePolicy }}
   updatePolicy:
     {{- with .Values.broker.autoscaling.vpa.updatePolicy.updateMode }}

+ 53 - 0
bitnami/kafka/templates/ca-cert.yaml

@@ -0,0 +1,53 @@
+{{- /*
+Copyright Broadcom, Inc. All Rights Reserved.
+SPDX-License-Identifier: APACHE-2.0
+*/}}
+
+{{- if include "kafka.createCertificate" . }}
+{{- if empty .Values.tls.autoGenerated.certManager.existingIssuer }}
+apiVersion: cert-manager.io/v1
+kind: Issuer
+metadata:
+  name: {{ printf "%s-clusterissuer" (include "common.names.fullname" .) }}
+  namespace: {{ include "common.names.namespace" . | quote }}
+  labels: {{- include "common.labels.standard" ( dict "customLabels" .Values.commonLabels "context" $ ) | nindent 4 }}
+    app.kubernetes.io/part-of: kafka
+  {{- if .Values.commonAnnotations }}
+  annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }}
+  {{- end }}
+spec:
+  selfSigned: {}
+---
+{{- end }}
+apiVersion: cert-manager.io/v1
+kind: Certificate
+metadata:
+  name: {{ printf "%s-ca-crt" (include "common.names.fullname" .) }}
+  namespace: {{ include "common.names.namespace" . | quote }}
+  labels: {{- include "common.labels.standard" ( dict "customLabels" .Values.commonLabels "context" $ ) | nindent 4 }}
+    app.kubernetes.io/part-of: kafka
+  {{- if .Values.commonAnnotations }}
+  annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }}
+  {{- end }}
+spec:
+  secretName: {{ printf "%s-ca-crt" (include "common.names.fullname" .) }}
+  commonName: {{ printf "%s-root-ca" (include "common.names.fullname" .) }}
+  isCA: true
+  issuerRef:
+    name: {{ default (printf "%s-clusterissuer" (include "common.names.fullname" .)) .Values.tls.autoGenerated.certManager.existingIssuer }}
+    kind: {{ default "Issuer" .Values.tls.autoGenerated.certManager.existingIssuerKind }}
+---
+apiVersion: cert-manager.io/v1
+kind: Issuer
+metadata:
+  name: {{ printf "%s-ca-issuer" (include "common.names.fullname" .) }}
+  namespace: {{ include "common.names.namespace" . | quote }}
+  labels: {{- include "common.labels.standard" ( dict "customLabels" .Values.commonLabels "context" $ ) | nindent 4 }}
+    app.kubernetes.io/part-of: kafka
+  {{- if .Values.commonAnnotations }}
+  annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }}
+  {{- end }}
+spec:
+  ca:
+    secretName: {{ printf "%s-ca-crt" (include "common.names.fullname" .) }}
+{{- end }}

+ 56 - 0
bitnami/kafka/templates/cert.yaml

@@ -0,0 +1,56 @@
+{{- /*
+Copyright Broadcom, Inc. All Rights Reserved.
+SPDX-License-Identifier: APACHE-2.0
+*/}}
+
+{{- if include "kafka.createCertificate" . }}
+apiVersion: cert-manager.io/v1
+kind: Certificate
+metadata:
+  name: {{ printf "%s-crt" (include "common.names.fullname" .) }}
+  namespace: {{ include "common.names.namespace" . | quote }}
+  labels: {{- include "common.labels.standard" ( dict "customLabels" .Values.commonLabels "context" $ ) | nindent 4 }}
+    app.kubernetes.io/part-of: kafka
+  {{- if .Values.commonAnnotations }}
+  annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }}
+  {{- end }}
+spec:
+  secretName: {{ include "kafka.tlsSecretName" . }}
+  commonName: {{ printf "%s.%s.svc.%s" (include "common.names.fullname" .) (include "common.names.namespace" .) .Values.clusterDomain }}
+  issuerRef:
+    name: {{ printf "%s-ca-issuer" (include "common.names.fullname" .) }}
+    kind: Issuer
+  subject:
+    organizations:
+      - "Kafka"
+  dnsNames:
+    {{- $controllerSvcName := printf "%s-headless" (include "kafka.controller.fullname" .) | trunc 63 | trimSuffix "-" }}
+    {{- $brokerSvcName := printf "%s-headless" (include "kafka.broker.fullname" .) | trunc 63 | trimSuffix "-" }}
+    - '*.{{ include "common.names.namespace" . }}'
+    - '*.{{ include "common.names.namespace" . }}.svc'
+    - '*.{{ include "common.names.namespace" . }}.svc.{{ .Values.clusterDomain }}'
+    - '*.{{ $controllerSvcName }}'
+    - '*.{{ $controllerSvcName }}.{{ include "common.names.namespace" . }}'
+    - '*.{{ $controllerSvcName }}.{{ include "common.names.namespace" . }}.svc'
+    - '*.{{ $controllerSvcName }}.{{ include "common.names.namespace" . }}.svc.{{ .Values.clusterDomain }}'
+    - '*.{{ $brokerSvcName }}'
+    - '*.{{ $brokerSvcName }}.{{ include "common.names.namespace" . }}'
+    - '*.{{ $brokerSvcName }}.{{ include "common.names.namespace" . }}.svc'
+    - '*.{{ $brokerSvcName }}.{{ include "common.names.namespace" . }}.svc.{{ .Values.clusterDomain }}'
+    {{- if .Values.externalAccess.enabled -}}
+    {{- with .Values.externalAccess.broker.service.domain }}
+    - '*.{{ . }}'
+    {{- end }}
+    {{- with .Values.externalAccess.controller.service.domain }}
+    - '*.{{ . }}'
+    {{- end }}
+    {{- end }}
+    {{- range .Values.tls.autoGenerated.customAltNames }}
+    - '{{ . }}'
+    {{- end }}
+  privateKey:
+    algorithm: {{ .Values.tls.autoGenerated.certManager.keyAlgorithm }}
+    size: {{ int .Values.tls.autoGenerated.certManager.keySize }}
+  duration: {{ .Values.tls.autoGenerated.certManager.duration }}
+  renewBefore: {{ .Values.tls.autoGenerated.certManager.renewBefore }}
+{{- end }}
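
Note: together, ca-cert.yaml and cert.yaml wire TLS into cert-manager: a self-signed bootstrap Issuer, a CA Certificate, a CA-backed Issuer, and finally the leaf Certificate with wildcard dnsNames covering both headless services. A values sketch exercising the fields referenced in these templates (the keys all appear above; the concrete values are illustrative, and kafka.createCertificate is a helper outside this excerpt, so an additional switch selecting the cert-manager engine may also be required):

    tls:
      autoGenerated:
        enabled: true
        certManager:
          existingIssuer: ""      # leave empty to create the self-signed bootstrap Issuer
          existingIssuerKind: ""  # defaults to Issuer when unset
          keyAlgorithm: RSA
          keySize: 2048
          duration: 2160h
          renewBefore: 360h
        customAltNames:
          - kafka.example.com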

+ 4 - 6
bitnami/kafka/templates/controller-eligible/config-secrets.yaml

@@ -5,16 +5,14 @@ SPDX-License-Identifier: APACHE-2.0
 
 {{- $replicaCount := int .Values.controller.replicaCount }}
 {{- if and (include "kafka.controller.createSecretConfig" .) (gt $replicaCount 0) }}
-{{- $secretName := printf "%s-controller-secret-configuration" (include "common.names.fullname" .) }}
 apiVersion: v1
 kind: Secret
 metadata:
-  name: {{ $secretName }}
+  name: {{ printf "%s-secret-configuration" (include "kafka.controller.fullname" .) }}
   namespace: {{ include "common.names.namespace" . | quote }}
-  labels: {{- include "common.labels.standard" . | nindent 4 }}
-    {{- if .Values.commonLabels }}
-    {{- include "common.tplvalues.render" ( dict "value" .Values.commonLabels "context" $ ) | nindent 4 }}
-    {{- end }}
+  labels: {{- include "common.labels.standard" ( dict "customLabels" .Values.commonLabels "context" $ ) | nindent 4 }}
+    app.kubernetes.io/component: controller-eligible
+    app.kubernetes.io/part-of: kafka
   {{- if .Values.commonAnnotations }}
   annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }}
   {{- end }}

+ 35 - 35
bitnami/kafka/templates/controller-eligible/configmap.yaml

@@ -3,12 +3,37 @@ Copyright Broadcom, Inc. All Rights Reserved.
 SPDX-License-Identifier: APACHE-2.0
 */}}
 
+{{/*
+Return the Kafka controller configuration.
+ref: https://kafka.apache.org/documentation/#configuration
+*/}}
+{{- define "kafka.controller.config" -}}
+{{- if or .Values.config .Values.controller.config }}
+{{- include "common.tplvalues.render" (dict "value" (coalesce .Values.controller.config .Values.config) "context" .) }}
+{{- else }}
+# Listeners configuration
+listeners: {{ include "kafka.listeners" (dict "isController" true "context" .) }}
+listener.security.protocol.map: {{ include "kafka.securityProtocolMap" . }}
+{{- if not .Values.controller.controllerOnly }}
+advertised.listeners: {{ include "kafka.advertisedListeners" . }}
+{{- end }}
+# Kafka data logs directory
+log.dir: {{ printf "%s/data" .Values.controller.persistence.mountPath }}
+# Kafka application logs directory
+logs.dir: {{ .Values.controller.logPersistence.mountPath }}
+# KRaft node role
+process.roles: {{ ternary "controller" "controller,broker" .Values.controller.controllerOnly }}
+# Common Kafka Configuration
+{{ include "kafka.commonConfig" . }}
+{{- end -}}
+{{- end -}}
+
 {{- $replicaCount := int .Values.controller.replicaCount }}
-{{- if and .Values.kraft.enabled (include "kafka.controller.createConfigmap" .) (gt $replicaCount 0)}}
+{{- if and (include "kafka.controller.createConfigmap" .) (gt $replicaCount 0) }}
 apiVersion: v1
 kind: ConfigMap
 metadata:
-  name: {{ printf "%s-controller-configuration" (include "common.names.fullname" .) }}
+  name: {{ printf "%s-configuration" (include "kafka.controller.fullname" .) }}
   namespace: {{ include "common.names.namespace" . | quote }}
   labels: {{- include "common.labels.standard" ( dict "customLabels" .Values.commonLabels "context" $ ) | nindent 4 }}
     app.kubernetes.io/component: controller-eligible
@@ -17,39 +42,14 @@ metadata:
   annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }}
   {{- end }}
 data:
-  {{- if or .Values.config .Values.controller.config }}
-  server.properties: {{- include "common.tplvalues.render" ( dict "value" (coalesce .Values.controller.config .Values.config) "context" $ ) | nindent 4 }}
-  {{- else }}
+  {{- $configuration := include "kafka.controller.config" . | fromYaml -}}
+  {{- if or .Values.overrideConfiguration .Values.controller.overrideConfiguration }}
+  {{- $overrideConfiguration := include "common.tplvalues.render" (dict "value" .Values.overrideConfiguration "context" .) | fromYaml }}
+  {{- $controllerOverrideConfiguration := include "common.tplvalues.render" (dict "value" .Values.controller.overrideConfiguration "context" .) | fromYaml }}
+  {{- $configuration = mustMergeOverwrite $configuration $overrideConfiguration $controllerOverrideConfiguration }}
+  {{- end }}
   server.properties: |-
-    # Listeners configuration
-    listeners={{ include "kafka.listeners" ( dict "isController" true "context" $ ) }}
-    {{- if not .Values.controller.controllerOnly }}
-    advertised.listeners={{ include "kafka.advertisedListeners" . }}
-    {{- end }}
-    listener.security.protocol.map={{ include "kafka.securityProtocolMap" . }}
-    {{- if .Values.kraft.enabled }}
-    # KRaft process roles
-    process.roles={{ ternary "controller" "controller,broker" .Values.controller.controllerOnly }}
-    {{- include "kafka.kraftConfig" . | nindent 4 }}
+    {{- range $key, $value := $configuration }}
+    {{ $key }}={{ include "common.tplvalues.render" (dict "value" $value "context" $) }}
     {{- end }}
-    {{- if or .Values.zookeeper.enabled .Values.externalZookeeper.servers }}
-    # Zookeeper configuration
-    zookeeper.metadata.migration.enable=true
-    inter.broker.protocol.version=3.4
-    inter.broker.protocol.version={{ default (regexFind "^[0-9].[0-9]+" .Chart.AppVersion) .Values.interBrokerProtocolVersion }}
-    {{- include "kafka.zookeeperConfig" . | nindent 4 }}
-    {{- end }}
-    # Kafka data logs directory
-    log.dir={{ printf "%s/data" .Values.controller.persistence.mountPath }}
-    # Kafka application logs directory
-    logs.dir={{ .Values.controller.logPersistence.mountPath }}
-
-    # Common Kafka Configuration
-    {{- include "kafka.commonConfig" . | nindent 4 }}
-
-    # Custom Kafka Configuration
-    {{- include "common.tplvalues.render" ( dict "value" .Values.extraConfig "context" $ ) | nindent 4 }}
-    {{- include "common.tplvalues.render" ( dict "value" .Values.controller.extraConfig "context" $ ) | nindent 4 }}
-    {{- include "kafka.properties.render" (merge .Values.controller.extraConfigYaml .Values.extraConfigYaml) | nindent 4 }}
-  {{- end }}
 {{- end }}
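
Note: server.properties is now built from a YAML map and deep-merged with the new overrideConfiguration values, replacing the old extraConfig string concatenation. A sketch of the override flow (keys from the template above; property values illustrative):

    overrideConfiguration:
      num.network.threads: 4
    controller:
      overrideConfiguration:
        num.io.threads: 8

The range loop then renders each entry as a plain key=value line. Since mustMergeOverwrite applies the global map first and the controller-specific map last, controller.overrideConfiguration wins on conflicting keys.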

+ 3 - 4
bitnami/kafka/templates/controller-eligible/hpa.yaml

@@ -3,12 +3,11 @@ Copyright Broadcom, Inc. All Rights Reserved.
 SPDX-License-Identifier: APACHE-2.0
 */}}
 
-{{- $replicaCount := int .Values.controller.replicaCount }}
-{{- if and .Values.kraft.enabled (gt $replicaCount 0) .Values.controller.autoscaling.hpa.enabled }}
+{{- if .Values.controller.autoscaling.hpa.enabled }}
 apiVersion: {{ include "common.capabilities.hpa.apiVersion" ( dict "context" $ ) }}
 kind: HorizontalPodAutoscaler
 metadata:
-  name: {{ printf "%s-controller" (include "common.names.fullname" .) }}
+  name: {{ template "kafka.controller.fullname" . }}
   namespace: {{ include "common.names.namespace" . | quote }}
   labels: {{- include "common.labels.standard" ( dict "customLabels" .Values.commonLabels "context" $ ) | nindent 4 }}
     app.kubernetes.io/component: controller
@@ -21,7 +20,7 @@ spec:
   scaleTargetRef:
     apiVersion: {{ template "common.capabilities.statefulset.apiVersion" . }}
     kind: StatefulSet
-    name: {{ printf "%s-controller" (include "common.names.fullname" .) }}
+    name: {{ template "kafka.controller.fullname" . }}
   minReplicas: {{ .Values.controller.autoscaling.hpa.minReplicas }}
   maxReplicas: {{ .Values.controller.autoscaling.hpa.maxReplicas }}
   metrics:

+ 2 - 2
bitnami/kafka/templates/controller-eligible/pdb.yaml

@@ -3,11 +3,11 @@ Copyright Broadcom, Inc. All Rights Reserved.
 SPDX-License-Identifier: APACHE-2.0
 */}}
 
-{{- if and .Values.controller.pdb.create .Values.kraft.enabled }}
+{{- if .Values.controller.pdb.create }}
 apiVersion: {{ include "common.capabilities.policy.apiVersion" . }}
 kind: PodDisruptionBudget
 metadata:
-  name: {{ printf "%s-controller" (include "common.names.fullname" .) }}
+  name: {{ template "kafka.controller.fullname" . }}
   namespace: {{ include "common.names.namespace" . | quote }}
   labels: {{- include "common.labels.standard" ( dict "customLabels" .Values.commonLabels "context" $ ) | nindent 4 }}
     app.kubernetes.io/component: controller-eligible

+ 42 - 133
bitnami/kafka/templates/controller-eligible/statefulset.yaml

@@ -3,12 +3,10 @@ Copyright Broadcom, Inc. All Rights Reserved.
 SPDX-License-Identifier: APACHE-2.0
 */}}
 
-{{- $replicaCount := int .Values.controller.replicaCount }}
-{{- if and .Values.kraft.enabled (or (gt $replicaCount 0) .Values.controller.autoscaling.enabled) }}
 apiVersion: {{ include "common.capabilities.statefulset.apiVersion" . }}
 kind: StatefulSet
 metadata:
-  name: {{ printf "%s-controller" (include "common.names.fullname" .) }}
+  name: {{ template "kafka.controller.fullname" . }}
   namespace: {{ include "common.names.namespace" . | quote }}
   labels: {{- include "common.labels.standard" ( dict "customLabels" .Values.commonLabels "context" $ ) | nindent 4 }}
     app.kubernetes.io/component: controller-eligible
@@ -18,7 +16,7 @@ metadata:
   {{- end }}
 spec:
   podManagementPolicy: {{ .Values.controller.podManagementPolicy }}
-  {{- if not .Values.controller.autoscaling.enabled }}
+  {{- if not .Values.controller.autoscaling.hpa.enabled }}
   replicas: {{ .Values.controller.replicaCount }}
   {{- end }}
   {{- $podLabels := include "common.tplvalues.merge" ( dict "values" ( list .Values.controller.podLabels .Values.commonLabels ) "context" . ) }}
@@ -26,9 +24,9 @@ spec:
     matchLabels: {{- include "common.labels.matchLabels" ( dict "customLabels" $podLabels "context" $ ) | nindent 6 }}
       app.kubernetes.io/component: controller-eligible
       app.kubernetes.io/part-of: kafka
-  serviceName: {{ printf "%s-controller-headless" (include "common.names.fullname" .) | trunc 63 | trimSuffix "-" }}
+  serviceName: {{ printf "%s-headless" (include "kafka.controller.fullname" .) | trunc 63 | trimSuffix "-" }}
   updateStrategy: {{- include "common.tplvalues.render" (dict "value" .Values.controller.updateStrategy "context" $ ) | nindent 4 }}
-  {{- if and .Values.controller.minReadySeconds (semverCompare ">= 1.23-0" (include "common.capabilities.kubeVersion" .)) }}
+  {{- if .Values.controller.minReadySeconds }}
   minReadySeconds: {{ .Values.controller.minReadySeconds }}
   {{- end }}
   template:
@@ -37,16 +35,16 @@ spec:
         app.kubernetes.io/component: controller-eligible
         app.kubernetes.io/part-of: kafka
       annotations:
-        {{- if (include "kafka.controller.createConfigmap" .) }}
+        {{- if include "kafka.controller.createConfigmap" . }}
         checksum/configuration: {{ include (print $.Template.BasePath "/controller-eligible/configmap.yaml") . | sha256sum }}
         {{- end }}
-        {{- if (include "kafka.createSaslSecret" .) }}
+        {{- if include "kafka.createSaslSecret" . }}
         checksum/passwords-secret: {{ include (print $.Template.BasePath "/secrets.yaml") . | sha256sum }}
         {{- end }}
-         {{- if (include "kafka.createTlsSecret" .) }}
+        {{- if include "kafka.createTlsSecret" . }}
         checksum/tls-secret: {{ include (print $.Template.BasePath "/tls-secret.yaml") . | sha256sum }}
         {{- end }}
-        {{- if (include "kafka.metrics.jmx.createConfigmap" .) }}
+        {{- if include "kafka.metrics.jmx.createConfigmap" . }}
         checksum/jmx-configuration: {{ include (print $.Template.BasePath "/metrics/jmx-configmap.yaml") . | sha256sum }}
         {{- end }}
         {{- if .Values.controller.podAnnotations }}
@@ -101,39 +99,13 @@ spec:
       dnsConfig: {{- include "common.tplvalues.render" (dict "value" .Values.dnsConfig "context" $) | nindent 8 }}
       {{- end }}
       initContainers:
-        {{- if and .Values.volumePermissions.enabled .Values.controller.persistence.enabled }}
-        - name: volume-permissions
-          image: {{ include "kafka.volumePermissions.image" . }}
-          imagePullPolicy: {{ .Values.volumePermissions.image.pullPolicy | quote }}
-          command:
-            - /bin/bash
-          args:
-            - -ec
-            - |
-              mkdir -p "{{ .Values.controller.persistence.mountPath }}" "{{ .Values.controller.logPersistence.mountPath }}"
-              chown -R {{ .Values.controller.containerSecurityContext.runAsUser }}:{{ .Values.controller.podSecurityContext.fsGroup }} "{{ .Values.controller.persistence.mountPath }}" "{{ .Values.controller.logPersistence.mountPath }}"
-              find "{{ .Values.controller.persistence.mountPath }}" -mindepth 1 -maxdepth 1 -not -name ".snapshot" -not -name "lost+found" | xargs -r chown -R {{ .Values.controller.containerSecurityContext.runAsUser }}:{{ .Values.controller.podSecurityContext.fsGroup }}
-              find "{{ .Values.controller.logPersistence.mountPath }}" -mindepth 1 -maxdepth 1 -not -name ".snapshot" -not -name "lost+found" | xargs -r chown -R {{ .Values.controller.containerSecurityContext.runAsUser }}:{{ .Values.controller.podSecurityContext.fsGroup }}
-          {{- if eq ( toString ( .Values.volumePermissions.containerSecurityContext.runAsUser )) "auto" }}
-          securityContext: {{- omit .Values.volumePermissions.containerSecurityContext "runAsUser" | toYaml | nindent 12 }}
-          {{- else }}
-          securityContext: {{- .Values.volumePermissions.containerSecurityContext | toYaml | nindent 12 }}
-          {{- end }}
-          {{- if .Values.volumePermissions.resources }}
-          resources: {{- toYaml .Values.volumePermissions.resources | nindent 12 }}
-          {{- else if ne .Values.volumePermissions.resourcesPreset "none" }}
-          resources: {{- include "common.resources.preset" (dict "type" .Values.volumePermissions.resourcesPreset) | nindent 12 }}
-          {{- end }}
-          volumeMounts:
-            - name: data
-              mountPath: {{ .Values.controller.persistence.mountPath }}
-            - name: logs
-              mountPath: {{ .Values.controller.logPersistence.mountPath }}
+        {{- if and .Values.defaultInitContainers.volumePermissions.enabled .Values.controller.persistence.enabled }}
+        {{- include "kafka.defaultInitContainers.volumePermissions" (dict "context" . "role" "controller") | nindent 8 }}
         {{- end }}
-        {{- if and .Values.externalAccess.enabled .Values.externalAccess.autoDiscovery.enabled (or .Values.externalAccess.controller.forceExpose (not .Values.controller.controllerOnly))}}
-        {{- include "kafka.autoDiscoveryInitContainer" ( dict "role" "controller" "context" $) | nindent 8 }}
+        {{- if and .Values.externalAccess.enabled .Values.defaultInitContainers.autoDiscovery.enabled (or .Values.externalAccess.controller.forceExpose (not .Values.controller.controllerOnly)) }}
+        {{- include "kafka.defaultInitContainers.autoDiscovery" (dict "context" . "role" "controller") | nindent 8 }}
         {{- end }}
-        {{- include "kafka.prepareKafkaInitContainer" ( dict "role" "controller" "context" $) | nindent 8 }}
+        {{- include "kafka.defaultInitContainers.prepareConfig" (dict "context" . "role" "controller") | nindent 8 }}
         {{- if .Values.controller.initContainers }}
         {{- include "common.tplvalues.render" ( dict "value" .Values.controller.initContainers "context" $ ) | nindent 8 }}
         {{- end }}
@@ -158,72 +130,13 @@ spec:
           args: {{- include "common.tplvalues.render" (dict "value" .Values.controller.args "context" $) | nindent 12 }}
           {{- end }}
           env:
-            - name: BITNAMI_DEBUG
-              value: {{ ternary "true" "false" (or .Values.image.debug .Values.diagnosticMode.enabled) | quote }}
             - name: KAFKA_HEAP_OPTS
               value: {{ coalesce .Values.controller.heapOpts .Values.heapOpts | quote }}
-            - name: KAFKA_KRAFT_CLUSTER_ID
-              valueFrom:
-                secretKeyRef:
-                  name: {{ default (printf "%s-kraft-cluster-id" (include "common.names.fullname" .)) .Values.kraft.existingClusterIdSecret }}
-                  key: kraft-cluster-id
-            {{- if and (include "kafka.saslEnabled" .) (or (regexFind "SCRAM" (upper .Values.sasl.enabledMechanisms)) (regexFind "SCRAM" (upper .Values.sasl.controllerMechanism)) (regexFind "SCRAM" (upper .Values.sasl.interBrokerMechanism))) }}
-            - name: KAFKA_KRAFT_BOOTSTRAP_SCRAM_USERS
-              value: "true"
-            {{- if and (include "kafka.client.saslEnabled" . ) .Values.sasl.client.users (include "kafka.saslUserPasswordsEnabled" .) }}
-            - name: KAFKA_CLIENT_USERS
-              value: {{ join "," .Values.sasl.client.users | quote }}
-            - name: KAFKA_CLIENT_PASSWORDS
-              valueFrom:
-                secretKeyRef:
-                  name: {{ include "kafka.saslSecretName" . }}
-                  key: client-passwords
-            {{- end }}
-            {{- if regexFind "SASL" (upper .Values.listeners.interbroker.protocol) }}
-            {{- if (include "kafka.saslUserPasswordsEnabled" .) }}
-            - name: KAFKA_INTER_BROKER_USER
-              value: {{ .Values.sasl.interbroker.user | quote }}
-            - name: KAFKA_INTER_BROKER_PASSWORD
-              valueFrom:
-                secretKeyRef:
-                  name: {{ include "kafka.saslSecretName" . }}
-                  key: inter-broker-password
-            {{- end }}
-            {{- if (include "kafka.saslClientSecretsEnabled" .) }}
-            - name: KAFKA_INTER_BROKER_CLIENT_ID
-              value: {{ .Values.sasl.interbroker.clientId | quote }}
-            - name: KAFKA_INTER_BROKER_CLIENT_SECRET
-              valueFrom:
-                secretKeyRef:
-                  name: {{ include "kafka.saslSecretName" . }}
-                  key: inter-broker-client-secret
-            {{- end }}
-            {{- end }}
-            {{- if regexFind "SASL" (upper .Values.listeners.controller.protocol) }}
-            {{- if (include "kafka.saslUserPasswordsEnabled" .) }}
-            - name: KAFKA_CONTROLLER_USER
-              value: {{ .Values.sasl.controller.user | quote }}
-            - name: KAFKA_CONTROLLER_PASSWORD
-              valueFrom:
-                secretKeyRef:
-                  name: {{ include "kafka.saslSecretName" . }}
-                  key: controller-password
-            {{- end }}
-            {{- if (include "kafka.saslClientSecretsEnabled" .) }}
-            - name: KAFKA_CONTROLLER_CLIENT_ID
-              value: {{ .Values.sasl.controller.clientId | quote }}
-            - name: KAFKA_CONTROLLER_CLIENT_SECRET
-              valueFrom:
-                secretKeyRef:
-                  name: {{ include "kafka.saslSecretName" . }}
-                  key: controller-client-secret
-            {{- end }}
-            {{- end }}
-            {{- end }}
-            {{- if .Values.metrics.jmx.enabled }}
-            - name: JMX_PORT
-              value: {{ .Values.metrics.jmx.kafkaJmxPort | quote }}
-            {{- end }}
+            - name: KAFKA_CFG_PROCESS_ROLES
+              value: {{ ternary "controller" "controller,broker" .Values.controller.controllerOnly | quote }}
+            - name: KAFKA_INITIAL_CONTROLLERS_FILE
+              value: /shared/initial-controllers.txt
+            {{- include "kafka.commonEnv" . | nindent 12 }}
             {{- if .Values.controller.extraEnvVars }}
             {{- include "common.tplvalues.render" ( dict "value" .Values.controller.extraEnvVars "context" $) | nindent 12 }}
             {{- end }}
@@ -310,23 +223,25 @@ spec:
             - name: kafka-config
               mountPath: /opt/bitnami/kafka/config/server.properties
               subPath: server.properties
-            {{- if .Values.sasl.zookeeper.user }}
-            - name: kafka-config
-              mountPath: /opt/bitnami/kafka/config/kafka_jaas.conf
-              subPath: kafka_jaas.conf
-            {{- end }}
             - name: tmp
               mountPath: /tmp
-            {{- if or .Values.log4j .Values.existingLog4jConfigMap }}
-            - name: log4j-config
-              mountPath: /opt/bitnami/kafka/config/log4j.properties
-              subPath: log4j.properties
+            - name: init-shared
+              mountPath: /shared
+            {{- if or .Values.log4j2 .Values.existingLog4j2ConfigMap }}
+            - name: log4j2-config
+              mountPath: /opt/bitnami/kafka/config/log4j2.yaml
+              subPath: log4j2.yaml
             {{- end }}
-            {{- if or .Values.tls.zookeeper.enabled (include "kafka.sslEnabled" .) }}
+            {{- if include "kafka.sslEnabled" . }}
             - name: kafka-shared-certs
               mountPath: /opt/bitnami/kafka/config/certs
               readOnly: true
             {{- end }}
+            {{- if and .Values.usePasswordFiles (include "kafka.saslEnabled" .) (or (regexFind "SCRAM" (upper .Values.sasl.enabledMechanisms)) (regexFind "SCRAM" (upper .Values.sasl.controllerMechanism)) (regexFind "SCRAM" (upper .Values.sasl.interBrokerMechanism))) }}
+            - name: kafka-sasl
+              mountPath: /opt/bitnami/kafka/config/secrets
+              readOnly: true
+            {{- end }}
             {{- if .Values.extraVolumeMounts }}
             {{- include "common.tplvalues.render" (dict "value" .Values.extraVolumeMounts "context" $) | nindent 12 }}
             {{- end }}
@@ -398,28 +313,22 @@ spec:
           emptyDir: {}
         - name: tmp
           emptyDir: {}
-        - name: scripts
-          configMap:
-            name: {{ include "common.names.fullname" . }}-scripts
-            defaultMode: 493
-        {{- if and .Values.externalAccess.enabled .Values.externalAccess.autoDiscovery.enabled }}
-        - name: kafka-autodiscovery-shared
+        - name: init-shared
           emptyDir: {}
-        {{- end }}
-        {{- if or .Values.log4j .Values.existingLog4jConfigMap }}
-        - name: log4j-config
+        {{- if or .Values.log4j2 .Values.existingLog4j2ConfigMap }}
+        - name: log4j2-config
           configMap:
-            name: {{ include "kafka.log4j.configMapName" . }}
+            name: {{ include "kafka.log4j2.configMapName" . }}
         {{- end }}
         {{- if .Values.metrics.jmx.enabled }}
         - name: jmx-config
           configMap:
             name: {{ include "kafka.metrics.jmx.configmapName" . }}
         {{- end }}
-        {{- if or .Values.tls.zookeeper.enabled (include "kafka.sslEnabled" .) }}
+        {{- if include "kafka.sslEnabled" . }}
         - name: kafka-shared-certs
           emptyDir: {}
-        {{- if and (include "kafka.sslEnabled" .) (or .Values.tls.existingSecret .Values.tls.autoGenerated) }}
+        {{- if or .Values.tls.existingSecret .Values.tls.autoGenerated.enabled }}
         - name: kafka-certs
           projected:
             defaultMode: 256
@@ -431,12 +340,13 @@ spec:
                   name: {{ .Values.tls.jksTruststoreSecret }}
               {{- end }}
         {{- end }}
-        {{- if and .Values.tls.zookeeper.enabled .Values.tls.zookeeper.existingSecret }}
-        - name: kafka-zookeeper-cert
-          secret:
-            secretName: {{ .Values.tls.zookeeper.existingSecret }}
-            defaultMode: 256
         {{- end }}
+        {{- if and .Values.usePasswordFiles (include "kafka.saslEnabled" .) (or (regexFind "SCRAM" (upper .Values.sasl.enabledMechanisms)) (regexFind "SCRAM" (upper .Values.sasl.controllerMechanism)) (regexFind "SCRAM" (upper .Values.sasl.interBrokerMechanism))) }}
+        - name: kafka-sasl
+          projected:
+            sources:
+              - secret:
+                  name: {{ include "kafka.saslSecretName" . }}
         {{- end }}
         {{- if .Values.extraVolumes }}
         {{- include "common.tplvalues.render" (dict "value" .Values.extraVolumes "context" $) | nindent 8 }}
@@ -508,4 +418,3 @@ spec:
         {{- end -}}
     {{- end }}
   {{- end }}
-{{- end }}
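
Note: the per-listener SASL environment variables are consolidated into the kafka.commonEnv helper, and when usePasswordFiles is on with a SCRAM mechanism in play, credentials are projected into /opt/bitnami/kafka/config/secrets instead of being injected as environment variables. A minimal values sketch matching the conditions in this hunk (the mechanism value is illustrative; enabledMechanisms is matched as a comma-separated string):

    usePasswordFiles: true
    sasl:
      enabledMechanisms: SCRAM-SHA-512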

+ 4 - 4
bitnami/kafka/templates/controller-eligible/svc-external-access.yaml

@@ -3,17 +3,17 @@ Copyright Broadcom, Inc. All Rights Reserved.
 SPDX-License-Identifier: APACHE-2.0
 */}}
 
-{{- if and .Values.kraft.enabled .Values.externalAccess.enabled }}
-{{- $fullname := include "common.names.fullname" . }}
+{{- if .Values.externalAccess.enabled }}
+{{- $fullname := include "kafka.controller.fullname" . }}
 {{- if or .Values.externalAccess.controller.forceExpose (not .Values.controller.controllerOnly)}}
 {{- $replicaCount := .Values.controller.replicaCount | int }}
 {{- range $i := until $replicaCount }}
-{{- $targetPod := printf "%s-controller-%d" $fullname $i }}
+{{- $targetPod := printf "%s-%d" $fullname $i }}
 {{- $_ := set $ "targetPod" $targetPod }}
 apiVersion: v1
 kind: Service
 metadata:
-  name: {{ printf "%s-controller-%d-external" $fullname $i | trunc 63 | trimSuffix "-" }}
+  name: {{ printf "%s-%d-external" $fullname $i | trunc 63 | trimSuffix "-" }}
   namespace: {{ include "common.names.namespace" $ | quote }}
   {{- $labels := include "common.tplvalues.merge" ( dict "values" ( list $.Values.externalAccess.controller.service.labels $.Values.commonLabels ) "context" $ ) }}
   labels: {{- include "common.labels.standard" ( dict "customLabels" $labels "context" $ ) | nindent 4 }}

+ 2 - 7
bitnami/kafka/templates/controller-eligible/svc-headless.yaml

@@ -3,12 +3,10 @@ Copyright Broadcom, Inc. All Rights Reserved.
 SPDX-License-Identifier: APACHE-2.0
 */}}
 
-{{- $replicaCount := int .Values.controller.replicaCount }}
-{{- if and .Values.kraft.enabled (gt $replicaCount 0) }}
 apiVersion: v1
 kind: Service
 metadata:
-  name: {{ printf "%s-controller-headless" (include "common.names.fullname" .) | trunc 63 | trimSuffix "-" }}
+  name: {{ printf "%s-headless" (include "kafka.controller.fullname" .) | trunc 63 | trimSuffix "-" }}
   namespace: {{ include "common.names.namespace" . | quote }}
   {{- $labels := include "common.tplvalues.merge" ( dict "values" ( list .Values.service.headless.controller.labels .Values.commonLabels ) "context" . ) }}
   labels: {{- include "common.labels.standard" ( dict "customLabels" $labels "context" $ ) | nindent 4 }}
@@ -23,7 +21,7 @@ spec:
   clusterIP: None
   publishNotReadyAddresses: true
   ports:
-    {{- if or (not .Values.kraft.enabled) (not .Values.controller.controllerOnly) }}
+    {{- if not .Values.controller.controllerOnly }}
     - name: tcp-interbroker
       port: {{ .Values.service.ports.interbroker }}
       protocol: TCP
@@ -33,12 +31,10 @@ spec:
       protocol: TCP
       targetPort: client
     {{- end }}
-    {{- if .Values.kraft.enabled }}
     - name: tcp-controller
       protocol: TCP
       port: {{ .Values.service.ports.controller }}
       targetPort: controller
-    {{- end }}
   {{- $podLabels := include "common.tplvalues.merge" ( dict "values" ( list .Values.controller.podLabels .Values.commonLabels ) "context" . ) }}
   selector: {{- include "common.labels.matchLabels" ( dict "customLabels" $podLabels "context" $ ) | nindent 4 }}
     app.kubernetes.io/component: controller-eligible
@@ -50,4 +46,3 @@ spec:
   ipFamilies:
   {{- . | toYaml | nindent 2 }}
   {{- end }}
-{{- end }}

+ 3 - 4
bitnami/kafka/templates/controller-eligible/vpa.yaml

@@ -3,12 +3,11 @@ Copyright Broadcom, Inc. All Rights Reserved.
 SPDX-License-Identifier: APACHE-2.0
 */}}
 
-{{- $replicaCount := int .Values.controller.replicaCount }}
-{{- if and .Values.kraft.enabled (gt $replicaCount 0) (include "common.capabilities.apiVersions.has" ( dict "version" "autoscaling.k8s.io/v1/VerticalPodAutoscaler" "context" . )) .Values.controller.autoscaling.vpa.enabled }}
+{{- if and (include "common.capabilities.apiVersions.has" ( dict "version" "autoscaling.k8s.io/v1/VerticalPodAutoscaler" "context" . )) .Values.controller.autoscaling.vpa.enabled }}
 apiVersion: {{ include "common.capabilities.vpa.apiVersion" . }}
 kind: VerticalPodAutoscaler
 metadata:
-  name: {{ printf "%s-controller" (include "common.names.fullname" .) }}
+  name: {{ template "kafka.controller.fullname" . }}
   namespace: {{ include "common.names.namespace" . | quote }}
   labels: {{- include "common.labels.standard" ( dict "customLabels" .Values.commonLabels "context" $ ) | nindent 4 }}
     app.kubernetes.io/component: controller
@@ -36,7 +35,7 @@ spec:
   targetRef:
     apiVersion: {{ (include "common.capabilities.statefulset.apiVersion" .) }}
     kind: StatefulSet
-    name: {{ printf "%s-controller" (include "common.names.fullname" .) }}
+    name: {{ template "kafka.controller.fullname" . }}
   {{- if .Values.controller.autoscaling.vpa.updatePolicy }}
   updatePolicy:
     {{- with .Values.controller.autoscaling.vpa.updatePolicy.updateMode }}

+ 4 - 4
bitnami/kafka/templates/log4j-configmap.yaml → bitnami/kafka/templates/log4j2-configmap.yaml

@@ -3,11 +3,11 @@ Copyright Broadcom, Inc. All Rights Reserved.
 SPDX-License-Identifier: APACHE-2.0
 */}}
 
-{{- if and .Values.log4j (not .Values.existingLog4jConfigMap) }}
+{{- if and .Values.log4j2 (not .Values.existingLog4j2ConfigMap) }}
 apiVersion: v1
 kind: ConfigMap
 metadata:
-  name: {{ printf "%s-log4j-configuration" (include "common.names.fullname" .) }}
+  name: {{ printf "%s-log4j2-configuration" (include "common.names.fullname" .) }}
   namespace: {{ include "common.names.namespace" . | quote }}
   labels: {{- include "common.labels.standard" ( dict "customLabels" .Values.commonLabels "context" $ ) | nindent 4 }}
     app.kubernetes.io/part-of: kafka
@@ -15,6 +15,6 @@ metadata:
   annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }}
   {{- end }}
 data:
-  log4j.properties: |-
-    {{- include "common.tplvalues.render" ( dict "value" .Values.log4j "context" $ ) | nindent 4 }}
+  log4j2.yaml: |-
+    {{- include "common.tplvalues.render" ( dict "value" .Values.log4j2 "context" $ ) | nindent 4 }}
 {{- end }}
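
Note: Kafka 4.0 moves logging to Log4j2, so the values key changes from log4j to log4j2 and the mounted file becomes log4j2.yaml. A sketch of what might go under the new key, using the standard Log4j2 YAML layout (appender and levels are illustrative; whether the chart expects a multi-line string, as shown here, or a YAML map depends on the default in values.yaml, which is outside this excerpt):

    log4j2: |
      Configuration:
        Appenders:
          Console:
            name: STDOUT
            PatternLayout:
              Pattern: "[%d] %p %m (%c)%n"
        Loggers:
          Root:
            level: info
            AppenderRef:
              ref: STDOUT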

+ 1 - 1
bitnami/kafka/templates/metrics/jmx-configmap.yaml

@@ -3,7 +3,7 @@ Copyright Broadcom, Inc. All Rights Reserved.
 SPDX-License-Identifier: APACHE-2.0
 */}}
 
-{{- if (include "kafka.metrics.jmx.createConfigmap" .) }}
+{{- if include "kafka.metrics.jmx.createConfigmap" . }}
 apiVersion: v1
 kind: ConfigMap
 metadata:

+ 1 - 1
bitnami/kafka/templates/networkpolicy.yaml

@@ -43,7 +43,7 @@ spec:
         - podSelector:
             matchLabels: {{- include "common.labels.matchLabels" ( dict "customLabels" .Values.commonLabels "context" $ ) | nindent 14 }}
     {{- if .Values.networkPolicy.extraEgress }}
-    {{- include "common.tplvalues.render" ( dict "value" .Values.rts.networkPolicy.extraEgress "context" $ ) | nindent 4 }}
+    {{- include "common.tplvalues.render" ( dict "value" .Values.networkPolicy.extraEgress "context" $ ) | nindent 4 }}
     {{- end }}
   {{- end }}
   ingress:
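
Note: this hunk fixes a template bug: extraEgress was read from the non-existent .Values.rts.networkPolicy path, so setting networkPolicy.extraEgress previously made the chart fail to render. With the fix, values such as the following sketch take effect (rule content illustrative, standard NetworkPolicy egress syntax):

    networkPolicy:
      extraEgress:
        - ports:
            - port: 53
              protocol: UDP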

+ 76 - 73
bitnami/kafka/templates/provisioning/job.yaml

@@ -100,71 +100,73 @@ spec:
           args:
             - -efc
             - |
-              echo "Configuring environment"
               . /opt/bitnami/scripts/libkafka.sh
+
               export CLIENT_CONF="${CLIENT_CONF:-/tmp/client.properties}"
-              if [ ! -f "$CLIENT_CONF" ]; then
-                touch $CLIENT_CONF
+              if [[ ! -f "$CLIENT_CONF" ]]; then
+                  touch "$CLIENT_CONF"
 
-                kafka_common_conf_set "$CLIENT_CONF" security.protocol {{ .Values.listeners.client.protocol | quote }}
-                {{- if (regexFind "SSL" (upper .Values.listeners.client.protocol)) }}
-                kafka_common_conf_set "$CLIENT_CONF" ssl.keystore.type {{ upper .Values.provisioning.auth.tls.type | quote }}
-                kafka_common_conf_set "$CLIENT_CONF" ssl.truststore.type {{ upper .Values.provisioning.auth.tls.type | quote }}
-                ! is_empty_value "$KAFKA_CLIENT_KEY_PASSWORD" && kafka_common_conf_set "$CLIENT_CONF" ssl.key.password "$KAFKA_CLIENT_KEY_PASSWORD"
-                {{- if eq (upper .Values.provisioning.auth.tls.type) "PEM" }}
-                {{- if .Values.provisioning.auth.tls.caCert }}
-                file_to_multiline_property() {
-                    awk 'NR > 1{print line" \\"}{line=$0;}END{print $0" "}' <"${1:?missing file}"
-                }
-                kafka_common_conf_set "$CLIENT_CONF" ssl.keystore.key "$(file_to_multiline_property "/certs/{{ .Values.provisioning.auth.tls.key }}")"
-                kafka_common_conf_set "$CLIENT_CONF" ssl.keystore.certificate.chain "$(file_to_multiline_property "/certs/{{ .Values.provisioning.auth.tls.cert }}")"
-                kafka_common_conf_set "$CLIENT_CONF" ssl.truststore.certificates "$(file_to_multiline_property "/certs/{{ .Values.provisioning.auth.tls.caCert }}")"
-                {{- else }}
-                kafka_common_conf_set "$CLIENT_CONF" ssl.keystore.location "/certs/{{ .Values.provisioning.auth.tls.keystore }}"
-                kafka_common_conf_set "$CLIENT_CONF" ssl.truststore.location "/certs/{{ .Values.provisioning.auth.tls.truststore }}"
-                {{- end }}
-                {{- else if eq (upper .Values.provisioning.auth.tls.type) "JKS" }}
-                kafka_common_conf_set "$CLIENT_CONF" ssl.keystore.location "/certs/{{ .Values.provisioning.auth.tls.keystore }}"
-                kafka_common_conf_set "$CLIENT_CONF" ssl.truststore.location "/certs/{{ .Values.provisioning.auth.tls.truststore }}"
-                ! is_empty_value "$KAFKA_CLIENT_KEYSTORE_PASSWORD" && kafka_common_conf_set "$CLIENT_CONF" ssl.keystore.password "$KAFKA_CLIENT_KEYSTORE_PASSWORD"
-                ! is_empty_value "$KAFKA_CLIENT_TRUSTSTORE_PASSWORD" && kafka_common_conf_set "$CLIENT_CONF" ssl.truststore.password "$KAFKA_CLIENT_TRUSTSTORE_PASSWORD"
-                {{- end }}
-                {{- end }}
-                {{- if regexFind "SASL" (upper .Values.listeners.client.protocol) }}
-                {{- if regexFind "PLAIN" ( upper .Values.sasl.enabledMechanisms) }}
-                kafka_common_conf_set "$CLIENT_CONF" sasl.mechanism PLAIN
-                kafka_common_conf_set "$CLIENT_CONF" sasl.jaas.config "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"$SASL_USERNAME\" password=\"$SASL_USER_PASSWORD\";"
-                {{- else if regexFind "SCRAM-SHA-256" ( upper .Values.sasl.enabledMechanisms) }}
-                kafka_common_conf_set "$CLIENT_CONF" sasl.mechanism SCRAM-SHA-256
-                kafka_common_conf_set "$CLIENT_CONF" sasl.jaas.config "org.apache.kafka.common.security.scram.ScramLoginModule required username=\"$SASL_USERNAME\" password=\"$SASL_USER_PASSWORD\";"
-                {{- else if regexFind "SCRAM-SHA-512" ( upper .Values.sasl.enabledMechanisms) }}
-                kafka_common_conf_set "$CLIENT_CONF" sasl.mechanism SCRAM-SHA-512
-                kafka_common_conf_set "$CLIENT_CONF" sasl.jaas.config "org.apache.kafka.common.security.scram.ScramLoginModule required username=\"$SASL_USERNAME\" password=\"$SASL_USER_PASSWORD\";"
-                {{- else if regexFind "OAUTHBEARER" ( upper .Values.sasl.enabledMechanisms) }}
-                kafka_common_conf_set "$CLIENT_CONF" sasl.mechanism OAUTHBEARER
-                kafka_common_conf_set "$CLIENT_CONF" sasl.jaas.config "org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required clientId=\"$SASL_CLIENT_ID\" password=\"$SASL_CLIENT_SECRET\";"
-                kafka_common_conf_set "$CLIENT_CONF" sasl.login.callback.handler.class "org.apache.kafka.common.security.oauthbearer.secured.OAuthBearerLoginCallbackHandler"
-                kafka_common_conf_set "$CLIENT_CONF" sasl.oauthbearer.token.endpoint.url {{ .Values.sasl.oauthbearer.tokenEndpointUrl | quote }}
-                {{- end }}
-                {{- end }}
+                  kafka_common_conf_set "$CLIENT_CONF" security.protocol {{ .Values.listeners.client.protocol | quote }}
+                  {{- if regexFind "SSL" (upper .Values.listeners.client.protocol) }}
+                  kafka_common_conf_set "$CLIENT_CONF" ssl.keystore.type {{ upper .Values.provisioning.auth.tls.type | quote }}
+                  kafka_common_conf_set "$CLIENT_CONF" ssl.truststore.type {{ upper .Values.provisioning.auth.tls.type | quote }}
+                  ! is_empty_value "$KAFKA_CLIENT_KEY_PASSWORD" && kafka_common_conf_set "$CLIENT_CONF" ssl.key.password "$KAFKA_CLIENT_KEY_PASSWORD"
+                  {{- if eq (upper .Values.provisioning.auth.tls.type) "PEM" }}
+                  {{- if .Values.provisioning.auth.tls.caCert }}
+                  file_to_multiline_property() {
+                      awk 'NR > 1{print line" \\"}{line=$0;}END{print $0" "}' <"${1:?missing file}"
+                  }
+                  kafka_common_conf_set "$CLIENT_CONF" ssl.keystore.key "$(file_to_multiline_property "/certs/{{ .Values.provisioning.auth.tls.key }}")"
+                  kafka_common_conf_set "$CLIENT_CONF" ssl.keystore.certificate.chain "$(file_to_multiline_property "/certs/{{ .Values.provisioning.auth.tls.cert }}")"
+                  kafka_common_conf_set "$CLIENT_CONF" ssl.truststore.certificates "$(file_to_multiline_property "/certs/{{ .Values.provisioning.auth.tls.caCert }}")"
+                  {{- else }}
+                  kafka_common_conf_set "$CLIENT_CONF" ssl.keystore.location "/certs/{{ .Values.provisioning.auth.tls.keystore }}"
+                  kafka_common_conf_set "$CLIENT_CONF" ssl.truststore.location "/certs/{{ .Values.provisioning.auth.tls.truststore }}"
+                  {{- end }}
+                  {{- else if eq (upper .Values.provisioning.auth.tls.type) "JKS" }}
+                  kafka_common_conf_set "$CLIENT_CONF" ssl.keystore.location "/certs/{{ .Values.provisioning.auth.tls.keystore }}"
+                  kafka_common_conf_set "$CLIENT_CONF" ssl.truststore.location "/certs/{{ .Values.provisioning.auth.tls.truststore }}"
+                  ! is_empty_value "$KAFKA_CLIENT_KEYSTORE_PASSWORD" && kafka_common_conf_set "$CLIENT_CONF" ssl.keystore.password "$KAFKA_CLIENT_KEYSTORE_PASSWORD"
+                  ! is_empty_value "$KAFKA_CLIENT_TRUSTSTORE_PASSWORD" && kafka_common_conf_set "$CLIENT_CONF" ssl.truststore.password "$KAFKA_CLIENT_TRUSTSTORE_PASSWORD"
+                  {{- end }}
+                  {{- end }}
+                  {{- if regexFind "SASL" (upper .Values.listeners.client.protocol) }}
+                  {{- if regexFind "PLAIN" ( upper .Values.sasl.enabledMechanisms) }}
+                  kafka_common_conf_set "$CLIENT_CONF" sasl.mechanism PLAIN
+                  kafka_common_conf_set "$CLIENT_CONF" sasl.jaas.config "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"$SASL_USERNAME\" password=\"$SASL_USER_PASSWORD\";"
+                  {{- else if regexFind "SCRAM-SHA-256" ( upper .Values.sasl.enabledMechanisms) }}
+                  kafka_common_conf_set "$CLIENT_CONF" sasl.mechanism SCRAM-SHA-256
+                  kafka_common_conf_set "$CLIENT_CONF" sasl.jaas.config "org.apache.kafka.common.security.scram.ScramLoginModule required username=\"$SASL_USERNAME\" password=\"$SASL_USER_PASSWORD\";"
+                  {{- else if regexFind "SCRAM-SHA-512" ( upper .Values.sasl.enabledMechanisms) }}
+                  kafka_common_conf_set "$CLIENT_CONF" sasl.mechanism SCRAM-SHA-512
+                  kafka_common_conf_set "$CLIENT_CONF" sasl.jaas.config "org.apache.kafka.common.security.scram.ScramLoginModule required username=\"$SASL_USERNAME\" password=\"$SASL_USER_PASSWORD\";"
+                  {{- else if regexFind "OAUTHBEARER" ( upper .Values.sasl.enabledMechanisms) }}
+                  kafka_common_conf_set "$CLIENT_CONF" sasl.mechanism OAUTHBEARER
+                  kafka_common_conf_set "$CLIENT_CONF" sasl.jaas.config "org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required clientId=\"$SASL_CLIENT_ID\" password=\"$SASL_CLIENT_SECRET\";"
+                  kafka_common_conf_set "$CLIENT_CONF" sasl.login.callback.handler.class "org.apache.kafka.common.security.oauthbearer.secured.OAuthBearerLoginCallbackHandler"
+                  kafka_common_conf_set "$CLIENT_CONF" sasl.oauthbearer.token.endpoint.url {{ .Values.sasl.oauthbearer.tokenEndpointUrl | quote }}
+                  {{- end }}
+                  {{- end }}
               fi
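
With a SASL_SSL client listener, both the SSL and SASL branches above apply; below is a sketch of the client.properties they would render for SCRAM-SHA-256 with JKS stores (a hedged reconstruction: the /certs paths and the user1 username follow chart defaults seen elsewhere in this diff, passwords are placeholders):

    cat > /tmp/client.properties <<'EOF'
security.protocol=SASL_SSL
ssl.keystore.type=JKS
ssl.truststore.type=JKS
ssl.keystore.location=/certs/kafka.keystore.jks
ssl.truststore.location=/certs/kafka.truststore.jks
ssl.keystore.password=<keystore-password>
ssl.truststore.password=<truststore-password>
sasl.mechanism=SCRAM-SHA-256
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="user1" password="<user1-password>";
EOF
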
 
-              echo "Running pre-provisioning script if any given"
+              {{- if .Values.provisioning.preScript }}
+              echo "Running pre-provisioning script"
               {{ .Values.provisioning.preScript | nindent 14 }}
+              {{- end }}
 
               kafka_provisioning_commands=(
               {{- range $topic := .Values.provisioning.topics }}
-                "/opt/bitnami/kafka/bin/kafka-topics.sh \
-                    --create \
-                    --if-not-exists \
-                    --bootstrap-server ${KAFKA_SERVICE} \
-                    --replication-factor {{ $topic.replicationFactor | default $.Values.provisioning.replicationFactor }} \
-                    --partitions {{ $topic.partitions | default $.Values.provisioning.numPartitions }} \
-                    {{- range $name, $value := $topic.config }}
-                    --config {{ $name }}={{ $value }} \
-                    {{- end }}
-                    --command-config ${CLIENT_CONF} \
-                    --topic {{ $topic.name }}"
+                  "/opt/bitnami/kafka/bin/kafka-topics.sh \
+                      --create \
+                      --if-not-exists \
+                      --bootstrap-server ${KAFKA_SERVICE} \
+                      --replication-factor {{ $topic.replicationFactor | default $.Values.provisioning.replicationFactor }} \
+                      --partitions {{ $topic.partitions | default $.Values.provisioning.numPartitions }} \
+                      {{- range $name, $value := $topic.config }}
+                      --config {{ $name }}={{ $value }} \
+                      {{- end }}
+                      --command-config ${CLIENT_CONF} \
+                      --topic {{ $topic.name }}"
               {{- end }}
               {{- range $command := .Values.provisioning.extraProvisioningCommands }}
                 {{- $command | quote | nindent 16 }}
@@ -172,17 +174,18 @@ spec:
               )
 
               echo "Starting provisioning"
-              for ((index=0; index < ${#kafka_provisioning_commands[@]}; index+={{ .Values.provisioning.parallel }}))
-              do
-                for j in $(seq ${index} $((${index}+{{ .Values.provisioning.parallel }}-1)))
-                do
-                    ${kafka_provisioning_commands[j]} & # Async command
-                done
-                wait  # Wait the end of the jobs
+              for ((index=0; index < ${#kafka_provisioning_commands[@]}; index+={{ .Values.provisioning.parallel }})); do
+                  for j in $(seq ${index} $((${index}+{{ .Values.provisioning.parallel }}-1))); do
+                      ${kafka_provisioning_commands[j]} &
+                  done
+                  # Wait for the jobs to finish
+                  wait
               done
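
The loop above runs the provisioning commands in fixed-size groups of .Values.provisioning.parallel, waiting for each group before starting the next. The same pattern as a standalone, runnable sketch (commands and PARALLEL are illustrative):

    #!/bin/bash
    PARALLEL=2
    commands=("echo topic-1" "echo topic-2" "echo topic-3" "echo topic-4" "echo topic-5")
    for ((index = 0; index < ${#commands[@]}; index += PARALLEL)); do
        for j in $(seq "$index" $((index + PARALLEL - 1))); do
            # Skip indices past the end of the array in the final batch
            [[ -n "${commands[j]:-}" ]] && ${commands[j]} &
        done
        # Block until the whole batch finishes before starting the next one
        wait
    done
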
 
-              echo "Running post-provisioning script if any given"
+              {{- if .Values.provisioning.postScript }}
+              echo "Running post-provisioning script"
               {{ .Values.provisioning.postScript | nindent 14 }}
+              {{- end }}
 
               echo "Provisioning succeeded"
           {{- end }}
@@ -209,7 +212,7 @@ spec:
             - name: KAFKA_SERVICE
               value: {{ printf "%s:%d" (include "common.names.fullname" .) (.Values.service.ports.client | int64) }}
             {{- if regexFind "SASL" (upper .Values.listeners.client.protocol) }}
-            {{- if (include "kafka.saslUserPasswordsEnabled" .) }}
+            {{- if include "kafka.saslUserPasswordsEnabled" . }}
             - name: SASL_USERNAME
               value: {{ index .Values.sasl.client.users 0 | quote }}
             - name: SASL_USER_PASSWORD
@@ -218,7 +221,7 @@ spec:
                   name: {{ include "kafka.saslSecretName" . }}
                   key: system-user-password
             {{- end }}
-            {{- if (include "kafka.saslClientSecretsEnabled" .) }}
+            {{- if include "kafka.saslClientSecretsEnabled" . }}
             - name: SASL_CLIENT_ID
               value: {{ .Values.sasl.interbroker.clientId | quote }}
             - name: SASL_USER_PASSWORD
@@ -248,10 +251,10 @@ spec:
           resources: {{- include "common.resources.preset" (dict "type" .Values.provisioning.resourcesPreset) | nindent 12 }}
           {{- end }}
           volumeMounts:
-            {{- if or .Values.log4j .Values.existingLog4jConfigMap }}
-            - name: log4j-config
-              mountPath: /opt/bitnami/kafka/config/log4j.properties
-              subPath: log4j.properties
+            {{- if or .Values.log4j2 .Values.existingLog4j2ConfigMap }}
+            - name: log4j2-config
+              mountPath: /opt/bitnami/kafka/config/log4j2.yaml
+              subPath: log4j2.yaml
             {{- end }}
             {{- if (regexFind "SSL" (upper .Values.listeners.client.protocol)) }}
             {{- if not (empty .Values.provisioning.auth.tls.certificatesSecret) }}
@@ -269,10 +272,10 @@ spec:
         {{- include "common.tplvalues.render" (dict "value" .Values.provisioning.sidecars "context" $) | nindent 8 }}
         {{- end }}
       volumes:
-        {{- if or .Values.log4j .Values.existingLog4jConfigMap }}
-        - name: log4j-config
+        {{- if or .Values.log4j2 .Values.existingLog4j2ConfigMap }}
+        - name: log4j2-config
           configMap:
-            name: {{ include "kafka.log4j.configMapName" . }}
+            name: {{ include "kafka.log4j2.configMapName" . }}
         {{- end }}
         {{- if (regexFind "SSL" (upper .Values.listeners.client.protocol)) }}
         {{- if not (empty .Values.provisioning.auth.tls.certificatesSecret) }}
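
Kafka 4.0 switches logging from log4j.properties to log4j2.yaml, hence the renamed mount and values above. A hypothetical override using the new log4j2 value (the layout mirrors upstream's sample log4j2 YAML; file and release names are illustrative):

    cat > /tmp/kafka-logging-values.yaml <<'EOF'
log4j2: |
  Configuration:
    Appenders:
      Console:
        name: STDOUT
        PatternLayout:
          Pattern: "[%d] %p %m (%c)%n"
    Loggers:
      Root:
        level: info
        AppenderRef:
          ref: STDOUT
EOF
    helm upgrade --install my-release oci://registry-1.docker.io/bitnamicharts/kafka -f /tmp/kafka-logging-values.yaml
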

+ 0 - 400
bitnami/kafka/templates/scripts-configmap.yaml

@@ -1,400 +0,0 @@
-{{- /*
-Copyright Broadcom, Inc. All Rights Reserved.
-SPDX-License-Identifier: APACHE-2.0
-*/}}
-
-{{- $releaseNamespace := include "common.names.namespace" . }}
-{{- $fullname := include "common.names.fullname" . }}
-{{- $clusterDomain := .Values.clusterDomain }}
-apiVersion: v1
-kind: ConfigMap
-metadata:
-  name: {{ printf "%s-scripts" $fullname }}
-  namespace: {{ $releaseNamespace  | quote }}
-  labels: {{- include "common.labels.standard" ( dict "customLabels" .Values.commonLabels "context" $ ) | nindent 4 }}
-  {{- if .Values.commonAnnotations }}
-  annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }}
-  {{- end }}
-data:
-  {{- if .Values.externalAccess.autoDiscovery.enabled }}
-  auto-discovery.sh: |-
-    #!/bin/bash
-    SVC_NAME="${MY_POD_NAME}-external"
-    AUTODISCOVERY_SERVICE_TYPE="${AUTODISCOVERY_SERVICE_TYPE:-}"
-    # Auxiliary functions
-    retry_while() {
-        local -r cmd="${1:?cmd is missing}"
-        local -r retries="${2:-12}"
-        local -r sleep_time="${3:-5}"
-        local return_value=1
-
-        read -r -a command <<< "$cmd"
-        for ((i = 1 ; i <= retries ; i+=1 )); do
-            "${command[@]}" && return_value=0 && break
-            sleep "$sleep_time"
-        done
-        return $return_value
-    }
-    k8s_svc_lb_ip() {
-        local namespace=${1:?namespace is missing}
-        local service=${2:?service is missing}
-        local service_ip=$(kubectl get svc "$service" -n "$namespace" -o jsonpath="{.status.loadBalancer.ingress[0].ip}")
-        local service_hostname=$(kubectl get svc "$service" -n "$namespace" -o jsonpath="{.status.loadBalancer.ingress[0].hostname}")
-
-        if [[ -n ${service_ip} ]]; then
-            echo "${service_ip}"
-        else
-            echo "${service_hostname}"
-        fi
-    }
-    k8s_svc_lb_ip_ready() {
-        local namespace=${1:?namespace is missing}
-        local service=${2:?service is missing}
-        [[ -n "$(k8s_svc_lb_ip "$namespace" "$service")" ]]
-    }
-    k8s_svc_node_port() {
-        local namespace=${1:?namespace is missing}
-        local service=${2:?service is missing}
-        local index=${3:-0}
-        local node_port="$(kubectl get svc "$service" -n "$namespace" -o jsonpath="{.spec.ports[$index].nodePort}")"
-        echo "$node_port"
-    }
-
-    if [[ "$AUTODISCOVERY_SERVICE_TYPE" = "LoadBalancer" ]]; then
-      # Wait until LoadBalancer IP is ready
-      retry_while "k8s_svc_lb_ip_ready {{ $releaseNamespace }} $SVC_NAME" || exit 1
-      # Obtain LoadBalancer external IP
-      k8s_svc_lb_ip "{{ $releaseNamespace }}" "$SVC_NAME" | tee "/shared/external-host.txt"
-    elif [[ "$AUTODISCOVERY_SERVICE_TYPE" = "NodePort" ]]; then
-      k8s_svc_node_port "{{ $releaseNamespace }}" "$SVC_NAME" | tee "/shared/external-port.txt"
-    else
-      echo "Unsupported autodiscovery service type: '$AUTODISCOVERY_SERVICE_TYPE'"
-      exit 1
-    fi
-  {{- end }}
-  kafka-init.sh: |-
-    #!/bin/bash
-
-    set -o errexit
-    set -o nounset
-    set -o pipefail
-
-    error(){
-      local message="${1:?missing message}"
-      echo "ERROR: ${message}"
-      exit 1
-    }
-
-    retry_while() {
-        local -r cmd="${1:?cmd is missing}"
-        local -r retries="${2:-12}"
-        local -r sleep_time="${3:-5}"
-        local return_value=1
-
-        read -r -a command <<< "$cmd"
-        for ((i = 1 ; i <= retries ; i+=1 )); do
-            "${command[@]}" && return_value=0 && break
-            sleep "$sleep_time"
-        done
-        return $return_value
-    }
-
-    replace_in_file() {
-        local filename="${1:?filename is required}"
-        local match_regex="${2:?match regex is required}"
-        local substitute_regex="${3:?substitute regex is required}"
-        local posix_regex=${4:-true}
-
-        local result
-
-        # We should avoid using 'sed in-place' substitutions
-        # 1) They are not compatible with files mounted from ConfigMap(s)
-        # 2) We found incompatibility issues with Debian10 and "in-place" substitutions
-        local -r del=$'\001' # Use a non-printable character as a 'sed' delimiter to avoid issues
-        if [[ $posix_regex = true ]]; then
-            result="$(sed -E "s${del}${match_regex}${del}${substitute_regex}${del}g" "$filename")"
-        else
-            result="$(sed "s${del}${match_regex}${del}${substitute_regex}${del}g" "$filename")"
-        fi
-        echo "$result" > "$filename"
-    }
-
-    kafka_conf_set() {
-        local file="${1:?missing file}"
-        local key="${2:?missing key}"
-        local value="${3:?missing value}"
-
-        # Check if the value was set before
-        if grep -q "^[#\\s]*$key\s*=.*" "$file"; then
-            # Update the existing key
-            replace_in_file "$file" "^[#\\s]*${key}\s*=.*" "${key}=${value}" false
-        else
-            # Add a new key
-            printf '\n%s=%s' "$key" "$value" >>"$file"
-        fi
-    }
-
-    replace_placeholder() {
-      local placeholder="${1:?missing placeholder value}"
-      local password="${2:?missing password value}"
-      local -r del=$'\001' # Use a non-printable character as a 'sed' delimiter to avoid issues with delimiter symbols in sed string
-      sed -i "s${del}$placeholder${del}$password${del}g" "$KAFKA_CONFIG_FILE"
-    }
-
-    append_file_to_kafka_conf() {
-        local file="${1:?missing source file}"
-        local conf="${2:?missing kafka conf file}"
-
-        cat "$1" >> "$2"
-    }
-
-    configure_external_access() {
-      # Configure external hostname
-      if [[ -f "/shared/external-host.txt" ]]; then
-        host=$(cat "/shared/external-host.txt")
-      elif [[ -n "${EXTERNAL_ACCESS_HOST:-}" ]]; then
-        host="$EXTERNAL_ACCESS_HOST"
-      elif [[ -n "${EXTERNAL_ACCESS_HOSTS_LIST:-}" ]]; then
-        read -r -a hosts <<<"$(tr ',' ' ' <<<"${EXTERNAL_ACCESS_HOSTS_LIST}")"
-        host="${hosts[$POD_ID]}"
-      elif [[ "$EXTERNAL_ACCESS_HOST_USE_PUBLIC_IP" =~ ^(yes|true)$ ]]; then
-        host=$(curl -s https://ipinfo.io/ip)
-      else
-        error "External access hostname not provided"
-      fi
-
-      # Configure external port
-      if [[ -f "/shared/external-port.txt" ]]; then
-        port=$(cat "/shared/external-port.txt")
-      elif [[ -n "${EXTERNAL_ACCESS_PORT:-}" ]]; then
-        if [[ "${EXTERNAL_ACCESS_PORT_AUTOINCREMENT:-}" =~ ^(yes|true)$ ]]; then
-          port="$((EXTERNAL_ACCESS_PORT + POD_ID))"
-        else
-          port="$EXTERNAL_ACCESS_PORT"
-        fi
-      elif [[ -n "${EXTERNAL_ACCESS_PORTS_LIST:-}" ]]; then
-        read -r -a ports <<<"$(tr ',' ' ' <<<"${EXTERNAL_ACCESS_PORTS_LIST}")"
-        port="${ports[$POD_ID]}"
-      else
-        error "External access port not provided"
-      fi
-      # Configure Kafka advertised listeners
-      sed -i -E "s|^(advertised\.listeners=\S+)$|\1,{{ upper .Values.listeners.external.name }}://${host}:${port}|" "$KAFKA_CONFIG_FILE"
-    }
-    {{- if (include "kafka.sslEnabled" .) }}
-    configure_kafka_tls() {
-      # Remove previously existing keystores and certificates, if any
-      rm -f /certs/kafka.keystore.jks /certs/kafka.truststore.jks
-      rm -f /certs/tls.crt /certs/tls.key /certs/ca.crt
-      find /certs -name "xx*" -exec rm {} \;
-      if [[ "${KAFKA_TLS_TYPE}" = "PEM" ]]; then
-        # Copy PEM certificate and key
-        if [[ -f "/mounted-certs/kafka-${POD_ROLE}-${POD_ID}.crt" && "/mounted-certs/kafka-${POD_ROLE}-${POD_ID}.key" ]]; then
-          cp "/mounted-certs/kafka-${POD_ROLE}-${POD_ID}.crt" /certs/tls.crt
-          # Copy the PEM key ensuring the key used PEM format with PKCS#8
-          openssl pkcs8 -topk8 -nocrypt -passin pass:"${KAFKA_TLS_PEM_KEY_PASSWORD:-}" -in "/mounted-certs/kafka-${POD_ROLE}-${POD_ID}.key" > /certs/tls.key
-        elif [[ -f /mounted-certs/kafka.crt && -f /mounted-certs/kafka.key ]]; then
-          cp "/mounted-certs/kafka.crt" /certs/tls.crt
-          # Copy the PEM key ensuring the key used PEM format with PKCS#8
-          openssl pkcs8 -topk8 -passin pass:"${KAFKA_TLS_PEM_KEY_PASSWORD:-}" -nocrypt -in "/mounted-certs/kafka.key" > /certs/tls.key
-        elif [[ -f /mounted-certs/tls.crt && -f /mounted-certs/tls.key ]]; then
-          cp "/mounted-certs/tls.crt" /certs/tls.crt
-          # Copy the PEM key ensuring the key used PEM format with PKCS#8
-          openssl pkcs8 -topk8 -passin pass:"${KAFKA_TLS_PEM_KEY_PASSWORD:-}" -nocrypt -in "/mounted-certs/tls.key" > /certs/tls.key
-        else
-          error "PEM key and cert files not found"
-        fi
-
-        {{- if not .Values.tls.pemChainIncluded }}
-        # Copy CA certificate
-        if [[ -f /mounted-certs/kafka-ca.crt ]]; then
-          cp /mounted-certs/kafka-ca.crt /certs/ca.crt
-        elif [[ -f /mounted-certs/ca.crt ]]; then
-          cp /mounted-certs/ca.crt /certs/ca.crt
-        else
-          error "CA certificate file not found"
-        fi
-        {{- else }}
-        # CA certificates are also included in the same certificate
-        # All public certs will be included in the truststore
-        cp /certs/tls.crt /certs/ca.crt
-        {{- end }}
-
-        # Create JKS keystore from PEM cert and key
-        openssl pkcs12 -export -in "/certs/tls.crt" \
-          -passout pass:"${KAFKA_TLS_KEYSTORE_PASSWORD}" \
-          -inkey "/certs/tls.key" \
-          -out "/certs/kafka.keystore.p12"
-        keytool -importkeystore -srckeystore "/certs/kafka.keystore.p12" \
-          -srcstoretype PKCS12 \
-          -srcstorepass "${KAFKA_TLS_KEYSTORE_PASSWORD}" \
-          -deststorepass "${KAFKA_TLS_KEYSTORE_PASSWORD}" \
-          -destkeystore "/certs/kafka.keystore.jks" \
-          -noprompt
-        # Create JKS truststore from CA cert
-        keytool -keystore /certs/kafka.truststore.jks -alias CARoot -import -file /certs/ca.crt -storepass "${KAFKA_TLS_TRUSTSTORE_PASSWORD}" -noprompt
-        # Remove extra files
-        rm -f "/certs/kafka.keystore.p12" "/certs/tls.crt" "/certs/tls.key" "/certs/ca.crt"
-      elif [[ "${KAFKA_TLS_TYPE}" = "JKS" ]]; then
-        if [[ -f "/mounted-certs/kafka-${POD_ROLE}-${POD_ID}.keystore.jks" ]]; then
-          cp "/mounted-certs/kafka-${POD_ROLE}-${POD_ID}.keystore.jks" /certs/kafka.keystore.jks
-        elif [[ -f {{ printf "/mounted-certs/%s" ( default "kafka.keystore.jks" .Values.tls.jksKeystoreKey) | quote }} ]]; then
-          cp {{ printf "/mounted-certs/%s" ( default "kafka.keystore.jks" .Values.tls.jksKeystoreKey) | quote }} /certs/kafka.keystore.jks
-        else
-          error "Keystore file not found"
-        fi
-
-        if [[ -f {{ printf "/mounted-certs/%s" ( default "kafka.truststore.jks" .Values.tls.jksTruststoreKey) | quote }} ]]; then
-          cp {{ printf "/mounted-certs/%s" ( default "kafka.truststore.jks" .Values.tls.jksTruststoreKey) | quote }} /certs/kafka.truststore.jks
-        else
-          error "Truststore file not found"
-        fi
-      else
-        error "Invalid type ${KAFKA_TLS_TYPE}"
-      fi
-
-      # Configure TLS password settings in Kafka configuration
-      [[ -n "${KAFKA_TLS_KEYSTORE_PASSWORD:-}" ]] && kafka_conf_set "$KAFKA_CONFIG_FILE" "ssl.keystore.password" "$KAFKA_TLS_KEYSTORE_PASSWORD"
-      [[ -n "${KAFKA_TLS_TRUSTSTORE_PASSWORD:-}" ]] && kafka_conf_set "$KAFKA_CONFIG_FILE" "ssl.truststore.password" "$KAFKA_TLS_TRUSTSTORE_PASSWORD"
-      [[ -n "${KAFKA_TLS_PEM_KEY_PASSWORD:-}" ]] && kafka_conf_set "$KAFKA_CONFIG_FILE" "ssl.key.password" "$KAFKA_TLS_PEM_KEY_PASSWORD"
-      # Avoid errors caused by previous checks
-      true
-    }
-    {{- end }}
-    {{- if and .Values.tls.zookeeper.enabled .Values.tls.zookeeper.existingSecret }}
-    configure_zookeeper_tls() {
-      # Remove previously existing keystores
-      rm -f /certs/zookeeper.keystore.jks /certs/zookeeper.truststore.jks
-      ZOOKEEPER_TRUSTSTORE={{ printf "/zookeeper-certs/%s" .Values.tls.zookeeper.existingSecretTruststoreKey | quote }}
-      ZOOKEEPER_KEYSTORE={{ printf "/zookeeper-certs/%s" .Values.tls.zookeeper.existingSecretKeystoreKey | quote }}
-      if [[ -f "$ZOOKEEPER_KEYSTORE" ]]; then
-        cp "$ZOOKEEPER_KEYSTORE" "/certs/zookeeper.keystore.jks"
-      else
-        error "Zookeeper keystore file not found"
-      fi
-      if [[ -f "$ZOOKEEPER_TRUSTSTORE" ]]; then
-        cp "$ZOOKEEPER_TRUSTSTORE" "/certs/zookeeper.truststore.jks"
-      else
-        error "Zookeeper keystore file not found"
-      fi
-      [[ -n "${KAFKA_ZOOKEEPER_TLS_KEYSTORE_PASSWORD:-}" ]] && kafka_conf_set "$KAFKA_CONFIG_FILE" "zookeeper.ssl.keystore.password" "${KAFKA_ZOOKEEPER_TLS_KEYSTORE_PASSWORD}"
-      [[ -n "${KAFKA_ZOOKEEPER_TLS_TRUSTSTORE_PASSWORD:-}" ]] && kafka_conf_set "$KAFKA_CONFIG_FILE" "zookeeper.ssl.truststore.password" "${KAFKA_ZOOKEEPER_TLS_TRUSTSTORE_PASSWORD}"
-      # Avoid errors caused by previous checks
-      true
-    }
-    {{- end }}
-
-    {{- if (include "kafka.saslEnabled" .) }}
-    configure_kafka_sasl() {
-
-      # Replace placeholders with passwords
-      {{- if regexFind "SASL" (upper .Values.listeners.interbroker.protocol) }}
-      {{- if (include "kafka.saslUserPasswordsEnabled" .) }}
-      replace_placeholder "interbroker-password-placeholder" "$KAFKA_INTER_BROKER_PASSWORD"
-      {{- end }}
-      {{- if (include "kafka.saslClientSecretsEnabled" .) }}
-      replace_placeholder "interbroker-client-secret-placeholder" "$KAFKA_INTER_BROKER_CLIENT_SECRET"
-      {{- end }}
-      {{- end -}}
-      {{- if and .Values.kraft.enabled (regexFind "SASL" (upper .Values.listeners.controller.protocol)) }}
-      {{- if (include "kafka.saslUserPasswordsEnabled" .) }}
-      replace_placeholder "controller-password-placeholder" "$KAFKA_CONTROLLER_PASSWORD"
-      {{- end }}
-      {{- if (include "kafka.saslClientSecretsEnabled" .) }}
-      replace_placeholder "controller-client-secret-placeholder" "$KAFKA_CONTROLLER_CLIENT_SECRET"
-      {{- end }}
-      {{- end }}
-      {{- if (include "kafka.client.saslEnabled" .)}}
-      read -r -a passwords <<<"$(tr ',;' ' ' <<<"${KAFKA_CLIENT_PASSWORDS:-}")"
-      for ((i = 0; i < ${#passwords[@]}; i++)); do
-          replace_placeholder "password-placeholder-${i}\"" "${passwords[i]}\""
-      done
-      {{- end }}
-      {{- if .Values.sasl.zookeeper.user }}
-      replace_placeholder "zookeeper-password-placeholder" "$KAFKA_ZOOKEEPER_PASSWORD"
-      {{- end }}
-    }
-    {{- end }}
-
-    {{- if .Values.externalAccess.autoDiscovery.enabled }}
-    # Wait for autodiscovery to finish
-    if [[ "${EXTERNAL_ACCESS_ENABLED:-false}" =~ ^(yes|true)$ ]]; then
-      retry_while "test -f /shared/external-host.txt -o -f /shared/external-port.txt" || error "Timed out waiting for autodiscovery init-container"
-    fi
-    {{- end }}
-
-    {{- if .Values.sasl.zookeeper.user }}
-    export KAFKA_CONFIG_FILE=/config/kafka_jaas.conf
-    cat << EOF > /config/kafka_jaas.conf
-    Client {
-      org.apache.kafka.common.security.plain.PlainLoginModule required
-      username="{{ .Values.sasl.zookeeper.user }}"
-      password="zookeeper-password-placeholder";
-    };
-    EOF
-    replace_placeholder "zookeeper-password-placeholder" "$KAFKA_ZOOKEEPER_PASSWORD"
-    {{- end }}
-
-    export KAFKA_CONFIG_FILE=/config/server.properties
-    cp /configmaps/server.properties $KAFKA_CONFIG_FILE
-
-    # Get pod ID and role, last and second last fields in the pod name respectively
-    POD_ID=$(echo "$MY_POD_NAME" | rev | cut -d'-' -f 1 | rev)
-    POD_ROLE=$(echo "$MY_POD_NAME" | rev | cut -d'-' -f 2 | rev)
-
-    # Configure node.id and/or broker.id
-    if [[ -f "/bitnami/kafka/data/meta.properties" ]]; then
-        if grep -q "broker.id" /bitnami/kafka/data/meta.properties; then
-          ID="$(grep "broker.id" /bitnami/kafka/data/meta.properties | awk -F '=' '{print $2}')"
-          {{- if or (and .Values.kraft.enabled (not .Values.broker.zookeeperMigrationMode)) (and (not .Values.zookeeper.enabled) (not .Values.externalZookeeper.servers)) }}
-          kafka_conf_set "$KAFKA_CONFIG_FILE" "node.id" "$ID"
-          {{- else }}
-          kafka_conf_set "$KAFKA_CONFIG_FILE" "broker.id" "$ID"
-          {{- end }}
-        else
-          ID="$(grep "node.id" /bitnami/kafka/data/meta.properties | awk -F '=' '{print $2}')"
-          kafka_conf_set "$KAFKA_CONFIG_FILE" "node.id" "$ID"
-        fi
-    else
-        ID=$((POD_ID + KAFKA_MIN_ID))
-        {{- if .Values.kraft.enabled }}
-        kafka_conf_set "$KAFKA_CONFIG_FILE" "node.id" "$ID"
-        {{- end }}
-        {{- if or .Values.zookeeper.enabled .Values.externalZookeeper.servers }}
-        kafka_conf_set "$KAFKA_CONFIG_FILE" "broker.id" "$ID"
-        {{- end }}
-    fi
-    {{- if not .Values.listeners.advertisedListeners }}
-    replace_placeholder "advertised-address-placeholder" "${MY_POD_NAME}.{{ $fullname }}-${POD_ROLE}-headless.{{ $releaseNamespace }}.svc.{{ $clusterDomain }}"
-    if [[ "${EXTERNAL_ACCESS_ENABLED:-false}" =~ ^(yes|true)$ ]]; then
-      configure_external_access
-    fi
-    {{- end }}
-    {{- if (include "kafka.sslEnabled" .) }}
-    configure_kafka_tls
-    {{- end }}
-    {{- if (include "kafka.saslEnabled" .) }}
-    configure_kafka_sasl
-    {{- end }}
-    {{- if and .Values.tls.zookeeper.enabled .Values.tls.zookeeper.existingSecret }}
-    configure_zookeeper_tls
-    {{- end }}
-    {{- if eq .Values.brokerRackAssignment "aws-az" }}
-    # Broker rack awareness
-    echo "Obtaining broker.rack for aws-az rack assignment"
-    EC2_METADATA_TOKEN=$(curl -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 60")
-    export BROKER_RACK=$(curl -H "X-aws-ec2-metadata-token: $EC2_METADATA_TOKEN" "http://169.254.169.254/latest/meta-data/placement/availability-zone-id")
-    kafka_conf_set "$KAFKA_CONFIG_FILE" "broker.rack" "$BROKER_RACK"
-    {{- end }}
-    {{- if eq .Values.brokerRackAssignment "azure" }}
-    # Broker rack awareness IGT Implemented
-    echo "Obtaining broker.rack for Azure rack assignment"
-    export LOCATION=$(curl -s -H Metadata:true --noproxy "*" "http://169.254.169.254/metadata/instance/compute/location?api-version={{ .Values.brokerRackAssignmentApiVersion }}&format=text")
-    export ZONE=$(curl -s -H Metadata:true --noproxy "*" "http://169.254.169.254/metadata/instance/compute/zone?api-version={{ .Values.brokerRackAssignmentApiVersion }}&format=text")
-    kafka_conf_set "$KAFKA_CONFIG_FILE" "broker.rack" "${LOCATION}-${ZONE}"
-    {{- end }}
-    if [ -f /secret-config/server-secret.properties ]; then
-      append_file_to_kafka_conf /secret-config/server-secret.properties $KAFKA_CONFIG_FILE
-    fi
-    {{- include "common.tplvalues.render" ( dict "value" .Values.extraInit "context" $ ) | nindent 4 }}
-

+ 18 - 19
bitnami/kafka/templates/secrets.yaml

@@ -36,9 +36,6 @@ data:
   system-user-password: {{ index (splitList "," (b64dec $secretValue)) 0 | b64enc | quote }}
   {{- end }}
   {{- end }}
-  {{- if or .Values.sasl.zookeeper.user .Values.zookeeper.auth.client.enabled }}
-  zookeeper-password: {{ include "common.secrets.passwords.manage" (dict "secret" $secretName "key" "zookeeper-password" "providedValues" (list "sasl.zookeeper.password" "zookeeper.auth.client.clientPassword") "failOnNew" false "context" $) }}
-  {{- end }}
   {{- if regexFind "SASL" (upper .Values.listeners.interbroker.protocol) }}
   {{- if (include "kafka.saslUserPasswordsEnabled" .) }}
   inter-broker-password: {{ include "common.secrets.passwords.manage" (dict "secret" $secretName "key" "inter-broker-password" "providedValues" (list "sasl.interbroker.password") "failOnNew" false "context" $) }}
@@ -55,8 +52,25 @@ data:
   controller-client-secret: {{ include "common.secrets.passwords.manage" (dict "secret" $secretName "key" "controller-client-secret" "providedValues" (list "sasl.controller.clientSecret") "failOnNew" false "context" $) }}
   {{- end }}
   {{- end }}
+{{- if not .Values.existingKraftSecret }}
+---
+apiVersion: v1
+kind: Secret
+metadata:
+  name: {{ printf "%s-kraft" (include "common.names.fullname" .) }}
+  namespace: {{ include "common.names.namespace" . | quote }}
+  labels: {{- include "common.labels.standard" ( dict "customLabels" .Values.commonLabels "context" $ ) | nindent 4 }}
+  {{- if .Values.commonAnnotations }}
+  annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }}
+  {{- end }}
+type: Opaque
+data:
+  cluster-id: {{ include "common.secrets.passwords.manage" (dict "secret" (printf "%s-kraft" (include "common.names.fullname" .)) "key" "cluster-id" "providedValues" (list "clusterId") "length" 22 "context" $) }}
+  {{- range $i := until (int .Values.controller.replicaCount) }}
+  controller-{{ $i }}-id: {{ include "common.secrets.passwords.manage" (dict "secret" (printf "%s-kraft" (include "common.names.fullname" $)) "key" (printf "controller-%d-id" $i) "providedValues" (list "") "length" 22 "context" $) }}
+  {{- end }}
+{{- end }}
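
A hypothetical way to pre-create this secret and point existingKraftSecret at it (names are illustrative; the chart generates 22-character IDs, one directory ID per controller replica, and kafka-storage.sh random-uuid is Kafka's canonical generator):

    # Rough stand-in generator; prefer 'kafka-storage.sh random-uuid' where available
    gen_id() { head -c 64 /dev/urandom | base64 | tr -dc 'A-Za-z0-9' | head -c 22; }
    kubectl create secret generic my-release-kafka-kraft --namespace default \
      --from-literal=cluster-id="$(gen_id)" \
      --from-literal=controller-0-id="$(gen_id)" \
      --from-literal=controller-1-id="$(gen_id)" \
      --from-literal=controller-2-id="$(gen_id)"
    helm upgrade --install my-release oci://registry-1.docker.io/bitnamicharts/kafka --set existingKraftSecret=my-release-kafka-kraft
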
 {{- if .Values.serviceBindings.enabled }}
-
 {{- if (include "kafka.client.saslEnabled" .) }}
 {{- $host := list }}
 {{- $port := .Values.service.ports.client }}
@@ -115,18 +129,3 @@ data:
 {{- end }}
 {{- end }}
 {{- end }}
-{{- if and .Values.kraft.enabled (not .Values.kraft.existingClusterIdSecret) }}
----
-apiVersion: v1
-kind: Secret
-metadata:
-  name: {{ printf "%s-kraft-cluster-id" (include "common.names.fullname" .) }}
-  namespace: {{ include "common.names.namespace" . | quote }}
-  labels: {{- include "common.labels.standard" ( dict "customLabels" .Values.commonLabels "context" $ ) | nindent 4 }}
-  {{- if .Values.commonAnnotations }}
-  annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }}
-  {{- end }}
-type: Opaque
-data:
-  kraft-cluster-id: {{ include "common.secrets.passwords.manage" (dict "secret" (printf "%s-kraft-cluster-id" (include "common.names.fullname" .)) "key" "kraft-cluster-id" "providedValues" (list "kraft.clusterId") "length" 22 "context" $) }}
-{{- end }}

+ 1 - 1
bitnami/kafka/templates/svc.yaml

@@ -64,7 +64,7 @@ spec:
     {{- end }}
   selector: {{- include "common.labels.matchLabels" ( dict "customLabels" .Values.commonLabels "context" $ ) | nindent 4 }}
     app.kubernetes.io/part-of: kafka
-    {{- if and .Values.kraft.enabled .Values.controller.controllerOnly }}
+    {{- if .Values.controller.controllerOnly }}
     app.kubernetes.io/component: broker
     {{- end }}
   {{- with .Values.service.ipFamilyPolicy }}
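
This selector change matters for the split topology in which controller-eligible nodes take no client traffic; a hypothetical values sketch of that layout:

    cat > /tmp/kafka-topology-values.yaml <<'EOF'
controller:
  controllerOnly: true
  replicaCount: 3
broker:
  replicaCount: 3
EOF
    helm upgrade --install my-release oci://registry-1.docker.io/bitnamicharts/kafka -f /tmp/kafka-topology-values.yaml
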

+ 26 - 54
bitnami/kafka/templates/tls-secret.yaml

@@ -3,36 +3,26 @@ Copyright Broadcom, Inc. All Rights Reserved.
 SPDX-License-Identifier: APACHE-2.0
 */}}
 
-{{- if (include "kafka.createTlsSecret" .) }}
+{{- if include "kafka.createTlsSecret" . }}
 {{- $releaseNamespace := include "common.names.namespace" . }}
 {{- $clusterDomain := .Values.clusterDomain }}
 {{- $fullname := include "common.names.fullname" . }}
-{{- $secretName := printf "%s-tls" (include "common.names.fullname" .) }}
-{{- $altNames := list (printf "%s.%s.svc.%s" $fullname $releaseNamespace $clusterDomain) (printf "%s.%s" $fullname $releaseNamespace) $fullname }}
-{{- $replicaCount := int .Values.broker.replicaCount }}
-{{- range $i := until $replicaCount }}
-{{- $replicaHost := printf "%s-broker-%d.%s-broker-headless" $fullname $i $fullname }}
-{{- $altNames = append $altNames (printf "%s.%s.svc.%s" $replicaHost $releaseNamespace $clusterDomain) }}
-{{- $altNames = append $altNames (printf "%s.%s" $replicaHost $releaseNamespace) }}
-{{- $altNames = append $altNames $replicaHost }}
-{{- end }}
+{{- $secretName := include "kafka.tlsSecretName" . }}
+{{- $altNames := list (printf "%s.%s.svc.%s" $fullname $releaseNamespace $clusterDomain) (printf "%s.%s" $fullname $releaseNamespace) $fullname "127.0.0.1" "localhost" }}
+{{- $controllerSvcName := printf "%s-headless" (include "kafka.controller.fullname" .) | trunc 63 | trimSuffix "-" }}
+{{- $brokerSvcName := printf "%s-headless" (include "kafka.broker.fullname" .) | trunc 63 | trimSuffix "-" }}
+{{- $altNames = concat $altNames (list (printf "*.%s.%s.svc.%s" $controllerSvcName $releaseNamespace $clusterDomain) (printf "*.%s.%s" $controllerSvcName $releaseNamespace) (printf "*.%s" $controllerSvcName)) }}
+{{- $altNames = concat $altNames (list (printf "*.%s.%s.svc.%s" $brokerSvcName $releaseNamespace $clusterDomain) (printf "*.%s.%s" $brokerSvcName $releaseNamespace) (printf "*.%s" $brokerSvcName)) }}
 {{- if .Values.externalAccess.enabled -}}
-{{- with .Values.externalAccess.broker.service.domain }}
-{{- $altNames = append $altNames . }}
-{{- end }}
-{{- with .Values.externalAccess.controller.service.domain }}
-{{- $altNames = append $altNames . }}
-{{- end }}
-{{- end }}
-{{- with .Values.tls.customAltNames }}
-{{- $altNames = concat $altNames . }}
+  {{- with .Values.externalAccess.broker.service.domain }}
+    {{- $altNames = append $altNames . }}
+  {{- end }}
+  {{- with .Values.externalAccess.controller.service.domain }}
+    {{- $altNames = append $altNames . }}
+  {{- end }}
 {{- end }}
-{{- $replicaCount := int .Values.controller.replicaCount }}
-{{- range $i := until $replicaCount }}
-{{- $replicaHost := printf "%s-controller-%d.%s-controller-headless" $fullname $i $fullname }}
-{{- $altNames = append $altNames (printf "%s.%s.svc.%s" $replicaHost $releaseNamespace $clusterDomain) }}
-{{- $altNames = append $altNames (printf "%s.%s" $replicaHost $releaseNamespace) }}
-{{- $altNames = append $altNames $replicaHost }}
+{{- with .Values.tls.autoGenerated.customAltNames }}
+  {{- $altNames = concat $altNames . }}
 {{- end }}
 {{- $ca := genCA "kafka-ca" 365 }}
 {{- $cert := genSignedCert $fullname nil $altNames 365 $ca }}
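
The wildcard SANs above let a single certificate cover every per-pod headless DNS name. One way to verify what was generated (release and namespace names are illustrative):

    kubectl get secret my-release-kafka-tls --namespace default \
      -o jsonpath='{.data.tls\.crt}' | base64 -d \
      | openssl x509 -noout -ext subjectAltName
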
@@ -42,52 +32,34 @@ metadata:
   name: {{ $secretName }}
   namespace: {{ include "common.names.namespace" . | quote }}
   labels: {{- include "common.labels.standard" ( dict "customLabels" .Values.commonLabels "context" $ ) | nindent 4 }}
+    app.kubernetes.io/part-of: kafka
   {{- if .Values.commonAnnotations }}
   annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }}
   {{- end }}
-type: Opaque
+type: kubernetes.io/tls
 data:
-  kafka.crt: {{ include "common.secrets.lookup" (dict "secret" $secretName "key" "kafka.crt" "defaultValue" $cert.Cert "context" $) }}
-  kafka.key: {{ include "common.secrets.lookup" (dict "secret" $secretName "key" "kafka.key" "defaultValue" $cert.Key "context" $) }}
-  kafka-ca.crt: {{ include "common.secrets.lookup" (dict "secret" $secretName "key" "kafka-ca.crt" "defaultValue" $ca.Cert "context" $) }}
+  ca.crt: {{ include "common.secrets.lookup" (dict "secret" $secretName "key" "ca.crt" "defaultValue" $ca.Cert "context" $) }}
+  tls.crt: {{ include "common.secrets.lookup" (dict "secret" $secretName "key" "tls.crt" "defaultValue" $cert.Cert "context" $) }}
+  tls.key: {{ include "common.secrets.lookup" (dict "secret" $secretName "key" "tls.key" "defaultValue" $cert.Key "context" $) }}
 ---
 {{- end }}
-{{- if (include "kafka.createTlsPasswordsSecret" .) }}
+{{- if include "kafka.createTlsPasswordsSecret" . }}
+{{- $secretName := include "kafka.tlsPasswordsSecretName" . }}
 apiVersion: v1
 kind: Secret
 metadata:
-  name: {{ printf "%s-tls-passwords" (include "common.names.fullname" .) }}
+  name: {{ $secretName }}
   namespace: {{ include "common.names.namespace" . | quote }}
   labels: {{- include "common.labels.standard" ( dict "customLabels" .Values.commonLabels "context" $ ) | nindent 4 }}
+    app.kubernetes.io/part-of: kafka
   {{- if .Values.commonAnnotations }}
   annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }}
   {{- end }}
 type: Opaque
 data:
-  {{ .Values.tls.passwordsSecretKeystoreKey }}: {{ include "common.secrets.passwords.manage" (dict "secret" (printf "%s-tls-passwords" (include "common.names.fullname" .)) "key" .Values.tls.passwordsSecretKeystoreKey "providedValues" (list "tls.keystorePassword") "context" $) }}
-  {{ .Values.tls.passwordsSecretTruststoreKey }}: {{ include "common.secrets.passwords.manage" (dict "secret" (printf "%s-tls-passwords" (include "common.names.fullname" .)) "key" .Values.tls.passwordsSecretTruststoreKey "providedValues" (list "tls.truststorePassword") "context" $) }}
+  {{ .Values.tls.passwordsSecretKeystoreKey }}: {{ include "common.secrets.passwords.manage" (dict "secret" $secretName "key" .Values.tls.passwordsSecretKeystoreKey "providedValues" (list "tls.keystorePassword") "context" $) }}
+  {{ .Values.tls.passwordsSecretTruststoreKey }}: {{ include "common.secrets.passwords.manage" (dict "secret" $secretName "key" .Values.tls.passwordsSecretTruststoreKey "providedValues" (list "tls.truststorePassword") "context" $) }}
   {{- if .Values.tls.keyPassword }}
   {{ default "key-password" .Values.tls.passwordsSecretPemPasswordKey }}: {{ .Values.tls.keyPassword | b64enc | quote }}
   {{- end }}
----
-{{- end }}
-{{- if (include "kafka.zookeeper.createTlsPasswordsSecret" .) }}
-apiVersion: v1
-kind: Secret
-metadata:
-  name: {{ printf "%s-zookeeper-tls-passwords" (include "common.names.fullname" .) }}
-  namespace: {{ include "common.names.namespace" . | quote }}
-  labels: {{- include "common.labels.standard" ( dict "customLabels" .Values.commonLabels "context" $ ) | nindent 4 }}
-  {{- if .Values.commonAnnotations }}
-  annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }}
-  {{- end }}
-type: Opaque
-data:
-  {{- if .Values.tls.zookeeper.keystorePassword }}
-  {{ .Values.tls.zookeeper.passwordsSecretKeystoreKey }}: {{ .Values.tls.zookeeper.keystorePassword | b64enc | quote }}
-  {{- end }}
-  {{- if .Values.tls.zookeeper.truststorePassword }}
-  {{ .Values.tls.zookeeper.passwordsSecretTruststoreKey }}: {{ .Values.tls.zookeeper.truststorePassword | b64enc | quote }}
-  {{- end }}
----
 {{- end }}
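
When managing the store passwords externally instead, a hypothetical pre-created secret using the default key names, consumed via tls.passwordsSecret (secret name is illustrative):

    kubectl create secret generic my-kafka-tls-passwords --namespace default \
      --from-literal=keystore-password='REPLACE_WITH_KEYSTORE_PASSWORD' \
      --from-literal=truststore-password='REPLACE_WITH_TRUSTSTORE_PASSWORD'
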

+ 323 - 381
bitnami/kafka/values.yaml

@@ -5,12 +5,10 @@
 ## Global Docker image parameters
 ## Please, note that this will override the image parameters, including dependencies, configured to use the global value
 ## Current available global Docker image parameters: imageRegistry, imagePullSecrets and storageClass
-##
 
 ## @param global.imageRegistry Global Docker image registry
 ## @param global.imagePullSecrets Global Docker registry secret names as an array
 ## @param global.defaultStorageClass Global default StorageClass for Persistent Volume(s)
-## @param global.storageClass DEPRECATED: use global.defaultStorageClass instead
 ##
 global:
   imageRegistry: ""
@@ -20,7 +18,6 @@ global:
   ##
   imagePullSecrets: []
   defaultStorageClass: ""
-  storageClass: ""
   ## Security parameters
   ##
   security:
@@ -35,8 +32,8 @@ global:
       ## @param global.compatibility.openshift.adaptSecurityContext Adapt the securityContext sections of the deployment to make them compatible with Openshift restricted-v2 SCC: remove runAsUser, runAsGroup and fsGroup and let the platform use their allowed default IDs. Possible values: auto (apply if the detected running cluster is Openshift), force (perform the adaptation always), disabled (do not perform adaptation)
       ##
       adaptSecurityContext: auto
+
 ## @section Common parameters
-##
 
 ## @param kubeVersion Override Kubernetes version
 ##
@@ -50,6 +47,9 @@ nameOverride: ""
 ## @param fullnameOverride String to fully override common.names.fullname
 ##
 fullnameOverride: ""
+## @param namespaceOverride String to fully override common.names.namespace
+##
+namespaceOverride: ""
 ## @param clusterDomain Default Kubernetes cluster domain
 ##
 clusterDomain: cluster.local
@@ -62,27 +62,27 @@ commonAnnotations: {}
 ## @param extraDeploy Array of extra objects to deploy with the release
 ##
 extraDeploy: []
-## @param serviceBindings.enabled Create secret for service binding (Experimental)
-## Ref: https://servicebinding.io/service-provider/
+## @param usePasswordFiles Mount credentials as files instead of using environment variables
 ##
-serviceBindings:
-  enabled: false
-## Enable diagnostic mode in the statefulset
+usePasswordFiles: true
+## Diagnostic mode
+## @param diagnosticMode.enabled Enable diagnostic mode (all probes will be disabled and the command will be overridden)
+## @param diagnosticMode.command Command to override all containers in the chart release
+## @param diagnosticMode.args Args to override all containers in the chart release
 ##
 diagnosticMode:
-  ## @param diagnosticMode.enabled Enable diagnostic mode (all probes will be disabled and the command will be overridden)
-  ##
   enabled: false
-  ## @param diagnosticMode.command Command to override all containers in the statefulset
-  ##
   command:
     - sleep
-  ## @param diagnosticMode.args Args to override all containers in the statefulset
-  ##
   args:
     - infinity
-## @section Kafka parameters
+## @param serviceBindings.enabled Create secret for service binding (Experimental)
+## Ref: https://servicebinding.io/service-provider/
 ##
+serviceBindings:
+  enabled: false
+
+## @section Kafka common parameters
 
 ## Bitnami Kafka image version
 ## ref: https://hub.docker.com/r/bitnami/kafka/tags/
@@ -97,7 +97,7 @@ diagnosticMode:
 image:
   registry: docker.io
   repository: bitnami/kafka
-  tag: 3.9.0-debian-12-r12
+  tag: 4.0.0-debian-12-r0
   digest: ""
   ## Specify a imagePullPolicy
   ## ref: https://kubernetes.io/docs/concepts/containers/images/#pre-pulled-images
@@ -114,54 +114,57 @@ image:
   ## Set to true if you would like to see extra information on logs
   ##
   debug: false
-## @param extraInit Additional content for the kafka init script, rendered as a template.
+
+## @param clusterId Kafka KRaft cluster ID (ignored if existingKraftSecret is set). A random cluster ID is generated the first time KRaft is initialized if not set.
+## NOTE: Already-initialized Kafka nodes use the cluster ID stored in their persistent storage.
+## If reusing existing PVCs, make sure clusterId matches the stored cluster ID; otherwise, new nodes will fail to join the cluster.
+## If the cluster ID stored in the secret does not match the value in /bitnami/kafka/data/meta.properties, remove the secret and upgrade the chart setting the correct value.
 ##
-extraInit: ""
-## @param config Configuration file for Kafka, rendered as a template. Auto-generated based on chart values when not specified.
-## @param existingConfigmap ConfigMap with Kafka Configuration
-## NOTE: This will override the configuration based on values, please act carefully
-## If both are set, the existingConfigMap will be used.
+clusterId: ""
+## @param existingKraftSecret Name of the secret containing the Kafka KRaft Cluster ID and one directory ID per controller replica
 ##
-config: ""
-existingConfigmap: ""
-## @param extraConfig Additional configuration to be appended at the end of the generated Kafka configuration file.
+existingKraftSecret: ""
+## @param config Specify content for Kafka configuration (auto-generated based on other parameters otherwise)
+## NOTE: This will override the configuration based on values, please act carefully
+## Use simple key-value YAML format; the chart transforms it into Java properties format, e.g.:
+##    process.roles: broker
+## ... will be transformed to:
+##    process.roles=broker
 ##
-extraConfig: ""
-## @param extraConfigYaml Additional configuration in yaml format to be appended at the end of the generated Kafka configuration file.
+config: {}
+## @param overrideConfiguration Kafka common configuration override. Values defined here take precedence over the ones defined in `config`
 ##
-## E.g.
-## extraConfigYaml:
-##   default.replication.factor: 3
+overrideConfiguration: {}
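
A hypothetical override file showing the key-value form, with overrideConfiguration layered on top of config (keys and values are illustrative; default.replication.factor mirrors the example removed above):

    cat > /tmp/kafka-config-values.yaml <<'EOF'
config:
  default.replication.factor: 3
  offsets.topic.replication.factor: 3
overrideConfiguration:
  num.io.threads: 16
EOF
    helm upgrade --install my-release oci://registry-1.docker.io/bitnamicharts/kafka -f /tmp/kafka-config-values.yaml
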
+## @param existingConfigmap Name of an existing ConfigMap with the Kafka configuration
 ##
-extraConfigYaml: {}
-## @param secretConfig Additional configuration to be appended at the end of the generated Kafka configuration file.
-## This value will be stored in a secret.
+existingConfigmap: ""
+## @param secretConfig Additional configuration to be appended at the end of the generated Kafka configuration (stored in a secret)
 ##
 secretConfig: ""
-## @param existingSecretConfig Secret with additonal configuration that will be appended to the end of the generated Kafka configuration file
+## @param existingSecretConfig Secret with additional configuration that will be appended to the end of the generated Kafka configuration
 ## The key for the configuration should be: server-secret.properties
 ## NOTE: This will override secretConfig value
 ##
 existingSecretConfig: ""
-## @param log4j An optional log4j.properties file to overwrite the default of the Kafka brokers
-## An optional log4j.properties file to overwrite the default of the Kafka brokers
-## ref: https://github.com/apache/kafka/blob/trunk/config/log4j.properties
+## @param log4j2 Specify content for Kafka log4j2 configuration (default one is used otherwise)
+## ref: https://github.com/apache/kafka/blob/trunk/config/log4j2.yaml
 ##
-log4j: ""
-## @param existingLog4jConfigMap The name of an existing ConfigMap containing a log4j.properties file
-## The name of an existing ConfigMap containing a log4j.properties file
-## NOTE: this will override `log4j`
+log4j2: ""
+## @param existingLog4j2ConfigMap The name of an existing ConfigMap containing the log4j2.yaml file
 ##
-existingLog4jConfigMap: ""
+existingLog4j2ConfigMap: ""
 ## @param heapOpts Kafka Java Heap configuration
 ##
 heapOpts: -XX:InitialRAMPercentage=75 -XX:MaxRAMPercentage=75
-## @param brokerRackAssignment Set Broker Assignment for multi tenant environment Allowed values: `aws-az`, `azure`
+## @param brokerRackAwareness.enabled Enable Kafka Rack Awareness
+## @param brokerRackAwareness.cloudProvider Cloud provider to use to set Broker Rack Awareness. Allowed values: `aws-az`, `azure`
+## @param brokerRackAwareness.azureApiVersion Metadata API version to use when brokerRackAwareness.cloudProvider is set to `azure`
 ## ref: https://cwiki.apache.org/confluence/display/KAFKA/KIP-392%3A+Allow+consumers+to+fetch+from+closest+replica
 ##
-brokerRackAssignment: ""
-## @param brokerRackAssignmentApiVersion Set Broker Assignment API version when brokerRackAssignment set to : `azure`
-brokerRackAssignmentApiVersion: "2023-11-15"
+brokerRackAwareness:
+  enabled: false
+  cloudProvider: ""
+  azureApiVersion: "2023-11-15"
 ## @param interBrokerProtocolVersion Override the setting 'inter.broker.protocol.version' during the ZK migration.
 ## Ref. https://docs.confluent.io/platform/current/installation/migrate-zk-kraft.html
 ##
@@ -236,7 +239,7 @@ sasl:
   ## @param sasl.controllerMechanism SASL mechanism for controller communications.
   ##
   controllerMechanism: PLAIN
-  ## Settings for oauthbearer mechanism
+  ## Settings for OAuthBearer mechanism
   ## @param sasl.oauthbearer.tokenEndpointUrl The URL for the OAuth/OIDC identity provider
   ## @param sasl.oauthbearer.jwksEndpointUrl The OAuth/OIDC provider URL from which the provider's JWKS (JSON Web Key Set) can be retrieved
   ## @param sasl.oauthbearer.expectedAudience The comma-delimited setting for the broker to use to verify that the JWT was issued for one of the expected audiences
@@ -277,18 +280,11 @@ sasl:
     users:
       - user1
     passwords: ""
-  ## Credentials for Zookeeper communications.
-  ## @param sasl.zookeeper.user Username for zookeeper communications when SASL is enabled.
-  ## @param sasl.zookeeper.password Password for zookeeper communications when SASL is enabled.
-  ##
-  zookeeper:
-    user: ""
-    password: ""
-  ## @param sasl.existingSecret Name of the existing secret containing credentials for clientUsers, interBrokerUser, controllerUser and zookeeperUser
+  ## @param sasl.existingSecret Name of the existing secret containing credentials for client.users, interbroker.user and controller.user
   ## Create this secret running the command below where SECRET_NAME is the name of the secret you want to create:
-  ##       kubectl create secret generic SECRET_NAME --from-literal=client-passwords=CLIENT_PASSWORD1,CLIENT_PASSWORD2 --from-literal=inter-broker-password=INTER_BROKER_PASSWORD --from-literal=inter-broker-client-secret=INTER_BROKER_CLIENT_SECRET --from-literal=controller-password=CONTROLLER_PASSWORD --from-literal=controller-client-secret=CONTROLLER_CLIENT_SECRET --from-literal=zookeeper-password=ZOOKEEPER_PASSWORD
-  ## The client secrets are only required when using oauthbearer as sasl mechanism.
-  ## Client, interbroker and controller passwords are only required if the sasl mechanism includes something other than oauthbearer.
+  ##       kubectl create secret generic SECRET_NAME --from-literal=client-passwords=CLIENT_PASSWORD1,CLIENT_PASSWORD2 --from-literal=inter-broker-password=INTER_BROKER_PASSWORD --from-literal=inter-broker-client-secret=INTER_BROKER_CLIENT_SECRET --from-literal=controller-password=CONTROLLER_PASSWORD --from-literal=controller-client-secret=CONTROLLER_CLIENT_SECRET
+  ## The client secrets are only required when using OAuthBearer as SASL mechanism.
+  ## Client, inter-broker and controller passwords are only required if the SASL mechanism includes something other than OAuthBearer.
   ##
   existingSecret: ""
 ## @section Kafka TLS parameters
@@ -302,6 +298,27 @@ tls:
   ## Certificates must be in proper order, where the top certificate is the leaf and the bottom certificate is the top-most intermediate CA.
   ##
   pemChainIncluded: false
+  ## @param tls.autoGenerated.enabled Enable automatic generation of TLS certificates (only supported if `tls.type` is `PEM`)
+  ## @param tls.autoGenerated.engine Mechanism to generate the certificates (allowed values: helm, cert-manager)
+  ## @param tls.autoGenerated.customAltNames List of additional subject alternative names (SANs) for the automatically generated TLS certificates.
+  ## @param tls.autoGenerated.certManager.existingIssuer The name of an existing Issuer to use for generating the certificates (only for `cert-manager` engine)
+  ## @param tls.autoGenerated.certManager.existingIssuerKind Existing Issuer kind, defaults to Issuer (only for `cert-manager` engine)
+  ## @param tls.autoGenerated.certManager.keyAlgorithm Key algorithm for the certificates (only for `cert-manager` engine)
+  ## @param tls.autoGenerated.certManager.keySize Key size for the certificates (only for `cert-manager` engine)
+  ## @param tls.autoGenerated.certManager.duration Duration for the certificates (only for `cert-manager` engine)
+  ## @param tls.autoGenerated.certManager.renewBefore Renewal period for the certificates (only for `cert-manager` engine)
+  ##
+  autoGenerated:
+    enabled: true
+    engine: helm
+    customAltNames: []
+    certManager:
+      existingIssuer: ""
+      existingIssuerKind: ""
+      keySize: 2048
+      keyAlgorithm: RSA
+      duration: 2160h
+      renewBefore: 360h
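
A hypothetical snippet delegating certificate generation to cert-manager through an existing ClusterIssuer (issuer name is illustrative):

    cat > /tmp/kafka-tls-values.yaml <<'EOF'
tls:
  autoGenerated:
    enabled: true
    engine: cert-manager
    certManager:
      existingIssuer: my-cluster-issuer
      existingIssuerKind: ClusterIssuer
EOF
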
   ## @param tls.existingSecret Name of the existing secret containing the TLS certificates for the Kafka nodes.
   ## When using 'jks' format for certificates, each secret should contain a truststore and a keystore.
   ## Create these secrets following the steps below:
@@ -317,24 +334,16 @@ tls:
   ## When using 'pem' format for certificates, each secret should contain a public CA certificate, a public certificate and one private key.
   ## Create these secrets following the steps below:
   ## 1) Create a certificate key and signing request per Kafka broker, and sign the signing request with your CA
-  ## 2) Rename your CA file to `kafka-ca.crt`.
+  ## 2) Rename your CA file to `ca.crt`.
   ## 3) Rename your certificates to `kafka-X.tls.crt` where X is the ID of each Kafka broker.
  ## 4) Rename your keys to `kafka-X.tls.key` where X is the ID of each Kafka broker.
  ## 5) Run the command below one time per broker to create its associated secret (SECRET_NAME_X is the name of the secret you want to create):
-  ##      kubectl create secret generic SECRET_NAME_0 --from-file=kafka-ca.crt=./kafka-ca.crt --from-file=kafka-controller-0.crt=./kafka-controller-0.crt --from-file=kafka-controller-0.key=./kafka-controller-0.key \
+  ##      kubectl create secret generic SECRET_NAME_0 --from-file=ca.crt=./ca.crt --from-file=kafka-controller-0.crt=./kafka-controller-0.crt --from-file=kafka-controller-0.key=./kafka-controller-0.key \
   ##        --from-file=kafka-broker-0.crt=./kafka-broker-0.crt --from-file=kafka-broker-0.key=./kafka-broker-0.key ...
   ##
-  ## NOTE: Alternatively, a single key and certificate can be provided for all nodes under the keys 'kafka.crt' and 'kafka.key'. These certificates will be used by all nodes unless overridden by the 'kafka-<role>-X.key' and 'kafka-<role>-X.crt' files
   ## NOTE: Alternatively, a single key and certificate can be provided for all nodes under the keys 'tls.crt' and 'tls.key'. These certificates will be used by all nodes unless overridden by the 'kafka-<role>-X.key' and 'kafka-<role>-X.crt' files
   ##
   existingSecret: ""
-  ## @param tls.autoGenerated Generate automatically self-signed TLS certificates for Kafka brokers. Currently only supported if `tls.type` is `PEM`
-  ## Note: ignored when using 'jks' format or `tls.existingSecret` is not empty
-  ##
-  autoGenerated: false
-  ## @param tls.customAltNames Optionally specify extra list of additional subject alternative names (SANs) for the automatically generated TLS certificates.
-  ##
-  customAltNames: []
   ## @param tls.passwordsSecret Name of the secret containing the password to access the JKS files or PEM key when they are password-protected. (`key`: `password`)
   ##
   passwordsSecret: ""
@@ -380,43 +389,6 @@ tls:
   ## ref: https://docs.confluent.io/current/kafka/authentication_ssl.html#optional-settings
   ##
   sslClientAuth: "required"
-  ## Zookeeper TLS connection configuration for Kafka
-  ##
-  zookeeper:
-    ## @param tls.zookeeper.enabled Enable TLS for Zookeeper client connections.
-    ##
-    enabled: false
-    ## @param tls.zookeeper.verifyHostname Hostname validation.
-    ##
-    verifyHostname: true
-    ## @param tls.zookeeper.existingSecret Name of the existing secret containing the TLS certificates for ZooKeeper client communications.
-    ##
-    existingSecret: ""
-    ## @param tls.zookeeper.existingSecretKeystoreKey The secret key from the  tls.zookeeper.existingSecret containing the Keystore.
-    ##
-    existingSecretKeystoreKey: zookeeper.keystore.jks
-    ## @param tls.zookeeper.existingSecretTruststoreKey The secret key from the tls.zookeeper.existingSecret containing the Truststore.
-    ##
-    existingSecretTruststoreKey: zookeeper.truststore.jks
-    ## @param tls.zookeeper.passwordsSecret Existing secret containing Keystore and Truststore passwords.
-    ##
-    passwordsSecret: ""
-    ## @param tls.zookeeper.passwordsSecretKeystoreKey The secret key from the tls.zookeeper.passwordsSecret containing the password for the Keystore.
-    ## If no keystore password is included in the passwords secret, set this value to an empty string.
-    ##
-    passwordsSecretKeystoreKey: keystore-password
-    ## @param tls.zookeeper.passwordsSecretTruststoreKey The secret key from the tls.zookeeper.passwordsSecret containing the password for the Truststore.
-    ## If no truststore password is included in the passwords secret, set this value to an empty string.
-    ##
-    passwordsSecretTruststoreKey: truststore-password
-    ## @param tls.zookeeper.keystorePassword Password to access the JKS keystore when it is password-protected. Ignored when 'tls.passwordsSecret' is provided.
-    ## When using tls.type=PEM, the generated keystore will use this password or randomly generate one.
-    ##
-    keystorePassword: ""
-    ## @param tls.zookeeper.truststorePassword Password to access the JKS truststore when it is password-protected. Ignored when 'tls.passwordsSecret' is provided.
-    ## When using tls.type=PEM, the generated keystore will use this password or randomly generate one.
-    ##
-    truststorePassword: ""
 ## @param extraEnvVars Extra environment variables to add to Kafka pods
 ## ref: https://github.com/bitnami/containers/tree/main/bitnami/kafka#configuration
 ## e.g:
@@ -470,7 +442,7 @@ sidecars: []
 initContainers: []
 ## DNS-Pod services
 ## Ref: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
-## @param dnsPolicy Specifies the DNS policy for the zookeeper pods
+## @param dnsPolicy Specifies the DNS policy for the Kafka pods
 ## DNS policies can be set on a per-Pod basis. Currently Kubernetes supports the following Pod-specific DNS policies.
 ## Available options: Default, ClusterFirst, ClusterFirstWithHostNet, None
 ## Ref: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy
@@ -490,47 +462,244 @@ dnsPolicy: ""
 ##       value: "2"
 ##     - name: edns0
 dnsConfig: {}
+
+## Default init Containers
+##
+defaultInitContainers:
+  ## 'volume-permissions' init container
+  ## Used to change the owner and group of the persistent volume(s) mountpoint(s) to 'runAsUser:fsGroup' on each node
+  ##
+  volumePermissions:
+    ## @param defaultInitContainers.volumePermissions.enabled Enable init container that changes the owner and group of the persistent volume
+    ##
+    enabled: false
+    ## @param defaultInitContainers.volumePermissions.image.registry [default: REGISTRY_NAME] "volume-permissions" init-containers' image registry
+    ## @param defaultInitContainers.volumePermissions.image.repository [default: REPOSITORY_NAME/os-shell] "volume-permissions" init-containers' image repository
+    ## @skip defaultInitContainers.volumePermissions.image.tag "volume-permissions" init-containers' image tag (immutable tags are recommended)
+    ## @param defaultInitContainers.volumePermissions.image.digest "volume-permissions" init-containers' image digest in the way sha256:aa.... Please note this parameter, if set, will override the tag
+    ## @param defaultInitContainers.volumePermissions.image.pullPolicy "volume-permissions" init-containers' image pull policy
+    ## @param defaultInitContainers.volumePermissions.image.pullSecrets "volume-permissions" init-containers' image pull secrets
+    ##
+    image:
+      registry: docker.io
+      repository: bitnami/os-shell
+      tag: 12-debian-12-r39
+      digest: ""
+      pullPolicy: IfNotPresent
+      ## Optionally specify an array of imagePullSecrets.
+      ## Secrets must be manually created in the namespace.
+      ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
+      ## Example:
+      ## pullSecrets:
+      ##   - myRegistryKeySecretName
+      ##
+      pullSecrets: []
+    ## Configure "volume-permissions" init-container Security Context
+    ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container
+    ## @param defaultInitContainers.volumePermissions.containerSecurityContext.enabled Enabled "volume-permissions" init-containers' Security Context
+    ## @param defaultInitContainers.volumePermissions.containerSecurityContext.seLinuxOptions [object,nullable] Set SELinux options in "volume-permissions" init-containers
+    ## @param defaultInitContainers.volumePermissions.containerSecurityContext.runAsUser Set runAsUser in "volume-permissions" init-containers' Security Context
+    ## @param defaultInitContainers.volumePermissions.containerSecurityContext.privileged Set privileged in "volume-permissions" init-containers' Security Context
+    ## @param defaultInitContainers.volumePermissions.containerSecurityContext.allowPrivilegeEscalation Set allowPrivilegeEscalation in "volume-permissions" init-containers' Security Context
+    ## @param defaultInitContainers.volumePermissions.containerSecurityContext.capabilities.add List of capabilities to be added in "volume-permissions" init-containers
+    ## @param defaultInitContainers.volumePermissions.containerSecurityContext.capabilities.drop List of capabilities to be dropped in "volume-permissions" init-containers
+    ## @param defaultInitContainers.volumePermissions.containerSecurityContext.seccompProfile.type Set seccomp profile in "volume-permissions" init-containers
+    ##
+    containerSecurityContext:
+      enabled: true
+      seLinuxOptions: {}
+      runAsUser: 0
+      privileged: false
+      allowPrivilegeEscalation: false
+      capabilities:
+        add: []
+        drop: ["ALL"]
+      seccompProfile:
+        type: "RuntimeDefault"
+    ## Kafka "volume-permissions" init container resource requests and limits
+    ## ref: http://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
+    ## @param defaultInitContainers.volumePermissions.resourcesPreset Set Kafka "volume-permissions" init container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if defaultInitContainers.volumePermissions.resources is set (defaultInitContainers.volumePermissions.resources is recommended for production).
+    ## More information: https://github.com/bitnami/charts/blob/main/bitnami/common/templates/_resources.tpl#L15
+    ##
+    resourcesPreset: "nano"
+    ## @param defaultInitContainers.volumePermissions.resources Set Kafka "volume-permissions" init container requests and limits for different resources like CPU or memory (essential for production workloads)
+    ## E.g:
+    ## resources:
+    ##   requests:
+    ##     cpu: 2
+    ##     memory: 512Mi
+    ##   limits:
+    ##     cpu: 3
+    ##     memory: 1024Mi
+    ##
+    resources: {}
+  ## Kafka "prepare-config" init container
+  ## Used to prepare the Kafka configuration files for main containers to use them
+  ##
+  prepareConfig:
+    ## Configure "prepare-config" init-container Security Context
+    ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container
+    ## @param defaultInitContainers.prepareConfig.containerSecurityContext.enabled Enabled "prepare-config" init-containers' Security Context
+    ## @param defaultInitContainers.prepareConfig.containerSecurityContext.seLinuxOptions [object,nullable] Set SELinux options in "prepare-config" init-containers
+    ## @param defaultInitContainers.prepareConfig.containerSecurityContext.runAsUser Set runAsUser in "prepare-config" init-containers' Security Context
+    ## @param defaultInitContainers.prepareConfig.containerSecurityContext.runAsGroup Set runAsGroup in "prepare-config" init-containers' Security Context
+    ## @param defaultInitContainers.prepareConfig.containerSecurityContext.runAsNonRoot Set runAsNonRoot in "prepare-config" init-containers' Security Context
+    ## @param defaultInitContainers.prepareConfig.containerSecurityContext.readOnlyRootFilesystem Set readOnlyRootFilesystem in "prepare-config" init-containers' Security Context
+    ## @param defaultInitContainers.prepareConfig.containerSecurityContext.privileged Set privileged in "prepare-config" init-containers' Security Context
+    ## @param defaultInitContainers.prepareConfig.containerSecurityContext.allowPrivilegeEscalation Set allowPrivilegeEscalation in "prepare-config" init-containers' Security Context
+    ## @param defaultInitContainers.prepareConfig.containerSecurityContext.capabilities.add List of capabilities to be added in "prepare-config" init-containers
+    ## @param defaultInitContainers.prepareConfig.containerSecurityContext.capabilities.drop List of capabilities to be dropped in "prepare-config" init-containers
+    ## @param defaultInitContainers.prepareConfig.containerSecurityContext.seccompProfile.type Set seccomp profile in "prepare-config" init-containers
+    ##
+    containerSecurityContext:
+      enabled: true
+      seLinuxOptions: {}
+      runAsUser: 1001
+      runAsGroup: 1001
+      runAsNonRoot: true
+      readOnlyRootFilesystem: true
+      privileged: false
+      allowPrivilegeEscalation: false
+      capabilities:
+        add: []
+        drop: ["ALL"]
+      seccompProfile:
+        type: "RuntimeDefault"
+    ## Kafka "prepare-config" init container resource requests and limits
+    ## ref: http://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
+    ## @param defaultInitContainers.prepareConfig.resourcesPreset Set Kafka "prepare-config" init container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if defaultInitContainers.prepareConfig.resources is set (defaultInitContainers.prepareConfig.resources is recommended for production).
+    ## More information: https://github.com/bitnami/charts/blob/main/bitnami/common/templates/_resources.tpl#L15
+    ##
+    resourcesPreset: "nano"
+    ## @param defaultInitContainers.prepareConfig.resources Set Kafka "prepare-config" init container requests and limits for different resources like CPU or memory (essential for production workloads)
+    ## E.g:
+    ## resources:
+    ##   requests:
+    ##     cpu: 2
+    ##     memory: 512Mi
+    ##   limits:
+    ##     cpu: 3
+    ##     memory: 1024Mi
+    ##
+    resources: {}
+    ## @param defaultInitContainers.prepareConfig.extraInit Additional content for the "prepare-config" init script, rendered as a template.
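+    ## e.g. (illustrative snippet; extra logic to run as part of the init script):
+    ##   extraInit: |
+    ##     echo "Running extra prepare-config steps"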
+    ##
+    extraInit: ""
+  ## 'auto-discovery' init container
+  ## Used to auto-detect LB IPs or node ports by querying the K8s API
+  ## Note: RBAC might be required
+  ##
+  autoDiscovery:
+    ## @param defaultInitContainers.autoDiscovery.enabled Enable init container that auto-detects external IPs/ports by querying the K8s API
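+    ## NOTE: requires RBAC so the init container can query the K8s API, e.g. deploy with:
+    ##   --set defaultInitContainers.autoDiscovery.enabled=true --set rbac.create=true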
+    ##
+    enabled: false
+    ## Bitnami Kubectl image
+    ## @param defaultInitContainers.autoDiscovery.image.registry [default: REGISTRY_NAME] "auto-discovery" init-containers' image registry
+    ## @param defaultInitContainers.autoDiscovery.image.repository [default: REPOSITORY_NAME/os-shell] "auto-discovery" init-containers' image repository
+    ## @skip defaultInitContainers.autoDiscovery.image.tag "auto-discovery" init-containers' image tag (immutable tags are recommended)
+    ## @param defaultInitContainers.autoDiscovery.image.digest "auto-discovery" init-containers' image digest in the way sha256:aa.... Please note this parameter, if set, will override the tag
+    ## @param defaultInitContainers.autoDiscovery.image.pullPolicy "auto-discovery" init-containers' image pull policy
+    ## @param defaultInitContainers.autoDiscovery.image.pullSecrets "auto-discovery" init-containers' image pull secrets
+    ##
+    image:
+      registry: docker.io
+      repository: bitnami/kubectl
+      tag: 1.32.2-debian-12-r3
+      digest: ""
+      ## Specify an imagePullPolicy
+      ## ref: https://kubernetes.io/docs/concepts/containers/images/#pre-pulled-images
+      ##
+      pullPolicy: IfNotPresent
+      ## Optionally specify an array of imagePullSecrets (secrets must be manually created in the namespace)
+      ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
+      ## e.g:
+      ## pullSecrets:
+      ##   - myRegistryKeySecretName
+      ##
+      pullSecrets: []
+    ## Configure "auto-discovery" init-container Security Context
+    ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container
+    ## @param defaultInitContainers.autoDiscovery.containerSecurityContext.enabled Enabled "auto-discovery" init-containers' Security Context
+    ## @param defaultInitContainers.autoDiscovery.containerSecurityContext.seLinuxOptions [object,nullable] Set SELinux options in "auto-discovery" init-containers
+    ## @param defaultInitContainers.autoDiscovery.containerSecurityContext.runAsUser Set runAsUser in "auto-discovery" init-containers' Security Context
+    ## @param defaultInitContainers.autoDiscovery.containerSecurityContext.runAsGroup Set runAsGroup in "auto-discovery" init-containers' Security Context
+    ## @param defaultInitContainers.autoDiscovery.containerSecurityContext.runAsNonRoot Set runAsNonRoot in "auto-discovery" init-containers' Security Context
+    ## @param defaultInitContainers.autoDiscovery.containerSecurityContext.readOnlyRootFilesystem Set readOnlyRootFilesystem in "auto-discovery" init-containers' Security Context
+    ## @param defaultInitContainers.autoDiscovery.containerSecurityContext.privileged Set privileged in "auto-discovery" init-containers' Security Context
+    ## @param defaultInitContainers.autoDiscovery.containerSecurityContext.allowPrivilegeEscalation Set allowPrivilegeEscalation in "auto-discovery" init-containers' Security Context
+    ## @param defaultInitContainers.autoDiscovery.containerSecurityContext.capabilities.add List of capabilities to be added in "auto-discovery" init-containers
+    ## @param defaultInitContainers.autoDiscovery.containerSecurityContext.capabilities.drop List of capabilities to be dropped in "auto-discovery" init-containers
+    ## @param defaultInitContainers.autoDiscovery.containerSecurityContext.seccompProfile.type Set seccomp profile in "auto-discovery" init-containers
+    ##
+    containerSecurityContext:
+      enabled: true
+      seLinuxOptions: {}
+      runAsUser: 1001
+      runAsGroup: 1001
+      runAsNonRoot: true
+      readOnlyRootFilesystem: true
+      privileged: false
+      allowPrivilegeEscalation: false
+      capabilities:
+        add: []
+        drop: ["ALL"]
+      seccompProfile:
+        type: "RuntimeDefault"
+    ## Kafka "auto-discovery" init container resource requests and limits
+    ## ref: http://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
+    ## @param defaultInitContainers.autoDiscovery.resourcesPreset Set Kafka "auto-discovery" init container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if defaultInitContainers.autoDiscovery.resources is set (defaultInitContainers.autoDiscovery.resources is recommended for production).
+    ## More information: https://github.com/bitnami/charts/blob/main/bitnami/common/templates/_resources.tpl#L15
+    ##
+    resourcesPreset: "nano"
+    ## @param defaultInitContainers.autoDiscovery.resources Set Kafka "auto-discovery" init container requests and limits for different resources like CPU or memory (essential for production workloads)
+    ## E.g:
+    ## resources:
+    ##   requests:
+    ##     cpu: 2
+    ##     memory: 512Mi
+    ##   limits:
+    ##     cpu: 3
+    ##     memory: 1024Mi
+    ##
+    resources: {}
+
 ## @section Controller-eligible statefulset parameters
 ##
 controller:
   ## @param controller.replicaCount Number of Kafka controller-eligible nodes
-  ## Ignore this section if running in Zookeeper mode.
   ##
   replicaCount: 3
   ## @param controller.controllerOnly If set to true, controller nodes will be deployed as dedicated controllers, instead of controller+broker processes.
   ##
   controllerOnly: false
+  ## @param controller.quorumBootstrapServers Override the Kafka controller quorum bootstrap servers of the Kafka KRaft cluster. If not set, it will be automatically configured to use all controller-eligible nodes.
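+  ## e.g., assuming a hypothetical release named "my-kafka" in namespace "default", using the default controller port (9093):
+  ##   quorumBootstrapServers: "my-kafka-controller-0.my-kafka-controller-headless.default.svc.cluster.local:9093,my-kafka-controller-1.my-kafka-controller-headless.default.svc.cluster.local:9093"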
+  ##
+  quorumBootstrapServers: ""
   ## @param controller.minId Minimal node.id values for controller-eligible nodes. Do not change after first initialization.
   ## Broker-only nodes increment their IDs starting at this minimal value.
   ## We recommend setting this value high enough, as IDs under this value will be used by controller-eligible nodes
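   ## e.g. with the default controller.replicaCount=3 and minId=0, controller-eligible nodes take IDs 0, 1 and 2,
   ## while broker-only nodes start counting from broker.minId (100 by default)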
   ##
   minId: 0
-  ## @param controller.zookeeperMigrationMode Set to true to deploy cluster controller quorum
-  ## This allows configuring both kraft and zookeeper modes simultaneously in order to perform the migration of the Kafka metadata.
-  ## Ref. https://docs.confluent.io/platform/current/installation/migrate-zk-kraft.html
-  ##
-  zookeeperMigrationMode: false
-  ## @param controller.config Configuration file for Kafka controller-eligible nodes, rendered as a template. Auto-generated based on chart values when not specified.
-  ## @param controller.existingConfigmap ConfigMap with Kafka Configuration for controller-eligible nodes.
+  ## @param controller.config Specify content for Kafka configuration for Kafka controller-eligible nodes (auto-generated based on other parameters otherwise)
   ## NOTE: This will override the configuration generated from chart values; please act carefully
-  ## If both are set, the existingConfigMap will be used.
+  ## Use simple key-value YAML format; the chart transforms it into properties format. e.g:
+  ##    process.roles: controller
+  ## ... will be transformed to:
+  ##    process.roles=controller
   ##
-  config: ""
-  existingConfigmap: ""
-  ## @param controller.extraConfig Additional configuration to be appended at the end of the generated Kafka controller-eligible nodes configuration file.
+  config: {}
+  ## @param controller.overrideConfiguration Kafka configuration override for Kafka controller-eligible nodes. Values defined here take precedence over the ones defined at `controller.config`
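+  ## e.g:
+  ##   overrideConfiguration:
+  ##     offsets.topic.replication.factor: 3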
   ##
-  extraConfig: ""
-  ## @param controller.extraConfigYaml Additional configuration in yaml format to be appended at the end of the generated Kafka controller-eligible nodes configuration file.
-  ## If keys of extraConfigYaml are duplicated here, the value from controller.extraConfigYaml is taken.
+  overrideConfiguration: {}
+  ## @param controller.existingConfigmap Name of an existing ConfigMap with the Kafka configuration for Kafka controller-eligible nodes
   ##
-  extraConfigYaml: {}
-  ## @param controller.secretConfig Additional configuration to be appended at the end of the generated Kafka controller-eligible nodes configuration file.
-  ## This value will be stored in a secret.
+  existingConfigmap: ""
+  ## @param controller.secretConfig Additional configuration to be appended at the end of the generated Kafka configuration for Kafka controller-eligible nodes (store in a secret)
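+  ## e.g. (illustrative; useful for sensitive settings such as SSL passwords):
+  ##   secretConfig: |
+  ##     ssl.keystore.password=<keystore-password>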
   ##
   secretConfig: ""
-  ## @param controller.existingSecretConfig Secret with additonal configuration that will be appended to the end of the generated Kafka controller-eligible nodes configuration file
+  ## @param controller.existingSecretConfig Secret with additional configuration that will be appended to the end of the generated Kafka configuration for Kafka controller-eligible nodes
   ## The key for the configuration should be: server-secret.properties
-  ## NOTE: This will override controller.secretConfig value
+  ## NOTE: This will override the secretConfig value
   ##
   existingSecretConfig: ""
   ## @param controller.heapOpts Kafka Java Heap size for controller-eligible nodes
@@ -615,14 +784,6 @@ controller:
   ## @param controller.lifecycleHooks lifecycleHooks for the Kafka container to automate configuration before or after startup
   ##
   lifecycleHooks: {}
-  ## Kafka init container resource requests and limits
-  ## ref: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
-  ## @param controller.initContainerResources.limits The resources limits for the init container
-  ## @param controller.initContainerResources.requests The requested resources for the init container
-  ##
-  initContainerResources:
-    limits: {}
-    requests: {}
   ## Kafka resource requests and limits
   ## ref: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
   ## @param controller.resourcesPreset Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if controller.resources is set (controller.resources is recommended for production).
@@ -851,6 +1012,9 @@ controller:
       ## @param controller.autoscaling.hpa.enabled Enable HPA for Kafka Controller
       ##
       enabled: false
+      ## @param controller.autoscaling.hpa.annotations Annotations for HPA resource
+      ##
+      annotations: {}
       ## @param controller.autoscaling.hpa.minReplicas Minimum number of Kafka Controller replicas
       ##
       minReplicas: ""
@@ -877,7 +1041,7 @@ controller:
   ## ref: https://kubernetes.io/docs/concepts/storage/persistent-volumes/
   ##
   persistence:
-    ## @param controller.persistence.enabled Enable Kafka data persistence using PVC, note that ZooKeeper persistence is unaffected
+    ## @param controller.persistence.enabled Enable Kafka data persistence using PVC
     ##
     enabled: true
     ## @param controller.persistence.existingClaim A manually managed Persistent Volume and Claim
@@ -917,7 +1081,7 @@ controller:
   ## Log Persistence parameters
   ##
   logPersistence:
-    ## @param controller.logPersistence.enabled Enable Kafka logs persistence using PVC, note that ZooKeeper persistence is unaffected
+    ## @param controller.logPersistence.enabled Enable Kafka logs persistence using PVC
     ##
     enabled: false
     ## @param controller.logPersistence.existingClaim A manually managed Persistent Volume and Claim
@@ -963,32 +1127,26 @@ broker:
   ##
   ##
   minId: 100
-  ## @param broker.zookeeperMigrationMode Set to true to deploy cluster controller quorum
-  ## This allows configuring both kraft and zookeeper modes simultaneously in order to perform the migration of the Kafka metadata.
-  ## Ref. https://docs.confluent.io/platform/current/installation/migrate-zk-kraft.html
-  ##
-  zookeeperMigrationMode: false
-  ## @param broker.config Configuration file for Kafka broker-only nodes, rendered as a template. Auto-generated based on chart values when not specified.
-  ## @param broker.existingConfigmap ConfigMap with Kafka Configuration for broker-only nodes.
+  ## @param broker.config Specify content for Kafka configuration for Kafka broker-only nodes (auto-generated based on other parameters otherwise)
   ## NOTE: This will override the configuration generated from chart values; please act carefully
-  ## If both are set, the existingConfigMap will be used.
+  ## Use simple key-value YAML format; the chart transforms it into properties format. e.g:
+  ##    process.roles: broker
+  ## ... will be transformed to:
+  ##    process.roles=broker
   ##
-  config: ""
-  existingConfigmap: ""
-  ## @param broker.extraConfig Additional configuration to be appended at the end of the generated Kafka broker-only nodes configuration file.
+  config: {}
+  ## @param broker.overrideConfiguration Kafka configuration override for Kafka broker-only nodes. Values defined here take precedence over the ones defined at `broker.config`
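+  ## e.g:
+  ##   overrideConfiguration:
+  ##     log.retention.hours: 168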
   ##
-  extraConfig: ""
-  ## @param broker.extraConfigYaml Additional configuration in yaml format to be appended at the end of the generated Kafka broker-only nodes configuration file.
-  ## If keys of extraConfigYaml are duplicated here, the value from broker.extraConfigYaml is taken.
+  overrideConfiguration: {}
+  ## @param broker.existingConfigmap Name of an existing ConfigMap with the Kafka configuration for Kafka broker-only nodes
   ##
-  extraConfigYaml: {}
-  ## @param broker.secretConfig Additional configuration to be appended at the end of the generated Kafka broker-only nodes configuration file.
-  ## This value will be stored in a secret.
+  existingConfigmap: ""
+  ## @param broker.secretConfig Additional configuration to be appended at the end of the generated Kafka configuration for Kafka broker-only nodes (store in a secret)
   ##
   secretConfig: ""
-  ## @param broker.existingSecretConfig Secret with additonal configuration that will be appended to the end of the generated Kafka broker-only nodes configuration file
+  ## @param broker.existingSecretConfig Secret with additional configuration that will be appended to the end of the generated Kafka configuration for Kafka broker-only nodes
   ## The key for the configuration should be: server-secret.properties
-  ## NOTE: This will override broker.secretConfig value
+  ## NOTE: This will override the secretConfig value
   ##
   existingSecretConfig: ""
   ## @param broker.heapOpts Kafka Java Heap size for broker-only nodes
@@ -1073,14 +1231,6 @@ broker:
   ## @param broker.lifecycleHooks lifecycleHooks for the Kafka container to automate configuration before or after startup
   ##
   lifecycleHooks: {}
-  ## Kafka init container resource requests and limits
-  ## ref: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
-  ## @param broker.initContainerResources.limits The resources limits for the container
-  ## @param broker.initContainerResources.requests The requested resources for the container
-  ##
-  initContainerResources:
-    limits: {}
-    requests: {}
   ## Kafka resource requests and limits
   ## ref: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
   ## @param broker.resourcesPreset Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if broker.resources is set (broker.resources is recommended for production).
@@ -1318,6 +1468,9 @@ broker:
       ## @param broker.autoscaling.hpa.enabled Enable HPA for Kafka Broker
       ##
       enabled: false
+      ## @param broker.autoscaling.hpa.annotations Annotations for HPA resource
+      ##
+      annotations: {}
       ## @param broker.autoscaling.hpa.minReplicas Minimum number of Kafka Broker replicas
       ##
       minReplicas: ""
@@ -1334,7 +1487,7 @@ broker:
   ## ref: https://kubernetes.io/docs/concepts/storage/persistent-volumes/
   ##
   persistence:
-    ## @param broker.persistence.enabled Enable Kafka data persistence using PVC, note that ZooKeeper persistence is unaffected
+    ## @param broker.persistence.enabled Enable Kafka data persistence using PVC
     ##
     enabled: true
     ## @param broker.persistence.existingClaim A manually managed Persistent Volume and Claim
@@ -1374,7 +1527,7 @@ broker:
   ## Log Persistence parameters
   ##
   logPersistence:
-    ## @param broker.logPersistence.enabled Enable Kafka logs persistence using PVC, note that ZooKeeper persistence is unaffected
+    ## @param broker.logPersistence.enabled Enable Kafka logs persistence using PVC
     ##
     enabled: false
     ## @param broker.logPersistence.existingClaim A manually managed Persistent Volume and Claim
@@ -1418,7 +1571,7 @@ service:
   ##
   type: ClusterIP
   ## @param service.ports.client Kafka svc port for client connections
-  ## @param service.ports.controller Kafka svc port for controller connections. It is used if "kraft.enabled: true"
+  ## @param service.ports.controller Kafka svc port for controller connections
   ## @param service.ports.interbroker Kafka svc port for inter-broker connections
   ## @param service.ports.external Kafka svc port for external connections
   ##
@@ -1508,86 +1661,6 @@ externalAccess:
   ## @param externalAccess.enabled Enable Kubernetes external cluster access to Kafka brokers
   ##
   enabled: false
-  ## External IPs auto-discovery configuration
-  ## An init container is used to auto-detect LB IPs or node ports by querying the K8s API
-  ## Note: RBAC might be required
-  ##
-  autoDiscovery:
-    ## @param externalAccess.autoDiscovery.enabled Enable using an init container to auto-detect external IPs/ports by querying the K8s API
-    ##
-    enabled: false
-    ## Bitnami Kubectl image
-    ## ref: https://hub.docker.com/r/bitnami/kubectl/tags/
-    ## @param externalAccess.autoDiscovery.image.registry [default: REGISTRY_NAME] Init container auto-discovery image registry
-    ## @param externalAccess.autoDiscovery.image.repository [default: REPOSITORY_NAME/kubectl] Init container auto-discovery image repository
-    ## @skip externalAccess.autoDiscovery.image.tag Init container auto-discovery image tag (immutable tags are recommended)
-    ## @param externalAccess.autoDiscovery.image.digest Kubectl image digest in the way sha256:aa.... Please note this parameter, if set, will override the tag
-    ## @param externalAccess.autoDiscovery.image.pullPolicy Init container auto-discovery image pull policy
-    ## @param externalAccess.autoDiscovery.image.pullSecrets Init container auto-discovery image pull secrets
-    ##
-    image:
-      registry: docker.io
-      repository: bitnami/kubectl
-      tag: 1.32.2-debian-12-r3
-      digest: ""
-      ## Specify a imagePullPolicy
-      ## ref: https://kubernetes.io/docs/concepts/containers/images/#pre-pulled-images
-      ##
-      pullPolicy: IfNotPresent
-      ## Optionally specify an array of imagePullSecrets (secrets must be manually created in the namespace)
-      ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
-      ## e.g:
-      ## pullSecrets:
-      ##   - myRegistryKeySecretName
-      ##
-      pullSecrets: []
-    ## Init Container resource requests and limits
-    ## ref: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
-    ## @param externalAccess.autoDiscovery.resourcesPreset Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if externalAccess.autoDiscovery.resources is set (externalAccess.autoDiscovery.resources is recommended for production).
-    ## More information: https://github.com/bitnami/charts/blob/main/bitnami/common/templates/_resources.tpl#L15
-    ##
-    resourcesPreset: "nano"
-    ## @param externalAccess.autoDiscovery.resources Set container requests and limits for different resources like CPU or memory (essential for production workloads)
-    ## Example:
-    ## resources:
-    ##   requests:
-    ##     cpu: 2
-    ##     memory: 512Mi
-    ##   limits:
-    ##     cpu: 3
-    ##     memory: 1024Mi
-    ##
-    resources: {}
-    ## Kafka provisioning containers' Security Context
-    ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container
-    ## @param externalAccess.autoDiscovery.containerSecurityContext.enabled Enable Kafka auto-discovery containers' Security Context
-    ## @param externalAccess.autoDiscovery.containerSecurityContext.seLinuxOptions [object,nullable] Set SELinux options in container
-    ## @param externalAccess.autoDiscovery.containerSecurityContext.runAsUser Set containers' Security Context runAsUser
-    ## @param externalAccess.autoDiscovery.containerSecurityContext.runAsGroup Set containers' Security Context runAsGroup
-    ## @param externalAccess.autoDiscovery.containerSecurityContext.runAsNonRoot Set Kafka auto-discovery containers' Security Context runAsNonRoot
-    ## @param externalAccess.autoDiscovery.containerSecurityContext.allowPrivilegeEscalation Set Kafka auto-discovery containers' Security Context allowPrivilegeEscalation
-    ## @param externalAccess.autoDiscovery.containerSecurityContext.readOnlyRootFilesystem Set Kafka auto-discovery containers' Security Context readOnlyRootFilesystem
-    ## @param externalAccess.autoDiscovery.containerSecurityContext.capabilities.drop Set Kafka auto-discovery containers' Security Context capabilities to be dropped
-    ## @param externalAccess.autoDiscovery.containerSecurityContext.seccompProfile.type Set Kafka auto-discovery seccomp profile type
-    ## e.g:
-    ##   containerSecurityContext:
-    ##     enabled: true
-    ##     capabilities:
-    ##       drop: ["NET_RAW"]
-    ##     readOnlyRootFilesystem: true
-    ##
-    containerSecurityContext:
-      enabled: true
-      seLinuxOptions: {}
-      runAsUser: 1001
-      runAsGroup: 1001
-      runAsNonRoot: true
-      allowPrivilegeEscalation: false
-      readOnlyRootFilesystem: true
-      capabilities:
-        drop: ["ALL"]
-      seccompProfile:
-        type: "RuntimeDefault"
   ## Service settings
   controller:
   ## @param externalAccess.controller.forceExpose If set to true, force exposing controller-eligible nodes even when they are configured as controller-only nodes
@@ -1784,8 +1857,8 @@ networkPolicy:
   ##
   enabled: true
   ## @param networkPolicy.allowExternal Don't require client label for connections
-  ## When set to false, only pods with the correct client label will have network access to the port Redis&reg; is
-  ## listening on. When true, zookeeper accept connections from any source (with the correct destination port).
+  ## When set to false, only pods with the correct client label will have network access to the port Kafka is
+  ## listening on. When true, Kafka accepts connections from any source (with the correct destination port).
   ##
   allowExternal: true
   ## @param networkPolicy.allowExternalEgress Allow the pod to access any range of ports and all destinations.
@@ -1838,65 +1911,8 @@ networkPolicy:
   ##
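   ## e.g:
   ##   ingressNSMatchLabels:
   ##     kubernetes.io/metadata.name: my-client-namespace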
   ingressNSMatchLabels: {}
   ingressNSPodMatchLabels: {}
-## @section Volume Permissions parameters
-##
 
-## Init containers parameters:
-## volumePermissions: Change the owner and group of the persistent volume(s) mountpoint(s) to 'runAsUser:fsGroup' on each node
-##
-volumePermissions:
-  ## @param volumePermissions.enabled Enable init container that changes the owner and group of the persistent volume
-  ##
-  enabled: false
-  ## @param volumePermissions.image.registry [default: REGISTRY_NAME] Init container volume-permissions image registry
-  ## @param volumePermissions.image.repository [default: REPOSITORY_NAME/os-shell] Init container volume-permissions image repository
-  ## @skip volumePermissions.image.tag Init container volume-permissions image tag (immutable tags are recommended)
-  ## @param volumePermissions.image.digest Init container volume-permissions image digest in the way sha256:aa.... Please note this parameter, if set, will override the tag
-  ## @param volumePermissions.image.pullPolicy Init container volume-permissions image pull policy
-  ## @param volumePermissions.image.pullSecrets Init container volume-permissions image pull secrets
-  ##
-  image:
-    registry: docker.io
-    repository: bitnami/os-shell
-    tag: 12-debian-12-r39
-    digest: ""
-    pullPolicy: IfNotPresent
-    ## Optionally specify an array of imagePullSecrets.
-    ## Secrets must be manually created in the namespace.
-    ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
-    ## Example:
-    ## pullSecrets:
-    ##   - myRegistryKeySecretName
-    ##
-    pullSecrets: []
-  ## Init container resource requests and limits
-  ## ref: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
-  ## @param volumePermissions.resourcesPreset Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if volumePermissions.resources is set (volumePermissions.resources is recommended for production).
-  ## More information: https://github.com/bitnami/charts/blob/main/bitnami/common/templates/_resources.tpl#L15
-  ##
-  resourcesPreset: "nano"
-  ## @param volumePermissions.resources Set container requests and limits for different resources like CPU or memory (essential for production workloads)
-  ## Example:
-  ## resources:
-  ##   requests:
-  ##     cpu: 2
-  ##     memory: 512Mi
-  ##   limits:
-  ##     cpu: 3
-  ##     memory: 1024Mi
-  ##
-  resources: {}
-  ## Init container' Security Context
-  ## Note: the chown of the data folder is done to containerSecurityContext.runAsUser
-  ## and not the below volumePermissions.containerSecurityContext.runAsUser
-  ## @param volumePermissions.containerSecurityContext.seLinuxOptions [object,nullable] Set SELinux options in container
-  ## @param volumePermissions.containerSecurityContext.runAsUser User ID for the init container
-  ##
-  containerSecurityContext:
-    seLinuxOptions: {}
-    runAsUser: 0
 ## @section Other Parameters
-##
 
 ## ServiceAccount for Kafka
 ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
@@ -1925,8 +1941,8 @@ rbac:
   ## that allows Kafka pods to query the K8s API
   ##
   create: false
+
 ## @section Metrics parameters
-##
 
 ## Prometheus Exporters / Metrics
 ##
@@ -2435,77 +2451,3 @@ provisioning:
   waitForKafka: true
   ## @param provisioning.useHelmHooks Flag to indicate usage of helm hooks
   useHelmHooks: true
-## @section KRaft chart parameters
-
-## KRaft configuration
-## Kafka mode without Zookeeper. Kafka nodes can work as controllers in this mode.
-##
-kraft:
-  ## @param kraft.enabled Switch to enable or disable the KRaft mode for Kafka
-  ##
-  enabled: true
-  ## @param kraft.existingClusterIdSecret Name of the secret containing the cluster ID for the Kafka KRaft cluster. This is incompatible with the clusterId parameter. If both are set, the existingClusterIdSecret will be used
-  existingClusterIdSecret: ""
-  ## @param kraft.clusterId Kafka Kraft cluster ID. If not set, a random cluster ID will be generated the first time Kraft is initialized.
-  ## NOTE: Already initialized Kafka nodes will use cluster ID stored in their persisted storage.
-  ## If reusing existing PVCs or migrating from Zookeeper mode, make sure the cluster ID is set matching the stored cluster ID, otherwise new nodes will fail to join the cluster.
-  ## In case the cluster ID stored in the secret does not match the value stored in /bitnami/kafka/data/meta.properties, remove the secret and upgrade the chart setting the correct value.
-  ##
-  clusterId: ""
-  ## @param kraft.controllerQuorumVoters Override the Kafka controller quorum voters of the Kafka Kraft cluster. If not set, it will be automatically configured to use all controller-elegible nodes.
-  ##
-  controllerQuorumVoters: ""
-## @section ZooKeeper chart parameters
-##
-## @param zookeeperChrootPath Path which puts data under some path in the global ZooKeeper namespace
-## ref: https://kafka.apache.org/documentation/#brokerconfigs_zookeeper.connect
-##
-zookeeperChrootPath: ""
-## ZooKeeper chart configuration
-## https://github.com/bitnami/charts/blob/main/bitnami/zookeeper/values.yaml
-##
-zookeeper:
-  ## @param zookeeper.enabled Switch to enable or disable the ZooKeeper helm chart. Must be false if you use KRaft mode.
-  ##
-  enabled: false
-  ## @param zookeeper.replicaCount Number of ZooKeeper nodes
-  ##
-  replicaCount: 1
-  ## ZooKeeper authentication
-  ##
-  auth:
-    client:
-      ## @param zookeeper.auth.client.enabled Enable ZooKeeper auth
-      ##
-      enabled: false
-      ## @param zookeeper.auth.client.clientUser User that will use ZooKeeper client (zkCli.sh) to authenticate. Must exist in the serverUsers comma-separated list.
-      ##
-      clientUser: ""
-      ## @param zookeeper.auth.client.clientPassword Password that will use ZooKeeper client (zkCli.sh) to authenticate. Must exist in the serverPasswords comma-separated list.
-      ##
-      clientPassword: ""
-      ## @param zookeeper.auth.client.serverUsers Comma, semicolon or whitespace separated list of user to be created. Specify them as a string, for example: "user1,user2,admin"
-      ##
-      serverUsers: ""
-      ## @param zookeeper.auth.client.serverPasswords Comma, semicolon or whitespace separated list of passwords to assign to users when created. Specify them as a string, for example: "pass4user1, pass4user2, pass4admin"
-      ##
-      serverPasswords: ""
-  ## ZooKeeper Persistence parameters
-  ## ref: https://kubernetes.io/docs/concepts/storage/persistent-volumes/
-  ## @param zookeeper.persistence.enabled Enable persistence on ZooKeeper using PVC(s)
-  ## @param zookeeper.persistence.storageClass Persistent Volume storage class
-  ## @param zookeeper.persistence.accessModes Persistent Volume access modes
-  ## @param zookeeper.persistence.size Persistent Volume size
-  ##
-  persistence:
-    enabled: true
-    storageClass: ""
-    accessModes:
-      - ReadWriteOnce
-    size: 8Gi
-## External Zookeeper Configuration
-##
-externalZookeeper:
-  ## @param externalZookeeper.servers List of external zookeeper servers to use. Typically used in combination with 'zookeeperChrootPath'. Must be empty if you use KRaft mode.
-  ##
-  servers: []

Some files were not shown because too many files changed in this diff