Version: 0.8

Deploying with Helm Charts

This page provides instructions for deploying a Fluss cluster on Kubernetes using Helm charts. The chart creates a distributed streaming storage system with CoordinatorServer and TabletServer components.

Prerequisites

Before installing the Fluss Helm chart, ensure you have:

note

A Fluss cluster deployment requires a running ZooKeeper ensemble. To keep deployments flexible and allow reuse of existing infrastructure, the Fluss Helm chart does not bundle a ZooKeeper cluster. If you don't already have ZooKeeper running, the installation steps below show how to deploy one using Bitnami's Helm chart.

Supported Versions

| Component | Minimum Version | Recommended Version |
| --- | --- | --- |
| Kubernetes | v1.19+ | v1.25+ |
| Helm | v3.8.0+ | v3.18.6+ |
| ZooKeeper | v3.6+ | v3.8+ |
| Apache Fluss (Container Image) | 0.8.0-incubating | 0.8.0-incubating |
| Minikube (Local Development) | v1.25+ | v1.32+ |
| Docker (Local Development) | v20.10+ | v24.0+ |
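
You can quickly check your installed tool versions against this table (client-side checks only; the exact output format varies by version):

# Check local tool versions
kubectl version --client
helm version
minikube version
docker --version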

Installation

Running Fluss locally with Minikube

For local testing and development, you can deploy Fluss on Minikube. This is ideal for development, testing, and learning purposes.

Prerequisites

  • Docker container runtime
  • At least 4GB RAM available for Minikube
  • At least 2 CPU cores available

Start Minikube

# Start Minikube with the recommended resources for Fluss (2 CPUs, 4GB RAM)
minikube start --cpus=2 --memory=4096

# Verify cluster is ready
kubectl cluster-info

Configure Docker Environment (Optional)

To build images directly inside Minikube, configure your Docker CLI to use Minikube's internal Docker daemon:

# Configure shell to use Minikube's Docker daemon
eval $(minikube docker-env)

To build custom images, refer to Custom Container Images below.

Installing the chart on a cluster

This installation process works both for a distributed Kubernetes cluster and for a local Minikube setup.

Step 1: Deploy ZooKeeper (optional)

If you already have a ZooKeeper cluster, you can skip this step. Otherwise, deploy one with Bitnami's chart or your own manifests. Example with Bitnami's chart:

# Add Bitnami repository
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update

# Deploy ZooKeeper
helm install zk bitnami/zookeeper
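
Before deploying Fluss, wait for ZooKeeper to become ready. With Bitnami's chart the pods should carry the app.kubernetes.io/name=zookeeper label (an assumption based on Bitnami's labeling conventions; verify with kubectl get pods --show-labels):

# Wait for the ZooKeeper pod(s) to become Ready
kubectl wait --for=condition=ready pod -l app.kubernetes.io/name=zookeeper --timeout=300s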

Step 2: Deploy Fluss

Install from Helm repo

helm repo add fluss https://downloads.apache.org/incubator/fluss/helm-chart
helm repo update
helm install fluss fluss/fluss

Install from Local Chart

helm install fluss ./helm

Install with Custom Values

You can customize the installation by providing your own values.yaml file or setting individual parameters via the --set flag. Using a custom values file:

helm install fluss ./helm -f my-values.yaml

Or, for example, to change the ZooKeeper address via the --set flag:

helm install fluss ./helm \
  --set configurationOverrides.zookeeper.address=<my-zk-cluster>:2181
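
For reference, a my-values.yaml combining several of the parameters documented below might look like this (a sketch; the values are illustrative, not tuned recommendations):

# my-values.yaml -- illustrative overrides for the Fluss chart
image:
  tag: 0.8.0-incubating
persistence:
  enabled: true
  size: 10Gi
configurationOverrides:
  zookeeper.address: "zk-zookeeper.default.svc.cluster.local:2181"
  default.replication.factor: 3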

Step 3: Verify Installation

# Check pod status
kubectl get pods -l app.kubernetes.io/name=fluss

# Check services
kubectl get svc -l app.kubernetes.io/name=fluss

# View logs
kubectl logs -l app.kubernetes.io/component=coordinator
kubectl logs -l app.kubernetes.io/component=tablet

Cleanup

# Uninstall Fluss
helm uninstall fluss

# Uninstall ZooKeeper
helm uninstall zk

# Delete PVCs
kubectl delete pvc -l app.kubernetes.io/name=fluss

# Stop Minikube
minikube stop

# Delete Minikube cluster
minikube delete

Architecture Overview

The Fluss Helm chart deploys the following Kubernetes resources:

Core Components

  • CoordinatorServer: 1x StatefulSet with a Headless Service for cluster coordination
  • TabletServer: 3x StatefulSet with a Headless Service for data storage and processing
  • ConfigMap: Configuration management for server.yaml settings
  • Services: Headless services providing stable pod DNS names

Optional Components

  • PersistentVolumes: Data persistence when persistence.enabled=true
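
Because each server runs as a StatefulSet behind a headless service, every pod gets a stable DNS name of the form <pod-name>.<service-name>.<namespace>.svc.cluster.local. As a quick sanity check you can resolve one of these names from a throwaway pod (the coordinator-server service name and the default namespace are assumptions here; check kubectl get svc for the actual names):

# Resolve a coordinator pod's stable DNS name from inside the cluster
kubectl run dns-test --rm -it --image=busybox:1.36 --restart=Never -- \
  nslookup coordinator-server-0.coordinator-server.default.svc.cluster.local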

Configuration Parameters

The following table lists the configurable parameters of the Fluss chart and their default values.

Global Parameters

| Parameter | Description | Default |
| --- | --- | --- |
| nameOverride | Override the name of the chart | "" |
| fullnameOverride | Override the full name of the resources | "" |

Image Parameters

| Parameter | Description | Default |
| --- | --- | --- |
| image.registry | Container image registry | "" |
| image.repository | Container image repository | fluss |
| image.tag | Container image tag | 0.8.0-incubating |
| image.pullPolicy | Container image pull policy | IfNotPresent |
| image.pullSecrets | Container image pull secrets | [] |

Application Configuration

| Parameter | Description | Default |
| --- | --- | --- |
| appConfig.internalPort | Internal communication port | 9123 |
| appConfig.externalPort | External client port | 9124 |

Fluss Configuration Overrides

| Parameter | Description | Default |
| --- | --- | --- |
| configurationOverrides.default.bucket.number | Default number of buckets for tables | 3 |
| configurationOverrides.default.replication.factor | Default replication factor | 3 |
| configurationOverrides.zookeeper.path.root | ZooKeeper root path for Fluss | /fluss |
| configurationOverrides.zookeeper.address | ZooKeeper ensemble address | zk-zookeeper.{{ .Release.Namespace }}.svc.cluster.local:2181 |
| configurationOverrides.remote.data.dir | Remote data directory for snapshots | /tmp/fluss/remote-data |
| configurationOverrides.data.dir | Local data directory | /tmp/fluss/data |
| configurationOverrides.internal.listener.name | Internal listener name | INTERNAL |

Persistence Parameters

| Parameter | Description | Default |
| --- | --- | --- |
| persistence.enabled | Enable persistent volume claims | false |
| persistence.size | Persistent volume size | 1Gi |
| persistence.storageClass | Storage class name | nil (uses the cluster default) |
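
For example, to enable persistence with an explicit storage class (the class name standard is illustrative; use one that exists in your cluster):

persistence:
  enabled: true
  size: 10Gi
  storageClass: standard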

Resource Parameters

| Parameter | Description | Default |
| --- | --- | --- |
| resources.coordinatorServer.requests.cpu | CPU requests for the coordinator | Not set |
| resources.coordinatorServer.requests.memory | Memory requests for the coordinator | Not set |
| resources.coordinatorServer.limits.cpu | CPU limits for the coordinator | Not set |
| resources.coordinatorServer.limits.memory | Memory limits for the coordinator | Not set |
| resources.tabletServer.requests.cpu | CPU requests for tablet servers | Not set |
| resources.tabletServer.requests.memory | Memory requests for tablet servers | Not set |
| resources.tabletServer.limits.cpu | CPU limits for tablet servers | Not set |
| resources.tabletServer.limits.memory | Memory limits for tablet servers | Not set |
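
Since none of these are set by default, production deployments should pin requests and limits explicitly. A sketch using the parameter names above (the sizes are illustrative, not tuned recommendations):

resources:
  coordinatorServer:
    requests:
      cpu: 500m
      memory: 1Gi
    limits:
      cpu: "1"
      memory: 2Gi
  tabletServer:
    requests:
      cpu: "1"
      memory: 2Gi
    limits:
      cpu: "2"
      memory: 4Gi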

Advanced Configuration

Custom ZooKeeper Configuration

For external ZooKeeper clusters:

configurationOverrides:
  zookeeper.address: "zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181"
  zookeeper.path.root: "/my-fluss-cluster"

Network Configuration

The chart automatically configures listeners for internal cluster communication and external client access:

  • Internal Port (9123): Used for inter-service communication within the cluster
  • External Port (9124): Used for client connections

Custom listener configuration:

appConfig:
  internalPort: 9123
  externalPort: 9124

configurationOverrides:
  bind.listeners: "INTERNAL://0.0.0.0:9123,CLIENT://0.0.0.0:9124"
  advertised.listeners: "CLIENT://my-cluster.example.com:9124"

Storage Configuration

Configure different storage backends:

configurationOverrides:
  data.dir: "/data/fluss"
  remote.data.dir: "s3://my-bucket/fluss-data"

Upgrading

Upgrade the Chart

# Upgrade to a newer chart version
helm upgrade fluss ./helm

# Upgrade with new configuration
helm upgrade fluss ./helm -f values-new.yaml
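
If an upgrade misbehaves, Helm's standard history and rollback commands apply:

# Inspect release history
helm history fluss

# Roll back to a previous revision (revision number is illustrative)
helm rollback fluss 1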

Rolling Updates

The StatefulSets support rolling updates. When you update the configuration, pods will be restarted one by one to maintain availability.
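
You can watch an update roll through each StatefulSet (the StatefulSet names here are assumed to match the pod names used elsewhere on this page):

# Watch the rolling update progress
kubectl rollout status statefulset/coordinator-server
kubectl rollout status statefulset/tablet-server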

Custom Container Images

Building Custom Images

To build and use custom Fluss images:

  1. Build the project with Maven:

mvn clean package -DskipTests

  2. Build the Docker image:

# Copy build artifacts
cp -r build-target/* docker/fluss/build-target

# Build image
cd docker
docker build -t my-registry/fluss:custom-tag .

  3. Use the image in your Helm values:

image:
  registry: my-registry
  repository: fluss
  tag: custom-tag
  pullPolicy: Always
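
If you are testing on Minikube and did not build against Minikube's Docker daemon (see Configure Docker Environment above), you can copy a locally built image into the cluster:

# Make the locally built image available inside Minikube
minikube image load my-registry/fluss:custom-tag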

Monitoring and Observability

Health Checks

The chart includes liveness and readiness probes:

livenessProbe:
  tcpSocket:
    port: 9124
  initialDelaySeconds: 10
  periodSeconds: 3
  failureThreshold: 100

readinessProbe:
  tcpSocket:
    port: 9124
  initialDelaySeconds: 10
  periodSeconds: 3
  failureThreshold: 100

Logs

Access logs from different components:

# Coordinator logs
kubectl logs -l app.kubernetes.io/component=coordinator -f

# Tablet server logs
kubectl logs -l app.kubernetes.io/component=tablet -f

# Specific pod logs
kubectl logs coordinator-server-0 -f
kubectl logs tablet-server-0 -f

Troubleshooting

Common Issues

Pod Startup Issues

Symptoms: Pods stuck in Pending or CrashLoopBackOff state

Solutions:

# Check pod events
kubectl describe pod <pod-name>

# Check resource availability
kubectl describe nodes

# Verify ZooKeeper connectivity
kubectl exec -it <fluss-pod> -- nc -zv <zookeeper-host> 2181

Image Pull Errors

Symptoms: ImagePullBackOff or ErrImagePull

Solutions:

  • Verify image repository and tag exist
  • Check pull secrets configuration
  • Ensure network connectivity to registry

Connection Issues

Symptoms: Clients cannot connect to Fluss cluster

Solutions:

# Check service endpoints
kubectl get endpoints

# Test network connectivity
kubectl exec -it <client-pod> -- nc -zv <fluss-service> 9124

# Verify DNS resolution
kubectl exec -it <client-pod> -- nslookup <fluss-service>

Debug Commands

# Get all resources
kubectl get all -l app.kubernetes.io/name=fluss

# Check configuration
kubectl get configmap fluss-conf-file -o yaml

# Get detailed pod information
kubectl get pods -o wide -l app.kubernetes.io/name=fluss