Mach5 Search on a Local Kubernetes Cluster
This guide explains how to deploy mach5-search on a local Kubernetes cluster and make it usable after the Helm install completes.
Cluster provisioning is out of scope. Use one of the local cluster options below, or any other option you are comfortable with, then follow the Mach5-specific steps in this guide:
- `k3s` - Lightweight Kubernetes that works well on a single machine or small lab server.
- `k3d` - Runs `k3s` inside Docker, which is convenient if Docker is already your local runtime.
- `microk8s` - A snap-based local cluster with useful built-in add-ons.
- `minikube` - A widely used local Kubernetes option with multiple drivers and good documentation.
- `kind` - Kubernetes in Docker, useful for fast disposable clusters and CI-like local testing.
1. Prerequisites
Before installing Mach5, make sure you have:
- A working local Kubernetes cluster
- `kubectl`
- `helm`
- Access to pull Mach5 images from the private registry
2. Required Node Labels
Mach5 workloads need node labels so the scheduler can place pods correctly.
Label every node that should be eligible for Mach5 workloads:
kubectl label node <node-name> \
  mach5-compactor-role=true \
  mach5-fdb-role=true \
  mach5-ingestor-role=true \
  mach5-main-role=true \
  mach5-warehouse-head-role=true \
  mach5-warehouse-worker-role=true
If your local cluster only has one node, label that node with all of the above.
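To confirm the labels were applied, you can list the nodes with each role label shown as a column. This is a quick sanity check; the label keys are exactly the ones from the command above:

```shell
# Each -L flag adds a column for that label; a node eligible for a role
# shows "true" in the corresponding column.
kubectl get nodes \
  -L mach5-compactor-role -L mach5-fdb-role -L mach5-ingestor-role \
  -L mach5-main-role -L mach5-warehouse-head-role -L mach5-warehouse-worker-role
```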
3. Image Pull Access
mach5-search uses private Mach5 images. Your cluster must be able to pull them before the pods can start.
Create or configure the image pull secret using credentials from your Mach5 administrator, then reference that secret in the chart values.
Example:
kubectl create namespace mach5
kubectl -n mach5 create secret docker-registry mach5-image-pull-key \
  --docker-server=us-central1-docker.pkg.dev \
  --docker-username=_json_key_base64 \
  --docker-password='<base64-json-key>'
If you are using a local helper flow, the repo also supports generating this secret from a Docker config file. The important part is that the cluster can authenticate to the private registry before the deployment starts.
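The `--docker-password` value for the `_json_key_base64` username is the service-account JSON key encoded as a single base64 line. A minimal sketch of producing it, assuming GNU coreutils (`-w0` disables line wrapping) and a placeholder key file:

```shell
# Stand-in key file for illustration only; use the real JSON key from
# your Mach5 administrator instead.
printf '{"type": "service_account"}' > /tmp/mach5-sa-key.json

# -w0 emits one unwrapped line, the format the --docker-password flag expects.
base64 -w0 /tmp/mach5-sa-key.json
```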
Use the existing secret in values.yaml instead of embedding registry credentials:
mach5ImagePullSecret:
  createSecretResources: false
  dockerconfigjson: "" # not used when referencing an existing secret
Then ensure the chart is configured to use the pre-created secret name expected by your cluster setup.
4. Install backing charts
Install PostgreSQL, MinIO, and DynamoDB from cloud-deployment-scripts using the charts in cloud-deployment-scripts/helm-charts/.
Keep these charts in namespaces separate from mach5. A simple layout is:
- `mach5` for `mach5-search`
- `storage` for PostgreSQL and MinIO
- `dynamodb` for DynamoDB
Assuming you have cloned that repo locally, install PostgreSQL first and make sure you set the correct storage class:
helm upgrade --install postgresql ./cloud-deployment-scripts/helm-charts/postgresql \
  --namespace storage \
  --create-namespace \
  --set pvc.storageclass=<your-storage-class> \
  --set password=<postgres-password>
Install MinIO next:
helm upgrade --install minio ./cloud-deployment-scripts/helm-charts/minio \
  --namespace storage \
  --create-namespace
MinIO uses emptyDir, so its data is lost if the pod is deleted.
Install DynamoDB:
helm upgrade --install dynamodb ./cloud-deployment-scripts/helm-charts/dynamodb \
  --namespace dynamodb \
  --create-namespace
Use the service DNS names from those namespaces when configuring Mach5:
- PostgreSQL: `postgresdb.storage.svc.cluster.local`
- MinIO: `minio.storage.svc.cluster.local`
- DynamoDB: `dynamodb.dynamodb.svc.cluster.local`
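These names follow the standard in-cluster DNS pattern `<service>.<namespace>.svc.cluster.local`. A small sketch that derives all three endpoints from the service and namespace pairs used in the installs above:

```shell
# service:namespace pairs from the backing-chart installs
for svc_ns in postgresdb:storage minio:storage dynamodb:dynamodb; do
  svc=${svc_ns%%:*}   # part before the colon
  ns=${svc_ns##*:}    # part after the colon
  echo "${svc}.${ns}.svc.cluster.local"
done
```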
5. Prepare
Create a local override file, for example local-values.yaml, and keep it focused on local runtime details.
Minimum example:
force-upgrade: "yes-force-upgrade"
license:
  createSecretResources: false
  name: mach5-license
  metered: false
mach5ImagePullSecret:
  createSecretResources: false
  dockerconfigjson: ""
metadatadb:
  name: postgres
  host: postgresdb.storage.svc.cluster.local
  port: "5432"
  sslmode: disable
  user: postgres
  credentials:
    secretName: postgres-secret
    secretKey: PGPASSWORD
s3EndpointUrl:
  enabled: true
  awsAccessKeyId: <minio-access-key>
  awsDefaultRegion: us-east-1
  secret:
    create: false
    name: s3-credentials
    key: secret-key
Notes:
- `license` must point at an already-created secret named by `license.name`. That secret must contain `.licensemode` and `.licensetoken`.
- `metadatadb.credentials.secretName` must point at an already-created secret with the PostgreSQL password key.
- `s3EndpointUrl.secret.name` and `s3EndpointUrl.secret.key` must match an already-created secret holding the MinIO secret key.
- If you are pulling the chart images from a private registry, the `mach5ImagePullSecret` secret must already exist and contain the Docker config JSON.
- Keep node assignment overrides only if you need to constrain additional workloads; the required node labels are usually enough for a single-node local cluster.
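The pre-created secrets referenced above can be sketched with `kubectl create secret generic`. The secret names and keys below match the override file in this guide; every `<...>` value is a placeholder you must replace with real credentials:

```shell
# License secret: must hold the .licensemode and .licensetoken keys.
kubectl -n mach5 create secret generic mach5-license \
  --from-literal=.licensemode='<license-mode>' \
  --from-literal=.licensetoken='<license-token>'

# PostgreSQL password under the key the chart reads (PGPASSWORD).
kubectl -n mach5 create secret generic postgres-secret \
  --from-literal=PGPASSWORD='<postgres-password>'

# MinIO secret key under the key named in s3EndpointUrl.secret.key.
kubectl -n mach5 create secret generic s3-credentials \
  --from-literal=secret-key='<minio-secret-key>'
```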
6. Install
Create the namespace if it does not exist:
kubectl create namespace mach5
Install the chart from the repo source:
helm upgrade --install m5s ./mach5-search-<version>.tgz \
  --namespace mach5 \
  --create-namespace \
  -f local-values.yaml \
  --wait
If you are installing from a packaged chart artifact or OCI registry, use the equivalent helm upgrade --install form for that source instead.
7. License Token Setup
Mach5 will not be operational until a valid license token is applied.
Follow the license workflow in: License Token Setup Guide
In short:
- Open the Mach5 admin UI after the chart is running.
- Go to the License page and copy the Deployment ID.
- Send the Deployment ID to the Mach5 team to request a license token.
- Update the existing Kubernetes secret referenced by `license.name` so it contains the new `.licensetoken` value.
- Run `helm upgrade` again so the deployment picks up the updated secret.
If the token was not already set at install time, it must be applied after deployment. This is a required step, not an optional warning.
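One way to update the existing license secret in place is a merge patch on its `stringData` (assuming the secret name `mach5-license` from the override file; the token value is a placeholder):

```shell
# stringData lets you supply the plain-text token; the API server
# base64-encodes it into the secret's data on merge.
kubectl -n mach5 patch secret mach5-license --type merge \
  -p '{"stringData": {".licensetoken": "<new-license-token>"}}'
```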
8. Bootstrap MinIO and the Store
After the chart is installed and the license token is in place, complete the final runtime steps below.
Access the nginx entrypoint
Once the deployment is ready, expose the nginx service from your local cluster to your workstation with port-forwarding.
The service name follows the Helm release name. For the m5s release used in this guide:
kubectl -n mach5 port-forward svc/m5s-nginx 8888:80
Then open:
http://localhost:8888
If you used a different Helm release name, replace m5s with that release name.
Create the bucket in MinIO
Create the bucket that your store will use. The repo's local flows commonly use `locals3-it` or `warehouse`.
Use either the MinIO console or the mc client, depending on how your local MinIO is exposed.
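A sketch using the `mc` client through a local port-forward; the alias name `localminio` and the credentials are placeholders, and the bucket name matches the store configuration below:

```shell
# In one terminal, expose MinIO on localhost:
kubectl -n storage port-forward svc/minio 9000:9000

# In another terminal, register the endpoint and create the bucket:
mc alias set localminio http://localhost:9000 '<minio-access-key>' '<minio-secret-key>'
mc mb localminio/locals3-it
```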
Configure the store and store route
The store configuration should point at the local MinIO and DynamoDB services:
{
  "index_store": {
    "CloudStore": {
      "object_store": {
        "S3": {
          "bucket": "locals3-it",
          "prefix": "locals3",
          "s3_endpoint": {
            "url": "http://minio.storage.svc.cluster.local:9000"
          },
          "dynamodb_endpoint": {
            "url": "http://dynamodb.dynamodb.svc.cluster.local:8000"
          }
        }
      }
    }
  }
}
Use the Mach5 UI or REST API to configure the store and store route.
The store setup should use these values:
- bucket: `locals3-it`
- prefix: `locals3`
- s3_endpoint: `http://minio.storage.svc.cluster.local:9000`
- dynamodb_endpoint: `http://dynamodb.dynamodb.svc.cluster.local:8000`
9. Verify the Deployment
Check that the pods are running:
kubectl -n mach5 get pods
Check the logs for the main services if something is pending or crash-looping:
kubectl -n mach5 logs deploy/<failing-deployment-name>
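If a pod is stuck in `Pending` or `ImagePullBackOff`, the recent events usually name the cause (missing node labels, failed image pulls, unbound volume claims):

```shell
# Recent events in the namespace, oldest first:
kubectl -n mach5 get events --sort-by=.lastTimestamp | tail -n 20

# Scheduling and pull details for a specific pod:
kubectl -n mach5 describe pod <pod-name>
```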
Then open the Mach5 UI from the service exposed by your local cluster setup and verify the store and route are visible.
If the UI loads but search or store operations fail, the usual causes are:
- missing node labels
- bad PostgreSQL host or password
- MinIO bucket not created
- DynamoDB endpoint not reachable
- invalid image pull secret
- license token not applied