25 commits
d3a4386  end-to-end-security demo in non-default ns and with demo labels (xeniape, Mar 5, 2026)
65d4049  airflow-scheduled-job demo in non-default ns and with demo labels (xeniape, Mar 5, 2026)
3ad6451  nifi-kafka-druid-earthquake-data demo in non-default ns and with demo… (xeniape, Mar 9, 2026)
f1bb055  nifi-kafka-druid-water-level-data demo in non-default ns and with dem… (xeniape, Mar 9, 2026)
f36fa9b  spark-k8s-anomaly-detection-taxi-data demo in non-default ns and with… (xeniape, Mar 9, 2026)
35db02e  trino-iceberg demo in non-default ns and with demo labels (xeniape, Mar 9, 2026)
13836ab  trino-taxi-data demo in non-default ns and with demo labels (xeniape, Mar 9, 2026)
065b540  logging demo in non-default ns and with demo labels (xeniape, Mar 9, 2026)
ec9f746  signal-processing demo in non-default ns and with demo labels (xeniape, Mar 9, 2026)
fe87f39  jupyterhub-keycloak demo in non-default ns and with demo labels (xeniape, Mar 10, 2026)
1334741  opensearch demo in non-default ns and with demo labels (xeniape, Mar 10, 2026)
8c0e4c4  monitoring in non-default ns and with demo labels (xeniape, Mar 10, 2026)
116668b  observability in non-default ns and with demo labels (xeniape, Mar 10, 2026)
a02b92a  tutorial-openldap in non-default ns and with demo labels (xeniape, Mar 10, 2026)
ed92e91  openldap in non-default ns and with demo labels (xeniape, Mar 10, 2026)
cf9e661  argocd-cd-git-ops demo in non-default ns and with demo labels (xeniape, Mar 10, 2026)
5d737ad  data-lakehouse-iceberg-trino-spark demo in non-default ns and with de… (xeniape, Mar 10, 2026)
62849d7  remove default namespace warnings in docs (xeniape, Mar 10, 2026)
5b35e5d  Merge branch 'main' into fix/demos-in-namespaces-with-labels (xeniape, Mar 11, 2026)
f8b5da0  add finalizer explanation (xeniape, Mar 11, 2026)
9f87a91  parameterize demo label (xeniape, Mar 11, 2026)
5dbe361  add/remove namespace in rbac (xeniape, Mar 11, 2026)
decf874  use DEMO and STACK parameter for labels etc. (xeniape, Mar 12, 2026)
74a54c7  Merge branch 'main' into fix/demos-in-namespaces-with-labels (xeniape, Mar 12, 2026)
3f3a5eb  Merge branch 'main' into fix/demos-in-namespaces-with-labels (xeniape, Mar 12, 2026)
demos/airflow-scheduled-job/serviceaccount.yaml (1 addition, 2 deletions)

@@ -3,7 +3,6 @@ apiVersion: v1
 kind: ServiceAccount
 metadata:
   name: demo-serviceaccount
-  namespace: default
 ---
 apiVersion: rbac.authorization.k8s.io/v1
 kind: ClusterRoleBinding
@@ -12,7 +11,7 @@ metadata:
 subjects:
 - kind: ServiceAccount
   name: demo-serviceaccount
-  namespace: default
+  namespace: {{ NAMESPACE }}
 roleRef:
   kind: ClusterRole
   name: demo-clusterrole
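The `{{ NAMESPACE }}` placeholder introduced above is not valid YAML on its own; it is filled in by the demo tooling before the manifest is applied. As a minimal sketch (an assumption about the mechanism, not the tool's actual implementation, and with an arbitrary example namespace `my-demos`), the same substitution can be reproduced with sed:

```shell
#!/bin/sh
# Sketch only: render the {{ NAMESPACE }} placeholder the way the install
# tooling presumably does, using plain text substitution.
NAMESPACE="my-demos"   # example value, not taken from the PR

# A fragment of the templated ClusterRoleBinding subject from this PR.
manifest='subjects:
- kind: ServiceAccount
  name: demo-serviceaccount
  namespace: {{ NAMESPACE }}'

# Replace every occurrence of the placeholder with the target namespace.
printf '%s\n' "$manifest" | sed "s/{{ NAMESPACE }}/${NAMESPACE}/g"
```

Run against the fragment above, this prints the subject with `namespace: my-demos`, which is what makes the ClusterRoleBinding work when the demo is installed outside the `default` namespace.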
demos/argo-cd-git-ops/applications/airflow-postgres.yaml (3 additions)

@@ -3,6 +3,9 @@ apiVersion: argoproj.io/v1alpha1
 kind: Application
 metadata:
   name: airflow-postgres
+  # Finalizer needed so resources created by the Application are deleted when the Application gets removed
+  finalizers:
+    - resources-finalizer.argocd.argoproj.io
 spec:
   project: airflow
   destination:
demos/argo-cd-git-ops/applications/airflow.yaml (3 additions)

@@ -3,6 +3,9 @@ apiVersion: argoproj.io/v1alpha1
 kind: Application
 metadata:
   name: airflow
+  # Finalizer needed so resources created by the Application are deleted when the Application gets removed
+  finalizers:
+    - resources-finalizer.argocd.argoproj.io
 spec:
   project: airflow
   destination:
demos/argo-cd-git-ops/applications/minio.yaml (3 additions)

@@ -3,6 +3,9 @@ apiVersion: argoproj.io/v1alpha1
 kind: Application
 metadata:
   name: minio
+  # Finalizer needed so resources created by the Application are deleted when the Application gets removed
+  finalizers:
+    - resources-finalizer.argocd.argoproj.io
 spec:
   project: minio
   destination:
(file name not shown in the rendered diff)

@@ -16,4 +16,7 @@ spec:
   creationTimestamp: null
   name: postgresql-credentials
   namespace: stackable-airflow
+  labels:
+    stackable.tech/vendor: Stackable
+    stackable.tech/demo: {{ DEMO }}
 type: Opaque
demos/argo-cd-git-ops/manifests/airflow/airflow.yaml (3 additions)

@@ -3,6 +3,9 @@ apiVersion: airflow.stackable.tech/v1alpha1
 kind: AirflowCluster
 metadata:
   name: airflow
+  labels:
+    stackable.tech/vendor: Stackable
+    stackable.tech/demo: {{ DEMO }}
 spec:
   image:
     productVersion: 3.1.6
(file name not shown in the rendered diff)

@@ -21,4 +21,7 @@ spec:
   creationTimestamp: null
   name: airflow-credentials
   namespace: stackable-airflow
+  labels:
+    stackable.tech/vendor: Stackable
+    stackable.tech/demo: {{ DEMO }}
 type: Opaque
(file name not shown in the rendered diff)

@@ -13,4 +13,7 @@ spec:
   creationTimestamp: null
   name: airflow-minio-connection
   namespace: stackable-airflow
+  labels:
+    stackable.tech/vendor: Stackable
+    stackable.tech/demo: {{ DEMO }}
 type: Opaque
demos/argo-cd-git-ops/manifests/minio/minio.yaml (15 additions)

@@ -4,6 +4,9 @@ apiVersion: v1
 kind: ServiceAccount
 metadata:
   name: "minio-sa"
+  labels:
+    stackable.tech/vendor: Stackable
+    stackable.tech/demo: {{ DEMO }}
 ---
 # Source: minio/templates/configmap.yaml
 apiVersion: v1
@@ -15,6 +18,8 @@ metadata:
     chart: minio-5.4.0
     release: minio
     heritage: Helm
+    stackable.tech/vendor: Stackable
+    stackable.tech/demo: {{ DEMO }}
 data:
   initialize: |-
     #!/bin/sh
@@ -408,6 +413,8 @@ metadata:
     chart: minio-5.4.0
     release: minio
     heritage: Helm
+    stackable.tech/vendor: Stackable
+    stackable.tech/demo: {{ DEMO }}
 spec:
   accessModes:
     - "ReadWriteOnce"
@@ -425,6 +432,8 @@ metadata:
     chart: minio-5.4.0
     release: minio
     heritage: Helm
+    stackable.tech/vendor: Stackable
+    stackable.tech/demo: {{ DEMO }}
 spec:
   type: NodePort
   externalTrafficPolicy: "Cluster"
@@ -448,6 +457,8 @@ metadata:
     release: minio
     heritage: Helm
     monitoring: "true"
+    stackable.tech/vendor: Stackable
+    stackable.tech/demo: {{ DEMO }}
 spec:
   type: NodePort
   externalTrafficPolicy: "Cluster"
@@ -471,6 +482,7 @@ metadata:
     release: minio
     heritage: Helm
     stackable.tech/vendor: Stackable
+    stackable.tech/demo: {{ DEMO }}
 spec:
   strategy:
     type: RollingUpdate
@@ -489,6 +501,7 @@ spec:
         app: minio
         release: minio
         stackable.tech/vendor: Stackable
+        stackable.tech/demo: {{ DEMO }}
       annotations:
         checksum/secrets: fa63e34a92c817c84057e2d452fa683e66462a57b0529388fb96a57e05f38e57
         checksum/config: ebea49cc4c1bfbd1b156a58bf770a776ff87fe199f642d31c2816b5515112e72
@@ -583,6 +596,8 @@ metadata:
     chart: minio-5.4.0
     release: minio
     heritage: Helm
+    stackable.tech/vendor: Stackable
+    stackable.tech/demo: {{ DEMO }}
   annotations:
     "helm.sh/hook": post-install,post-upgrade
     "helm.sh/hook-delete-policy": hook-succeeded,before-hook-creation
(file name not shown in the rendered diff)

@@ -14,4 +14,7 @@ spec:
   creationTimestamp: null
   name: minio
   namespace: minio
+  labels:
+    stackable.tech/vendor: Stackable
+    stackable.tech/demo: {{ DEMO }}
 type: Opaque

Large diffs are not rendered by default.

(file name not shown in the rendered diff)

@@ -20,11 +20,11 @@ spec:
           - -c
           - |
             echo 'Waiting for all minio instances to be ready'
-            kubectl wait --for=condition=ready --timeout=30m pod -l app=minio,release=minio,stackable.tech/vendor=Stackable
+            kubectl wait --for=condition=ready --timeout=30m pod -l app=minio,release=minio,stackable.tech/vendor=Stackable -n {{ NAMESPACE }}
             echo 'Waiting for all kafka brokers to be ready'
-            kubectl wait --for=condition=ready --timeout=30m pod -l app.kubernetes.io/name=kafka,app.kubernetes.io/instance=kafka
+            kubectl wait --for=condition=ready --timeout=30m pod -l app.kubernetes.io/name=kafka,app.kubernetes.io/instance=kafka -n {{ NAMESPACE }}
             echo 'Waiting for all nifi instances to be ready'
-            kubectl wait --for=condition=ready --timeout=30m pod -l app.kubernetes.io/name=nifi,app.kubernetes.io/instance=nifi
+            kubectl wait --for=condition=ready --timeout=30m pod -l app.kubernetes.io/name=nifi,app.kubernetes.io/instance=nifi -n {{ NAMESPACE }}
       - name: wait-for-kafka-topics
         image: oci.stackable.tech/sdp/kafka:4.1.0-stackable0.0.0-dev
         command:
@@ -142,6 +142,7 @@ data:
       name: spark-ingest-into-lakehouse
       labels:
         stackable.tech/vendor: Stackable
+        stackable.tech/demo: {{ DEMO }}
     spec:
       sparkImage:
         # Iceberg 1.10.1 only supports Spark 4.0.x
@@ -254,7 +255,7 @@ data:
         spark.sql("CREATE TABLE IF NOT EXISTS lakehouse.smart_city.shared_bikes_station_status (station_id string, num_bikes_available short, is_installed boolean, is_renting boolean, is_returning boolean, last_reported timestamp) USING iceberg PARTITIONED BY (days(last_reported)) TBLPROPERTIES ('format-version' = 2, format = 'PARQUET')")

         kafkaOptions = {
-            "kafka.bootstrap.servers": "kafka-broker-default.default.svc.cluster.local:9093",
+            "kafka.bootstrap.servers": "kafka-broker-default.{{ NAMESPACE }}.svc.cluster.local:9093",
             "kafka.security.protocol": "SSL",
             "kafka.ssl.truststore.location": "/stackable/tls/truststore.p12",
             "kafka.ssl.truststore.password": "changeit",
(file name not shown in the rendered diff)

@@ -17,7 +17,7 @@ spec:
           - -c
           - |
             echo 'Waiting for job load-test-data to finish'
-            kubectl wait --for=condition=complete --timeout=30m job/load-test-data
+            kubectl wait --for=condition=complete --timeout=30m job/load-test-data -n {{ NAMESPACE }}
       containers:
         - name: create-tables-in-trino
           image: oci.stackable.tech/sdp/testing-tools/trino:0.3.0-stackable0.0.0-dev
(file name not shown in the rendered diff)

@@ -19,7 +19,7 @@ spec:
             # Copy the CA cert from the "tls" SecretClass
             cp -v /etc/minio/mc/original_certs/ca.crt /.mc/certs/CAs/public.crt
-            mc alias set minio https://minio.default.svc.cluster.local:9000/ $(cat /minio-s3-credentials/accessKey) $(cat /minio-s3-credentials/secretKey)
+            mc alias set minio https://minio.{{ NAMESPACE }}.svc.cluster.local:9000/ $(cat /minio-s3-credentials/accessKey) $(cat /minio-s3-credentials/secretKey)
             cd /tmp
             curl -sO https://repo.stackable.tech/repository/misc/datasets/open-postcode-geo/open-postcode-geo.csv
demos/data-lakehouse-iceberg-trino-spark/serviceaccount.yaml (1 addition, 2 deletions)

@@ -3,7 +3,6 @@ apiVersion: v1
 kind: ServiceAccount
 metadata:
   name: demo-serviceaccount
-  namespace: default
 ---
 apiVersion: rbac.authorization.k8s.io/v1
 kind: ClusterRoleBinding
@@ -12,7 +11,7 @@ metadata:
 subjects:
 - kind: ServiceAccount
   name: demo-serviceaccount
-  namespace: default
+  namespace: {{ NAMESPACE }}
 roleRef:
   kind: ClusterRole
   name: demo-clusterrole
(file name not shown in the rendered diff)

@@ -15,6 +15,7 @@ spec:
           - pipefail
           - -c
           - |
+            # This file path needs adjustment for versioned demos on release branch
             curl -o superset-assets.zip https://raw.githubusercontent.com/stackabletech/demos/main/demos/data-lakehouse-iceberg-trino-spark/superset-assets.zip
             python -u /tmp/script/script.py
       volumeMounts:
demos/demos-v2.yaml (5 additions, 3 deletions)

@@ -94,6 +94,7 @@ demos:
       cpu: 9000m
       memory: 19586Mi
       pvc: 40Gi
+    parameters: []
   nifi-kafka-druid-earthquake-data:
     description: Demo ingesting earthquake data into Kafka using NiFi, streaming it into Druid and creating a Superset dashboard
     documentation: https://docs.stackable.tech/home/stable/demos/nifi-kafka-druid-earthquake-data
@@ -111,7 +112,7 @@ demos:
       - plainYaml: https://raw.githubusercontent.com/stackabletech/demos/main/demos/nifi-kafka-druid-earthquake-data/create-nifi-ingestion-job.yaml
       - plainYaml: https://raw.githubusercontent.com/stackabletech/demos/main/demos/nifi-kafka-druid-earthquake-data/create-druid-ingestion-job.yaml
       - plainYaml: https://raw.githubusercontent.com/stackabletech/demos/main/demos/nifi-kafka-druid-earthquake-data/setup-superset.yaml
-    supportedNamespaces: ["default"]
+    supportedNamespaces: []
     resourceRequests:
       cpu: 8700m
       memory: 42034Mi
@@ -133,7 +134,7 @@ demos:
       - plainYaml: https://raw.githubusercontent.com/stackabletech/demos/main/demos/nifi-kafka-druid-water-level-data/create-nifi-ingestion-job.yaml
       - plainYaml: https://raw.githubusercontent.com/stackabletech/demos/main/demos/nifi-kafka-druid-water-level-data/create-druid-ingestion-job.yaml
       - plainYaml: https://raw.githubusercontent.com/stackabletech/demos/main/demos/nifi-kafka-druid-water-level-data/setup-superset.yaml
-    supportedNamespaces: ["default"]
+    supportedNamespaces: []
     resourceRequests:
       cpu: 8900m
       memory: 42330Mi
@@ -215,11 +216,12 @@ demos:
       - plainYaml: https://raw.githubusercontent.com/stackabletech/demos/main/demos/data-lakehouse-iceberg-trino-spark/create-nifi-ingestion-job.yaml
       - plainYaml: https://raw.githubusercontent.com/stackabletech/demos/main/demos/data-lakehouse-iceberg-trino-spark/create-spark-ingestion-job.yaml
       - plainYaml: https://raw.githubusercontent.com/stackabletech/demos/main/demos/data-lakehouse-iceberg-trino-spark/setup-superset.yaml
-    supportedNamespaces: ["default"]
+    supportedNamespaces: []
     resourceRequests:
       cpu: "80"
       memory: 200Gi
       pvc: 300Gi
+    parameters: []
   jupyterhub-pyspark-hdfs-anomaly-detection-taxi-data:
     description: Jupyterhub with PySpark and HDFS integration
     documentation: https://docs.stackable.tech/home/stable/demos/jupyterhub-pyspark-hdfs-anomaly-detection-taxi-data
demos/end-to-end-security/create-spark-report.yaml (5 additions, 2 deletions)

@@ -53,6 +53,9 @@ data:
     kind: SparkApplication
     metadata:
       name: spark-report
+      labels:
+        stackable.tech/vendor: Stackable
+        stackable.tech/demo: {{ DEMO }}
     spec:
       sparkImage:
         # Iceberg 1.10.1 only supports Spark 4.0.x
@@ -65,10 +68,10 @@ data:
       sparkConf:
         spark.driver.extraClassPath: /stackable/config/hdfs
         spark.executor.extraClassPath: /stackable/config/hdfs
-        spark.hadoop.hive.metastore.kerberos.principal: hive/hive-iceberg.default.svc.cluster.local@KNAB.COM
+        spark.hadoop.hive.metastore.kerberos.principal: hive/hive-iceberg.{{ NAMESPACE }}.svc.cluster.local@KNAB.COM
         spark.hadoop.hive.metastore.sasl.enabled: "true"
         spark.kerberos.keytab: /stackable/kerberos/keytab
-        spark.kerberos.principal: spark/spark.default.svc.cluster.local@KNAB.COM
+        spark.kerberos.principal: spark/spark.{{ NAMESPACE }}.svc.cluster.local@KNAB.COM
         spark.sql.catalog.lakehouse: org.apache.iceberg.spark.SparkCatalog
         spark.sql.catalog.lakehouse.type: hive
         spark.sql.catalog.lakehouse.uri: thrift://hive-iceberg-metastore:9083
demos/end-to-end-security/serviceaccount.yaml (1 addition, 2 deletions)

@@ -3,7 +3,6 @@ apiVersion: v1
 kind: ServiceAccount
 metadata:
   name: demo-serviceaccount
-  namespace: default
 ---
 apiVersion: rbac.authorization.k8s.io/v1
 kind: ClusterRoleBinding
@@ -12,7 +11,7 @@ metadata:
 subjects:
 - kind: ServiceAccount
   name: demo-serviceaccount
-  namespace: default
+  namespace: {{ NAMESPACE }}
 roleRef:
   kind: ClusterRole
   name: demo-clusterrole
(file name not shown in the rendered diff)

@@ -52,7 +52,7 @@ data:
       "ioConfig": {
         "type": "kafka",
         "consumerProperties": {
-          "bootstrap.servers": "kafka-broker-default-headless.default.svc.cluster.local:9093",
+          "bootstrap.servers": "kafka-broker-default-headless.{{ NAMESPACE }}.svc.cluster.local:9093",
           "security.protocol": "SSL",
           "ssl.truststore.location": "/stackable/tls/truststore.p12",
           "ssl.truststore.password": "changeit",

Large diffs are not rendered by default.

demos/nifi-kafka-druid-earthquake-data/serviceaccount.yaml (1 addition, 2 deletions)

@@ -3,7 +3,6 @@ apiVersion: v1
 kind: ServiceAccount
 metadata:
   name: demo-serviceaccount
-  namespace: default
 ---
 apiVersion: rbac.authorization.k8s.io/v1
 kind: ClusterRoleBinding
@@ -12,7 +11,7 @@ metadata:
 subjects:
 - kind: ServiceAccount
   name: demo-serviceaccount
-  namespace: default
+  namespace: {{ NAMESPACE }}
 roleRef:
   kind: ClusterRole
   name: demo-clusterrole
(file name not shown in the rendered diff)

@@ -54,7 +54,7 @@ data:
       "ioConfig": {
         "type": "kafka",
         "consumerProperties": {
-          "bootstrap.servers": "kafka-broker-default-headless.default.svc.cluster.local:9093",
+          "bootstrap.servers": "kafka-broker-default-headless.{{ NAMESPACE }}.svc.cluster.local:9093",
           "security.protocol": "SSL",
           "ssl.truststore.location": "/stackable/tls/truststore.p12",
           "ssl.truststore.password": "changeit",
@@ -133,7 +133,7 @@ data:
       "ioConfig": {
         "type": "kafka",
         "consumerProperties": {
-          "bootstrap.servers": "kafka-broker-default-headless.default.svc.cluster.local:9093",
+          "bootstrap.servers": "kafka-broker-default-headless.{{ NAMESPACE }}.svc.cluster.local:9093",
           "security.protocol": "SSL",
           "ssl.truststore.location": "/stackable/tls/truststore.p12",
           "ssl.truststore.password": "changeit",

Large diffs are not rendered by default.

demos/nifi-kafka-druid-water-level-data/serviceaccount.yaml (1 addition, 2 deletions)

@@ -3,7 +3,6 @@ apiVersion: v1
 kind: ServiceAccount
 metadata:
   name: demo-serviceaccount
-  namespace: default
 ---
 apiVersion: rbac.authorization.k8s.io/v1
 kind: ClusterRoleBinding
@@ -12,7 +11,7 @@ metadata:
 subjects:
 - kind: ServiceAccount
   name: demo-serviceaccount
-  namespace: default
+  namespace: {{ NAMESPACE }}
 roleRef:
   kind: ClusterRole
   name: demo-clusterrole
demos/signal-processing/DownloadAndWriteToDB.json (1 addition, 1 deletion)

@@ -741,7 +741,7 @@
       "kerberos-credentials-service": null,
       "dbcp-max-conn-lifetime": "-1",
       "Validation-query": "select count(*) from conditions;",
-      "Database Connection URL": "jdbc:postgresql://postgresql-timescaledb.default.svc.cluster.local:5432/tsdb",
+      "Database Connection URL": "jdbc:postgresql://postgresql-timescaledb.{{ NAMESPACE }}.svc.cluster.local:5432/tsdb",
       "dbcp-time-between-eviction-runs": "-1",
       "Database User": "admin",
       "kerberos-user-service": null,