
Your kubectl get pods -A output shows a mostly healthy cluster running many self-hosted services (Nextcloud, Synapse/Matrix, “docs”, which looks OnlyOffice/Collabora-related, Meet, Element, Grist, etc.). However, a few pods are stuck in CrashLoopBackOff.

All other pods (including most databases like PostgreSQL/Redis/MinIO, frontends, and system pods) are Running fine.

Quick Diagnosis Steps

CrashLoopBackOff means the container starts but exits with a non-zero code (crash or error), so Kubernetes keeps restarting it with exponentially increasing backoff delays (capped at five minutes). Common root causes include:

  • Misconfiguration (bad environment variables, missing Secrets/ConfigMaps)
  • A dependency that is unreachable at startup (database, cache, another service)
  • OOMKilled (memory limit set too low)
  • Failing liveness probes
  • Volume mount or file-permission problems
  • An application bug introduced by a recent change

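The increasing backoff delays follow a doubling schedule; here is a sketch assuming the kubelet defaults (start at 10s, cap at 5 minutes). backoff_delays is an illustrative helper, not a kubectl command:

```shell
# Sketch: CrashLoopBackOff restart delays, assuming default kubelet behavior
# (delay roughly doubles from 10s, capped at 300s / 5 minutes).
backoff_delays() {
  d=10
  for i in $(seq 1 "$1"); do
    printf '%s ' "$d"
    d=$((d * 2))
    [ "$d" -gt 300 ] && d=300   # cap at 5 minutes
  done
  echo
}

# backoff_delays 7  ->  10 20 40 80 160 300 300
```

So after a handful of crashes the pod only retries every 5 minutes, which is why a CrashLoopBackOff pod can sit "idle" for a while between attempts.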
To find the exact cause, run these commands (replace <pod-name> with the full name, e.g. nextcloud-864d6c9854-9ts7h):

  1. Check detailed events and state (most important first step):

    kubectl describe pod <pod-name> -n default
    • Look under Events at the bottom for clues (e.g., “Error”, “Back-off restarting”, OOMKilled, probe failures, volume mount issues).
    • Check the Last State or State section for the container — it often shows the exit code (e.g., 137 = OOMKilled, 1 = general error).
  2. View logs (this usually reveals the real error):

    kubectl logs <pod-name> -n default
    • For previous (crashed) instance: kubectl logs <pod-name> -n default --previous
    • For the CronJob pod: same commands, but the pod may be short-lived; kubectl get jobs -n default shows whether the Job itself succeeded.
  3. Check recent events cluster-wide (helpful context):

    kubectl get events -n default --sort-by=.metadata.creationTimestamp
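The three steps above can be bundled into one helper; a sketch, where diagnose_pod and the KUBECTL override are my own conveniences, not kubectl features:

```shell
# Sketch: run all three diagnostics for one crashing pod in sequence.
# diagnose_pod is an illustrative wrapper; KUBECTL can be overridden
# (e.g. KUBECTL=echo for a dry run).
diagnose_pod() {
  pod="$1"
  ns="${2:-default}"
  k="${KUBECTL:-kubectl}"

  echo "== describe (events, last state, exit code) =="
  "$k" describe pod "$pod" -n "$ns"

  echo "== current logs =="
  "$k" logs "$pod" -n "$ns"

  echo "== logs from the previous (crashed) container =="
  "$k" logs "$pod" -n "$ns" --previous

  echo "== recent events in the namespace =="
  "$k" get events -n "$ns" --sort-by=.metadata.creationTimestamp
}

# Usage: diagnose_pod nextcloud-864d6c9854-9ts7h default
```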

Likely Specific Issues Based on Your Setup

Most of your other services (Element, Meet, Grist, LiveKit, etc.) look stable, so this seems isolated to recent changes or a shared dependency (e.g., DB/network hiccup during deployment).
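A quick way to triage whether a crash is resource-level (memory, shared dependency) or app-level is the exit code shown by kubectl describe. Here is a small decoder following the common Unix/Kubernetes conventions; explain_exit_code is illustrative, not part of kubectl:

```shell
# Sketch: map a container exit code to its conventional meaning.
# Codes of the form 128+N mean the container was killed by signal N.
explain_exit_code() {
  case "$1" in
    0)   echo "clean exit (check why the process stops at all)" ;;
    1)   echo "general application error (check logs for stack traces or config errors)" ;;
    137) echo "SIGKILL (128+9): usually OOMKilled; check memory limits" ;;
    139) echo "SIGSEGV (128+11): segmentation fault in the application" ;;
    143) echo "SIGTERM (128+15): container was asked to stop, e.g. after a failed liveness probe" ;;
    *)   echo "exit code $1: check the application's documentation" ;;
  esac
}
```

For example, explain_exit_code 137 points straight at memory limits rather than the application itself.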

Next Steps / Fixes

Once you paste the output from kubectl describe pod + kubectl logs for the crashing ones, I can help pinpoint the exact fix. In the meantime, start with those two commands on the affected pods!
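While you gather that output, sorting pods by restart count makes it easy to see whether the problem is spreading. A sketch: top_restarts is my wrapper (not a kubectl command), and the column expression assumes single-container pods:

```shell
# Sketch: list pods sorted by restart count (highest last).
# Assumes single-container pods; KUBECTL can be overridden for a dry run.
top_restarts() {
  ns="${1:-default}"
  k="${KUBECTL:-kubectl}"
  "$k" get pods -n "$ns" \
    --sort-by='.status.containerStatuses[0].restartCount' \
    -o custom-columns='NAME:.metadata.name,RESTARTS:.status.containerStatuses[0].restartCount,STATUS:.status.phase'
}

# Usage: top_restarts default
```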