
Resources are created during dry-run when using Force=true,Replace=true sync options #623


Open
rafal-jan opened this issue Sep 2, 2024 · 3 comments · May be fixed by #633

Comments

@rafal-jan

If resources have the argocd.argoproj.io/sync-options: Force=true,Replace=true annotation, they are recreated twice:

  • when Argo CD performs the dry-run
  • when Argo CD performs the actual sync

This behavior can be observed when using multiple Jobs with the argocd.argoproj.io/sync-options: Force=true,Replace=true annotation and different sync waves. All Jobs are created immediately on the cluster during the dry-run phase and then recreated one-by-one as their respective sync wave is processed by Argo CD.
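For illustration, a minimal Job manifest with this layout might look like the sketch below (the name, image, and wave number are hypothetical; only the two annotations matter):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: example-migration          # hypothetical name
  annotations:
    # Delete and recreate the Job on every sync instead of patching it in place...
    argocd.argoproj.io/sync-options: Force=true,Replace=true
    # ...and run it in its own sync wave.
    argocd.argoproj.io/sync-wave: "1"
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: main
          image: busybox:1.36      # hypothetical image
          command: ["echo", "hello"]
```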

More information about using replace with the dry-run and force options can be found in kubernetes/kubectl#1222. kubectl is used as a library in gitops-engine, but only the Run() method is called, whereas the fix in kubernetes/kubernetes#110326 updated the Validate() method, so that fix never takes effect here.

I believe that the force option should not be set to true when performing a dry-run replace operation. This would avoid unnecessary resource recreation and make it usable with sync waves.

@mmclane

mmclane commented Oct 31, 2024

I believe I am seeing this too. I have this annotation set on a K8s job. When that job gets replaced, it happens twice, so the job runs twice even though it succeeded the first time.

@mmclane

mmclane commented Oct 31, 2024

Having done a little more testing, I have found that the job gets recreated twice when something else has the annotation argocd.argoproj.io/sync-wave: "-1".

To test this out, I created an app that installs a custom Helm chart. That Helm chart creates two things: a configmap and a job. The job runs a hello-world container.

If I have no annotations, ArgoCD cannot update the job. To accommodate this, I add the annotation argocd.argoproj.io/sync-options: Replace=true,Force=true to the job. Everything works as expected if I change the image tag for that job via a parameter override: the job gets replaced once.

If, however, I add the annotation argocd.argoproj.io/sync-wave: "-1" to the configmap, then when I update the job's image tag, ArgoCD will replace the job twice, causing it to run twice.
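Roughly, the two manifests from the chart look like this (resource names and the image are hypothetical; the annotations are the ones quoted above):

```yaml
# ConfigMap pushed into an earlier sync wave
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-config                # hypothetical name
  annotations:
    argocd.argoproj.io/sync-wave: "-1"
data:
  greeting: hello
---
# Job that ArgoCD must delete and recreate instead of patching
apiVersion: batch/v1
kind: Job
metadata:
  name: hello-world                # hypothetical name
  annotations:
    argocd.argoproj.io/sync-options: Replace=true,Force=true
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: hello
          image: busybox:1.36      # tag changed via a parameter override to trigger the sync
          command: ["echo", "hello world"]
```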

Here are some additional observations:

  • If I set the configmap's annotation to argocd.argoproj.io/sync-wave: "0", things work as expected.
  • If I set the job's annotation to argocd.argoproj.io/sync-wave: "1", the job gets replaced twice.

I am running ArgoCD version v2.12.3+6b9cd82

rafal-jan linked a pull request Nov 4, 2024 that will close this issue
@majusmisiak

majusmisiak commented May 15, 2025

I am observing exactly the same issue.

Contrary to @mmclane, I did not find that setting the same argocd.argoproj.io/sync-wave on all resources in the application resolves this. What I observe instead is the following:

  1. If a Job has argocd.argoproj.io/sync-options: Force=true,Replace=true and argocd.argoproj.io/sync-wave: 1, and there are other resources with argocd.argoproj.io/sync-wave: 0 in the Application, then two Job objects with the same name are created about 3 seconds apart. Each of these Jobs spawns a duplicate Pod, and the first one is immediately set to the Terminating state.
  2. If a Job has argocd.argoproj.io/sync-options: Force=true,Replace=true and argocd.argoproj.io/sync-wave: 0, basically the same thing happens. The only difference is that the interval between the Jobs is shorter (1 second or maybe less), so it is harder to observe.

This is how it looks in kubectl get pods:

[screenshot: kubectl get pods output showing the duplicate Pods]

I would also expect that setting the ServerSideApply=false flag at the Application or resource level would fix this (since Argo would then internally use kubectl instead of the Kubernetes library). But ServerSideApply=false seems to have no effect on this bug.
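For reference, a sketch of the resource-level form of that flag (the Job name is hypothetical; the options are combined into one comma-separated annotation, as elsewhere in this issue):

```yaml
# Attempt to opt this Job out of server-side apply while keeping Replace/Force
apiVersion: batch/v1
kind: Job
metadata:
  name: example-job                # hypothetical name
  annotations:
    argocd.argoproj.io/sync-options: Force=true,Replace=true,ServerSideApply=false
```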

@rafal-jan any chance of getting your PR merged?

@rafal-jan or @mmclane did you find any workaround to this issue?

I am running ArgoCD version v2.14.11+8283115
