[ISV-5621] Fix "Argument list too long" error. #450

Merged
1 commit merged into ci/latest on Mar 13, 2025

Conversation

@BorekZnovustvoritel (Author)

This PR resolves the "Argument list too long" error. As specified in the comment, the error occurred when too many files were changed, because their names were stored in a variable that was exported in a previous script. The resulting environment exceeded the size limit the kernel enforces on execve(2) (https://linux.die.net/man/2/execve), so every external command invoked afterwards failed with this error (even commands like sleep 1).
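
A minimal sketch of the failure mode (the variable name and size are made up; the exact threshold depends on the system's limits):

export HUGE_LIST="$(head -c 2000000 /dev/zero | tr '\0' 'x')"   # ~2 MB exported variable
sleep 1        # fails: "Argument list too long" (E2BIG from execve)
/bin/echo ok   # fails for the same reason: the environment no longer fits the limit
echo ok        # still works: echo is a shell builtin, so no execve() happens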

To replicate this, I have exported the variables from the failed pipeline run and created this env file:

export GITHUB_OUTPUT='run_gh_out.txt'

export AUTOMERGE_ENABLED=1
export OPP_PRODUCTION_TYPE=k8s
export OPP_SCRIPT_URL=https://raw.githubusercontent.com/redhat-openshift-ecosystem/community-operators-pipeline/ci/latest/ci/scripts/opp.sh
export OPP_SCRIPT_ENV_OPRT_URL=https://raw.githubusercontent.com/redhat-openshift-ecosystem/community-operators-pipeline/ci/latest/ci/scripts/opp-oprt.sh
export OPP_SCRIPT_ENV_URL=https://raw.githubusercontent.com/redhat-openshift-ecosystem/community-operators-pipeline/ci/latest/ci/scripts/opp-env.sh
export OPP_SCRIPT_COSMETICS_URL=https://raw.githubusercontent.com/redhat-openshift-ecosystem/community-operators-pipeline/ci/latest/ci/scripts/opp-oprt-cosmetics.sh
export OPP_THIS_REPO_BASE=https://github.com
export OPP_THIS_REPO=k8s-operatorhub/community-operators
export OPP_THIS_BRANCH=main
export OPP_ANSIBLE_PULL_REPO=https://github.com/redhat-openshift-ecosystem/operator-test-playbooks
export OPP_ANSIBLE_PULL_BRANCH=upstream-community
export OPP_REVIEWERS_ENABLED=1
export OPP_THIS_REPO_NAME=community-operators
export OPP_THIS_REPO_ORG=k8s-operatorhub
export pythonLocation=/opt/hostedtoolcache/Python/3.13.1/x64
export PKG_CONFIG_PATH=/opt/hostedtoolcache/Python/3.13.1/x64/lib/pkgconfig
export Python_ROOT_DIR=/opt/hostedtoolcache/Python/3.13.1/x64
export Python2_ROOT_DIR=/opt/hostedtoolcache/Python/3.13.1/x64
export Python3_ROOT_DIR=/opt/hostedtoolcache/Python/3.13.1/x64
export LD_LIBRARY_PATH=/opt/hostedtoolcache/Python/3.13.1/x64/lib
export OPP_LABELS=
export OPP_PR_AUTHOR=ericsyh
export OPP_OPRT_REPO=ericsyh/community-operators
export OPP_OPRT_SHA=547316926e8cc58be789b739b65dfa5b54951daa
export OPP_OPRT_SRC_REPO=k8s-operatorhub/community-operators
export OPP_OPRT_SRC_BRANCH=main
export OPP_THIS_PR=5629

I named it prepare.sh and ran source prepare.sh, then adjusted ci/scripts/opp-oprt.sh so that it does not fail on git commands and calls the local /path/to/repo/ci/scripts/opp-env.sh file instead of curling it. Then I issued the command bash ci/scripts/opp-oprt.sh.
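
Condensed, the reproduction amounts to roughly the following (a sketch; /path/to/repo is a placeholder and assumes the adjustments above have already been made):

cd /path/to/repo
source prepare.sh             # load the environment captured from the failed run
bash ci/scripts/opp-oprt.sh   # run the adjusted script locally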

The run has succeeded.

If you wish to replicate it, be aware that the pipeline will mess with your local git config, so apologies in advance for the commits that will show up as "Test User".

@mporrato (Collaborator) left a comment

Just one doubt: if the root cause is the size of the environment, why are the external commands issued in opp-oprt.sh after the variables have been exported not affected by this problem?
Perhaps the call to the other script at the end of the script adds just enough extra variables to trigger the behaviour? If that's the case I'm afraid we will see this issue coming back with larger PRs and will need to address this in a different way.

@BorekZnovustvoritel (Author)

> Just one doubt: if the root cause is the size of the environment, why are the external commands issued in opp-oprt.sh after the variables have been exported not affected by this problem?
> Perhaps the call to the other script at the end of the script adds just enough extra variables to trigger the behaviour? If that's the case I'm afraid we will see this issue coming back with larger PRs and will need to address this in a different way.

That may be right. I guess we can perform the export right before the subshell with opp-env.sh is called; that way it won't waste buffer space for the entirety of opp-oprt.sh. Only Bash built-ins are immune to this issue, so I will just have to perform the curl before exporting everything. It's worth a try at least.
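
Roughly, that restructuring of opp-oprt.sh could look like this (a sketch only; OPP_SCRIPT_FILE_CONTENTS holds the curled script and the surrounding logic is elided):

# fetch opp-env.sh while the environment is still small
OPP_SCRIPT_FILE_CONTENTS="$(curl -sL "$OPP_SCRIPT_ENV_URL")"

# ...the rest of opp-oprt.sh runs without the large variables exported...

# export the big file lists only right before the subshell that needs them
export OPP_RENAMED_FILES
export OPP_ADDED_MODIFIED_FILES
export OPP_ADDED_MODIFIED_RENAMED_FILES
bash <(echo "$OPP_SCRIPT_FILE_CONTENTS")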

@mporrato (Collaborator) left a comment

Even if this is not ideal, I would still give it a try.
As long as we are relying on environment variables to pass those values, we are pretty much guaranteed we will hit the limit sooner or later, but a proper solution would probably require using files instead so it might not be a trivial change.

Edit: another approach that might be worth exploring: instead of calling the next script with bash, we could try sourcing it instead. That would not require any execve() and the variables would not even need to be exported, but it might cause other issues (for example, the second script would see all of the non-exported variables from the first script), so it would require careful testing.
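
To illustrate the difference (a sketch, not the actual pipeline code):

# current: spawns a new bash process, so execve() has to copy the whole
# (possibly huge) exported environment and can fail with E2BIG
bash <(echo "$OPP_SCRIPT_FILE_CONTENTS")

# proposed: runs in the same shell process, no execve() at all, and the
# variables do not even need to be exported; the trade-off is that the
# sourced script also sees this script's non-exported variables
source <(echo "$OPP_SCRIPT_FILE_CONTENTS")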

export OPP_RENAMED_FILES
export OPP_ADDED_MODIFIED_FILES
export OPP_ADDED_MODIFIED_RENAMED_FILES
bash <(echo "$OPP_SCRIPT_FILE_CONTENTS")
Collaborator

I think this will still issue an execve() under the hood (to run bash) so I'm not sure this will make any difference.

Signed-off-by: Marek Szymutko <mszymutk@redhat.com>
@BorekZnovustvoritel (Author)

I have decided to use the source command and unset the variables that were not exported previously. I have tested this on 3 PRs: k8s-operatorhub/community-operators#5863, k8s-operatorhub/community-operators#5841 and the original k8s-operatorhub/community-operators#5629, and I haven't spotted any issues.
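
A sketch of the merged pattern (the names of the unset variables are purely illustrative; the real list comes from opp-oprt.sh itself):

# drop helper variables that were only used inside opp-oprt.sh so they do
# not leak into the sourced script (illustrative names)
unset tmp_dir commit_range

# hand over in the same shell process: no execve(), so the size of the
# OPP_* file lists can no longer trigger "Argument list too long"
source <(echo "$OPP_SCRIPT_FILE_CONTENTS")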

@mporrato (Collaborator) left a comment

Good job! 👍

@BorekZnovustvoritel merged commit 2be38a8 into ci/latest on Mar 13, 2025
1 check passed