Investigate regression in memory usage in v7 #1564


Open
CharlieC3 opened this issue Feb 25, 2023 · 3 comments
Assignees
@rafaelcr
@CharlieC3 (Member)

We're seeing more OOMs in v7 than we did in v6. We may need to profile the memory usage again to identify key areas for performance improvements and address them.
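
For reference, a minimal sketch of how the running API process could be profiled in place, assuming Node.js; the interval, log format, and snapshot path below are illustrative and not part of the codebase:

```ts
import { writeHeapSnapshot } from 'v8';
import * as os from 'os';
import * as path from 'path';

// Log a coarse memory profile every 30 seconds so growth between
// deploys (v6 vs v7) can be compared straight from the pod logs.
setInterval(() => {
  const { rss, heapTotal, heapUsed, external } = process.memoryUsage();
  const toMb = (n: number) => Math.round(n / 1024 / 1024);
  console.log(
    `[mem] rss=${toMb(rss)}MB heapTotal=${toMb(heapTotal)}MB ` +
      `heapUsed=${toMb(heapUsed)}MB external=${toMb(external)}MB`
  );
}, 30_000);

// On SIGUSR2, write a V8 heap snapshot that can be loaded into the
// Chrome DevTools Memory tab to see which objects are being retained.
process.on('SIGUSR2', () => {
  const file = path.join(os.tmpdir(), `heap-${Date.now()}.heapsnapshot`);
  console.log(`[mem] writing heap snapshot to ${file}`);
  writeHeapSnapshot(file);
});
```

Triggering the snapshot with `kill -USR2 <pid>` from inside the pod avoids having to attach an inspector to a production process.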

@rafaelcr (Collaborator) commented Mar 9, 2023

We should run tests in the new clusters by reducing pod memory in staging and sending prod traffic to it, so we can capture a memory dump once it crashes.
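
One way to get that dump before the kernel OOM-kills the lowered-limit pod is to watch RSS against the container limit and write a snapshot just under it. A rough sketch, assuming a hypothetical `MEMORY_LIMIT_MB` env var and a mounted volume at `/dumps` (neither exists in the current deployment):

```ts
import { writeHeapSnapshot } from 'v8';

// Hypothetical settings: the staging pod's memory limit (MB) injected via
// Helm values or the Downward API, and a persistent mount for the dumps.
const limitMb = Number(process.env.MEMORY_LIMIT_MB ?? 512);
const dumpDir = process.env.HEAP_DUMP_DIR ?? '/dumps';
let dumped = false;

// Check RSS every 10 seconds; once it crosses 90% of the limit, write a
// heap snapshot so there is something to analyze after the pod restarts.
setInterval(() => {
  const rssMb = process.memoryUsage().rss / 1024 / 1024;
  if (!dumped && rssMb > limitMb * 0.9) {
    dumped = true;
    const file = `${dumpDir}/near-oom-${Date.now()}.heapsnapshot`;
    console.warn(`[mem] rss ${Math.round(rssMb)}MB near limit ${limitMb}MB, dumping to ${file}`);
    writeHeapSnapshot(file);
  }
}, 10_000);
```

Newer Node.js versions also ship a `--heapsnapshot-near-heap-limit=<count>` flag, though that only covers the V8 heap and would miss RSS growth from native/external memory.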

@rafaelcr rafaelcr moved this from Recent issues to Backlog in API Board Mar 9, 2023
@rafaelcr rafaelcr self-assigned this Feb 19, 2024
@zone117x (Member)

Is this still an issue? Did we fix it by switching to the slim Docker image and upgrading the Node.js version?

@CharlieC3 (Member, Author)

@zone117x Yeah, this is still an issue unfortunately. Changing the LB algorithm to least_request helped mask the symptoms, but OOMs are still occurring frequently, just less often than before.

Projects
Status: 📋 Backlog
Development

No branches or pull requests

3 participants