Streaming PATCH updates are not reflected in client store #76
Comments
Hi Brian, do you mind creating this as a support ticket? You can email your message to support@launchdarkly.com and it will automatically create a ticket. I will start investigating this right away, though. Best,
@justinucd Done. Let me know if I can help with any other info, or if you want to hop on a ScreenHero or Skype call.
This was an issue with how we were establishing connections in a forked process environment (e.g. when using Spring or Puma). We are now reestablishing connections in Puma's worker boot hook.
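For reference, a minimal sketch of that pattern, assuming the client lives on `Rails.configuration.features_client` (as described elsewhere in this issue) and the SDK key comes from a hypothetical `LAUNCHDARKLY_SDK_KEY` environment variable:

```ruby
# config/puma.rb
workers 2

# Re-create the LaunchDarkly client after each worker forks, so the streaming
# connection belongs to the child process rather than the pre-fork parent.
on_worker_boot do
  Rails.configuration.features_client =
    LaunchDarkly::LDClient.new(ENV["LAUNCHDARKLY_SDK_KEY"])
end
```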
@justinucd So handling things the way I mentioned above works fine to get the streaming updates to work, but now I'm getting a new issue. The code we're using to establish our connection calls a simple helper function that returns the configured client, which all seems to be working fine. The connection is being established, but for some reason it seems like Celluloid can't shut down its actors properly, and my console hangs for a bit before exiting.
How are you reestablishing the LD connection? Are you just setting your global to a new instance of LDClient?

I've tried that (creating a new LDClient instance). I can solve the problem by removing the initializer, but then the client won't be available in rake tasks or the console.
I can verify that this problem exists in other environments: in our case we've seen it with Pow.cx (Rack on Node.js), Passenger on Apache, and Resque, so I'm guessing the problem exists in forking environments generally. Eventually I solved the problem by installing after-fork hooks for the various frameworks. (Edited after posting.)
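A minimal sketch of that kind of setup for two of the frameworks mentioned, assuming a hypothetical `reset_launchdarkly_client` helper that builds a fresh `LaunchDarkly::LDClient` and assigns it to whatever global the application uses:

```ruby
# Passenger: rebuild the client whenever a new worker process is forked.
if defined?(PhusionPassenger)
  PhusionPassenger.on_event(:starting_worker_process) do |forked|
    reset_launchdarkly_client if forked
  end
end

# Resque: rebuild the client in each forked job worker.
Resque.after_fork = proc { |_job| reset_launchdarkly_client } if defined?(Resque)
```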
The downside of using an after-fork hook to recreate the client is that it adds a lot of launch time to the forked child on the first variation query. I tried solving this by keeping the feature store around so I could take the initialization hit once in the parent process, passing the store into the Config parameters when the client is re-created, but this doesn't work.
@Manfred -- is the cost of initializing the client in the forked child hitting your application's critical path, or is it only incurred at startup? I've been investigating related Celluloid issues regarding cleanly shutting down all running actors. Can someone running into these problems run their code with the following set globally?

```ruby
require "celluloid/current"
$CELLULOID_DEBUG = true
```
Currently we initialize the client on the first query, so it depends. Queries are done at the model level, so they can be triggered in a web process, in background tasks (Sidekiq and legacy Resque jobs), or in one-off scripts run either by hand or through cron. We didn't want to force a query on process boot because some processes never need to query at all.

The problem in our case has nothing to do with actors not shutting down properly, as far as I can tell. Sharing IO handles between forks is simply not possible, so the current architecture of the gem will never work very well in forked environments. You will have to create an LDClient instance for every forked process.

I think the best solution depends on the size of the application. For smaller applications handling just a few requests per minute, an easy setup similar to the current code makes more sense. For larger deployments it makes much more sense to have a single process take care of synchronization to a local cache and have all the web, background, and script processes read from that cache.
In your larger deployments, would something like a Redis-backed feature store solve your needs? Think of it as a Ruby corollary to the java-client RedisFeatureStore. Also, have you taken a look at Redis storage and daemon mode in ld-relay?
@dlau No, I haven't looked at other solutions because I was mostly working on upgrading the gem in the project. We haven't prioritized working on a more optimal integration with LD. Using a Redis-backed feature query interface with a separate update process was exactly what I was thinking about.
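As a rough illustration of that separate-update-process idea: another process (for example ld-relay in daemon mode) keeps flag data synchronized in Redis, and application processes only read from that cache. The key layout and flag fields below are illustrative assumptions and may not match ld-relay's real schema:

```ruby
require "json"
require "redis"

# Illustrative only: assumes some other process keeps a Redis hash of flag
# JSON up to date under the (assumed) key and field names used here.
redis = Redis.new(url: ENV.fetch("REDIS_URL", "redis://localhost:6379"))

raw = redis.hget("launchdarkly:features", "my-flag-key")
if raw
  flag = JSON.parse(raw)
  puts "version=#{flag['version']} on=#{flag['on']}"
else
  puts "flag not found in cache"
end
```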
Great, I will bring up Ruby client Redis support with the team. Regarding the previous issue with initializing LDClient with Spring and Puma, the documentation can be found here.

Closing this thread, as the contained issues are no longer relevant to the topic.
I'm running into the following scenario while upgrading to ldclient-rb 2.0.3 from ldclient-rb 0.7.0. The gem is used within a Rails 4.2.7.1 application on Puma.
The feature flags are working fine in our development environment when the application first loads, and it appears the correct values are returned initially. However, if a user goes to the LaunchDarkly web interface and toggles a flag, the update is not reflected within the Rails application until the Rails server is killed and restarted. This led me to believe we might have an issue handling the `PATCH` server-sent events passed back from LaunchDarkly when a flag is changed.

In `config/initializers/launch_darkly.rb`:
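A typical initializer of this kind looks roughly like the following (a sketch; the SDK key source is an assumption, and `features_client` is the attribute name referenced below):

```ruby
# config/initializers/launch_darkly.rb (illustrative)
Rails.configuration.features_client =
  LaunchDarkly::LDClient.new(ENV["LAUNCHDARKLY_SDK_KEY"])
```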
We then use this global in our code to check for enabled features. As is the default, the `LaunchDarkly::StreamProcessor` is configured as the update processor within the client. I put some logging into this class to see whether the `PATCH` SSEs are being received and handled, and it does appear that they are: the logged message correctly reflects a new version number for the flag and the proper `on` value.

However, any time the `LaunchDarkly::LDClient#variation` method is called on the global `Rails.configuration.features_client` for the flag, the variation code returns the initial version that was present when the LDClient was initialized on this line, which causes us to hit the condition here that returns the default flag value. It seems the update to the store is not being picked up by the client (even though accessing the store directly from the StreamProcessor logging statements above seems to indicate that the store was updated properly).

I'm continuing to dig in to see if I can identify the issue, but any pointers would be appreciated.
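For context, the kind of flag check being described looks roughly like this (the flag key and user attributes are made up):

```ruby
user = { key: "user-123", email: "user@example.com" }
enabled = Rails.configuration.features_client.variation("new-dashboard", user, false)
```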