Introduce cache for projection #32287
Conversation
We ought not to be affecting inference state when assembling candidates, so invoke `select` inside of a probe.
The projection cache is snapshottable. This is because it will store projections involving type variables and other region data that may well go stale. This is different from (e.g.) the selection cache, which uses freshened keys.
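(Aside, not from the PR: a minimal sketch of what "snapshottable" means here, using made-up names rather than the actual rustc data structures. The point is that entries recorded after a snapshot can be undone, which is what makes it safe to key the cache by unfreshened type variables that may later go stale. By contrast, a cache with freshened keys, like the selection cache, never needs an undo log.)

```rust
use std::collections::hash_map::Entry;
use std::collections::HashMap;
use std::hash::Hash;

/// Illustrative only: a cache whose insertions can be undone back to a
/// snapshot, so entries mentioning inference variables never outlive the
/// inference state that created them.
struct SnapshottableCache<K, V> {
    map: HashMap<K, V>,
    log: Vec<K>, // undo log: keys inserted since creation, in order
}

/// A snapshot is just a position in the undo log.
struct Snapshot(usize);

impl<K: Hash + Eq + Clone, V> SnapshottableCache<K, V> {
    fn new() -> Self {
        SnapshottableCache { map: HashMap::new(), log: Vec::new() }
    }

    /// Record a cached result. First insertion wins; a real cache would
    /// also need to handle overwrites in its undo log.
    fn insert(&mut self, key: K, value: V) {
        if let Entry::Vacant(e) = self.map.entry(key.clone()) {
            e.insert(value);
            self.log.push(key);
        }
    }

    fn get(&self, key: &K) -> Option<&V> {
        self.map.get(key)
    }

    fn snapshot(&self) -> Snapshot {
        Snapshot(self.log.len())
    }

    /// Discard every entry recorded since `snap`; used when the
    /// surrounding inference snapshot is rolled back and the cached
    /// entries would otherwise refer to stale type variables.
    fn rollback_to(&mut self, snap: Snapshot) {
        for key in self.log.drain(snap.0..) {
            self.map.remove(&key);
        }
    }
}

fn main() {
    let mut cache = SnapshottableCache::new();
    cache.insert("<?0 as Iterator>::Item", "u32");

    let snap = cache.snapshot();
    cache.insert("<?1 as Iterator>::Item", "String");
    assert!(cache.get(&"<?1 as Iterator>::Item").is_some());

    // Rolling back drops the entry keyed by the now-stale variable ?1
    // but keeps the one recorded before the snapshot.
    cache.rollback_to(snap);
    assert!(cache.get(&"<?1 as Iterator>::Item").is_none());
    assert!(cache.get(&"<?0 as Iterator>::Item").is_some());
}
```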
r? @jroesch (rust_highfive has picked a reviewer for you, use r? to override)
r? @arielb1
Oops, forgot that I had to rebase this over the specialization code. Let me fix that.
Once this is rebased, I'll give it a shot on my horrorcode.
Before we would ignore a candidate if it happened to be an impl with a default type. This change makes us never add the impl in the first place. This seems largely equivalent, though there might be some subtle difference in that -- before -- we would have failed to normalize if there was a "trait-def-candidate" contending with an (opaque) impl candidate. This corresponds I guess to a case like `<<A as Trait>::B as Trait2>::C`, and the definition of `Trait` contains a clause. Pretty obscure, but it seems like it's... ok to favor the trait definition in such a case.
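(Aside, not code from the PR: a small stand-alone example, with hypothetical names, of the shape being described. The definition of `Trait` carries a clause on its associated type `B`, and that clause alone is enough to normalize `<<A as Trait>::B as Trait2>::C`, without consulting any (possibly `default`, i.e. specializable) impl of `Trait2`.)

```rust
trait Trait2 {
    type C;
}

trait Trait {
    // The "clause" in the definition of `Trait`: every `B` is known to
    // implement `Trait2` with `C = u32`, so the nested projection below
    // can be normalized from the trait definition alone.
    type B: Trait2<C = u32>;
}

fn demand_u32(_: u32) {}

fn normalize_here<A: Trait>(c: <<A as Trait>::B as Trait2>::C) {
    // Accepting `c` as a `u32` only works because the clause on `Trait::B`
    // lets the compiler normalize the projection to `u32`.
    demand_u32(c);
}

// Concrete types just to exercise the generic function.
struct AssocB;
impl Trait2 for AssocB {
    type C = u32;
}

struct TypeA;
impl Trait for TypeA {
    type B = AssocB;
}

fn main() {
    normalize_here::<TypeA>(22);
}
```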
@asajeffrey I think it builds now, go for it
@nikomatsakis will do!
@asajeffrey give me a second to resolve the travis failures, which were legit
@nikomatsakis ok, let me know when I'm good to go
@asajeffrey ok fixed (afaik)
@nikomatsakis ok, starting the build...
Hmm, with this PR I now get a type error on my code :( (https://github.com/asajeffrey/wasm)
@asajeffrey cool, let me take a look.
ok, I think I see the problem, seems a bit obvious in retrospect -- for some reason I thought that I was handling higher-ranked things correctly, but I think I am ... just not. let me ponder best fix. (some of the longer-term improvements I had in mind would I think address this, maybe worth trying to do them now)
Closing for now while I ponder the best solution here :)
OK, good luck!
Two main changes:
- Introduce a cache for projection.
- Invoke `select` inside a probe.

Both of these changes arose from work I've been doing extending @soltanmm's branch to handle lazy normalization. The cache is still incomplete in my mind, but it is certainly handy: for example, it solves the explosion in compilation time for #31849.

Also, the refactoring to use `select` inside a probe does tweak the heuristics very slightly, in that candidates arising from the self type being an object type now have a lower precedence than where-clause candidates. I wouldn't expect this to matter in practice. If it did, we could pull the logic for detecting object types more into `project`, but I'd prefer not to.

I'll try to write up further thoughts for improving this cache (and lazy norm etc.) elsewhere; there are two main improvements I have in mind.
Fixes #31849.
cc @arielb1, @aturon, @asajeffrey
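(Closing aside, not part of the PR: for readers who have not seen the probe pattern referred to above, the sketch below uses toy types and made-up names, not rustc's actual inference context. It shows the idea of running selection speculatively inside a snapshot that is always rolled back, so candidate assembly cannot leak inference side effects into the surrounding state.)

```rust
/// Illustrative only: a toy stand-in for an inference context whose state
/// is a list of recorded constraints, with `probe` guaranteeing that
/// anything recorded inside the closure is rolled back on exit.
struct ToyInferCtxt {
    constraints: Vec<String>,
}

impl ToyInferCtxt {
    fn new() -> Self {
        ToyInferCtxt { constraints: Vec::new() }
    }

    /// Record an inference side effect (a stand-in for unification).
    fn record(&mut self, c: &str) {
        self.constraints.push(c.to_string());
    }

    /// Run `f` speculatively: its return value survives, but any
    /// constraints it recorded are undone. Running selection inside such
    /// a probe keeps candidate assembly from affecting inference state.
    fn probe<R>(&mut self, f: impl FnOnce(&mut Self) -> R) -> R {
        let snapshot = self.constraints.len();
        let result = f(self);
        self.constraints.truncate(snapshot);
        result
    }
}

fn main() {
    let mut infcx = ToyInferCtxt::new();
    infcx.record("?0 == i32");

    // "Select" a candidate inside a probe: we learn whether it would
    // succeed without committing to any of its side effects.
    let would_apply = infcx.probe(|cx| {
        cx.record("?1 == Vec<?0>"); // speculative unification
        true // pretend selection succeeded
    });

    assert!(would_apply);
    // Only the constraint recorded outside the probe remains.
    assert_eq!(infcx.constraints, vec!["?0 == i32".to_string()]);
}
```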