core/filtermaps: clone cached slices, fix tempRange #31680
Conversation
@@ -610,7 +610,9 @@ func (f *FilterMaps) storeFilterMapRowsOfGroup(batch ethdb.Batch, mapIndices []u
 	if uint32(len(mapIndices)) != f.baseRowGroupLength { // skip base rows read if all rows are replaced
 		var ok bool
 		baseRows, ok = f.baseRowsCache.Get(baseMapRowIndex)
-		if !ok {
+		if ok {
Should we also clone the slice in getFilterMapRow?
We do clone baseRow, and if required we then append extRow, which is always retrieved from the database, so it's guaranteed that the returned result does not share an underlying array with anything.
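The clone-then-append pattern described above can be sketched as follows. This is an illustrative simplification, not the actual geth code: FilterRow and getRow are hypothetical stand-ins for the filtermaps row type and getFilterMapRow.

```go
package main

import (
	"fmt"
	"slices"
)

// FilterRow stands in for filtermaps' row type (a slice of column indices).
// The names here are illustrative, not the actual go-ethereum types.
type FilterRow []uint32

// getRow clones the cached base row before appending the extension row,
// so the returned slice never shares a backing array with the cache entry.
func getRow(cachedBase, extFromDB FilterRow) FilterRow {
	row := slices.Clone(cachedBase) // copy: later appends cannot touch the cache
	return append(row, extFromDB...)
}

func main() {
	cached := FilterRow{1, 2, 3}
	ext := FilterRow{4, 5}
	got := getRow(cached, ext)
	got[0] = 99 // mutating the result must leave the cached row intact
	fmt.Println(cached, got)
}
```

Because extFromDB comes fresh from the database and the base is cloned, no caller can alias the cache's backing array through the returned slice.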
LGTM
The slice cloning, though definitely necessary for safety, did not solve the issue, so I added a lot of debug prints and finally traced the error conclusively back to its origin. When I add a new block and overwrite the last map with a newly rendered one, I temporarily use
I also changed
@@ -656,7 +659,7 @@ func (f *FilterMaps) mapRowIndex(mapIndex, rowIndex uint32) uint64 {
 // called from outside the indexerLoop goroutine.
 func (f *FilterMaps) getBlockLvPointer(blockNumber uint64) (uint64, error) {
 	if blockNumber >= f.indexedRange.blocks.AfterLast() && f.indexedRange.headIndexed {
-		return f.indexedRange.headDelimiter, nil
+		return f.indexedRange.headDelimiter + 1, nil
Should we error out if blockNumber > f.indexedRange.blocks.AfterLast()? Theoretically, the returned log index is for the next block, and it must be consecutive with the latest indexed one.
Since the search works asynchronously (the range might change at any time during the process), the underlying functions of the matcher backend do not fail when they are queried out of range; they just return results such that the search will not find any additional items (the head part of the results will be invalidated anyway in this case). getLogByLvIndex also silently returns with no result and no errors. getBlockLvPointer also just returns the beginning of the next future block, even if queried further in the future, so that the search will simply run until the end of the indexed range. The point is that for a long search, the majority of results will still be valuable, and the head part will be searched again anyway, so we just do something that lets the operation finish safely.
We could have a different mechanism, but this seems to work safely for now, and we have to do the fork release today, so I would not change it fundamentally right now.
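The lenient out-of-range behavior described above can be sketched roughly as follows. The indexedRange struct and blockLvPointer function here are hypothetical simplifications, not the actual FilterMaps types; the point is only that queries past the indexed head return headDelimiter+1 instead of an error.

```go
package main

import "fmt"

// indexedRange is a simplified stand-in for filtermaps' indexed-range state;
// the field names are illustrative, not the real go-ethereum fields.
type indexedRange struct {
	afterLastBlock uint64            // first block not yet indexed
	headIndexed    bool              // whether the head block is fully indexed
	headDelimiter  uint64            // log value index of the head delimiter
	pointers       map[uint64]uint64 // per-block log value pointers
}

// blockLvPointer mimics the lenient behavior: queries at or past the indexed
// head do not fail, they return the pointer of the next future block
// (headDelimiter+1), so an async search just runs out of results and stops.
func blockLvPointer(r indexedRange, blockNumber uint64) uint64 {
	if blockNumber >= r.afterLastBlock && r.headIndexed {
		return r.headDelimiter + 1
	}
	return r.pointers[blockNumber]
}

func main() {
	r := indexedRange{
		afterLastBlock: 100,
		headIndexed:    true,
		headDelimiter:  5000,
		pointers:       map[uint64]uint64{99: 4900},
	}
	fmt.Println(blockLvPointer(r, 99))      // in range: stored pointer
	fmt.Println(blockLvPointer(r, 100))     // next block after head
	fmt.Println(blockLvPointer(r, 1000000)) // far future: same lenient answer
}
```

A search querying far past the head gets the same answer as one querying the next block, so it terminates cleanly at the end of the indexed range rather than erroring mid-flight.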
@@ -536,6 +536,7 @@ func (r *mapRenderer) getTempRange() (filterMapsRange, error) {
 	} else {
 		tempRange.blocks.SetAfterLast(0)
 	}
+	tempRange.headIndexed = false
Is it the real root cause?
yes
This PR ensures that caching a slice, or a slice of slices, never affects the original version, by always cloning a slice fetched from the cache unless it is used in a guaranteed read-only way.
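The aliasing hazard the PR fixes can be shown in a minimal sketch. The toy cache and getCloned helper below are hypothetical stand-ins for baseRowsCache and its accessor: without the defensive clone, an append into a cached slice with spare capacity would write into the cache's own backing array.

```go
package main

import (
	"fmt"
	"slices"
)

// A toy cache of rows keyed by index; stands in for baseRowsCache.
// (Illustrative only, not the real go-ethereum cache type.)
var cache = map[uint64][]uint32{}

// getCloned returns a defensive copy of the cached slice, so callers can
// append to or modify the result without corrupting the cached version.
func getCloned(key uint64) ([]uint32, bool) {
	row, ok := cache[key]
	if !ok {
		return nil, false
	}
	return slices.Clone(row), true
}

func main() {
	// Spare capacity means a plain append would alias the cache's array.
	cache[0] = append(make([]uint32, 0, 8), 1, 2)

	row, _ := getCloned(0)
	row = append(row, 3) // safe: writes into the clone's array, not the cache's
	row[0] = 99          // also safe: the cache entry is untouched
	fmt.Println(cache[0], row)
}
```

Had getCloned returned the cached slice directly, the append and the write to row[0] would both have been visible through cache[0], which is exactly the kind of silent corruption this PR traces and fixes.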