Mismatch between image size and meta-data field of view #126
I'm guessing that the field of view is incorrect. Allied Vision does indeed list the sensor as a 4/3 with square 5.5 µm x 5.5 µm pixels.
@smarshall-bmr - can you provide us with the correct field of view? Priority is stereo-top.
You can see examples of the misalignments that result when the FOV is not accounted for in #265.
I'm dropping hyperspectral for a moment to do this; check back soon.
At 2m the field of view of each camera is 101.5cm x 74.9cm, or 28.4 degrees on the x axis and 21.2 degrees on the y axis, if my trig serves me correctly.
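A quick sketch to sanity-check the trig in the comment above (the variable names are illustrative, not from any pipeline code): the full view angle on each axis follows from the footprint at a 2 m camera height.

```python
import math

# Illustrative sanity check: at a camera height of 2 m, a footprint of
# 101.5 cm x 74.9 cm implies these full view angles per axis.
height_cm = 200.0
footprint_x_cm = 101.5
footprint_y_cm = 74.9

# Full angle = 2 * atan(half-footprint / height)
fov_x_deg = math.degrees(2 * math.atan(footprint_x_cm / 2 / height_cm))
fov_y_deg = math.degrees(2 * math.atan(footprint_y_cm / 2 / height_cm))

print(round(fov_x_deg, 1), round(fov_y_deg, 1))
```

This lands on roughly 28.5 and 21.2 degrees, consistent with the values quoted above.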
@pless @max-zilla @craig-willis @rachelshekar @dlebauer Just to prevent any panic about how different these values are from the ones reported in the metadata, I wanted to say that I roughly checked the FOV before programming the StereoVIS scans. The current scans are based on an estimated 100cm x 75cm FOV, so scans take images every 50cm on the y axis and the gantry moves 105cm on the x axis between scans, with a total of 3 scans per range. The StereoVIS cameras are 30cm apart, so a single measurement actually captures around 130cm on the x axis at 2m. This means each range should have around 270cm of 2x coverage and around 340cm of total coverage per scan on 350cm ranges. I certainly welcome changes to this scanning procedure if anyone has suggestions.
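The per-range coverage arithmetic described above can be sketched as follows (all values in cm; the names here are illustrative, not from the actual scan-control code):

```python
# Illustrative scan-coverage arithmetic from the discussion (values in cm).
fov_x = 100.0          # estimated per-camera FOV on the x axis
baseline = 30.0        # separation between the two StereoVIS cameras
step_x = 105.0         # gantry movement on x between scans
scans_per_range = 3

# A single stereo measurement spans one camera FOV plus the baseline.
footprint_x = fov_x + baseline                                 # 130 cm
# Three scans, each offset by step_x, give the total coverage per range.
total_coverage = footprint_x + (scans_per_range - 1) * step_x  # 340 cm

print(footprint_x, total_coverage)
```

This reproduces the 130cm single-measurement footprint and ~340cm total coverage quoted above.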
@smarshall-bmr thanks. @ZongyangLi I am going to run some new results with this FOV:
@smarshall-bmr @ZongyangLi @yanliu-chn I added those new FOV to the get_fov() method of bin_to_geotiff():
...but that is not sufficient:
...you can see the little half-crescent shadow that does not align. There are also alignment issues between right/left TIF: At this point I want to turn fixing this problem over to a more knowledgable party. I think it would be best to use these two datasets as a small sample and get the demosaic bin_to_geotiff.py script working properly with these before we proceed. Links to datasets in Clowder:
Who is best suited to do this? @yanliu-chn do you know of someone in CyberGIS with time to troubleshoot this? @ZongyangLi how comfortable are you with this aspect of the processing? In particular I wonder about the FOV offset portion of the code block I pasted above. It may not be as simple as fixing the FOV parameters, but this is a critical piece for the rest of the pipeline to get fixed.
@max-zilla Do you think we could use the initial georegistration as a starting point for a feature matching algorithm to do final alignment?
@max-zilla
This code is based on two assumptions:
But I still need to add the two magic numbers 'HEIGHT_MAGIC_NUMBER' and 'PREDICT_MAGIC_SLOPE' to my formula to make the stitching look reasonable for all of the stereoTop datasets.
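A hypothetical sketch of the kind of height-dependent FOV correction being described. The constant names come from the comment above, but their values here are placeholders, NOT the actual numbers used in the pipeline, and the scaling formula is an assumption for illustration only:

```python
# Placeholder values -- NOT the real calibration constants.
HEIGHT_MAGIC_NUMBER = 0.0   # hypothetical offset added to the camera height
PREDICT_MAGIC_SLOPE = 1.0   # hypothetical scale factor

FOV_AT_2M = (1.015, 0.749)  # measured FOV in metres at a 2 m camera height

def predict_fov(height_m):
    """Scale the measured 2 m FOV linearly with an adjusted camera height
    (assumed form: the true constants would be fit from the data)."""
    adjusted = (height_m + HEIGHT_MAGIC_NUMBER) * PREDICT_MAGIC_SLOPE
    return tuple(dim * adjusted / 2.0 for dim in FOV_AT_2M)
```

With these placeholder constants, `predict_fov(2.0)` simply returns the measured 2 m FOV; the point of the magic numbers is to correct the prediction at other gantry heights.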
@ZongyangLi can you list which datasets/images you're using to show those examples? |
@max-zilla @ZongyangLi |
The mismatches I've seen have all been from Feb 2017. |
@max-zilla @dlebauer With the code in 'full_day_to_tiles.py', I can get a well-aligned map for almost all of the stereoTop images. If that doesn't work for you, we can discuss with others to find a solution.
@ZongyangLi your comment made sense, I had implemented the code you posted on the extractor and ran it to get the latest images I posted. But perhaps there is other updated code I was missing. I will check your updated code... note that we are not using the directory you linked anymore! I created a new repo for this code several months ago: However I want to be clear that I am not the right person to decide if this is being done properly :) I will generate new results today but between @pless @dlebauer @yanliu-chn @robkooper we should make sure the right GIS folks are examining our results before saying it's good. Thanks for pushing on this. |
@ZongyangLi @pless these results are indeed looking better. 4:25 + 4:24: The script is currently running on 02-11: |
@max-zilla The magic number I tested comes from the left-side image, so it will be closer when you use the left-side image.
David suggests that offset be annotated so users know that we are aware of it. |
I think this has gotten to a good point. There remains a magic number in the process which I don't understand (but which must have to do with some coordinate system scaling), but Zongyang has derived a way to automatically reset this magic number as the gantry height changes. @rachelshekar, I think that the correct annotation should say something like: "This stitched image is computed based on an assumption that the scene is planar. There are likely to be small offsets near the boundary of two images anytime there are plants at the boundary (because those plants are higher than the ground plane), or where the dirt is slightly higher or lower than average."
Excellent. Will get this started up. I think for clarity I am going to create a new Level_1 output going forward:
...that will have the full field result. This will operate on another new Level_1 directory that will include the raw -> geoTiff temporary files:
So the pipeline will be:
@max-zilla I'd say create a directory in /gpfs/largeblockFS/scratch/terraref/stereoTop_geotiff (on the nodes there should be a symlink from /scratch to /gpfs/largeblockFS/scratch, so you can use that shortened path if you want). This way the ephemeral data doesn't live within the production data structure.
@ZongyangLi in your full_day_to_tiles.py, I have a question:
This looks like it stitches the GeoTIFFs into a large VRT, then chops the VRT into Google Maps-ready tiles. For our extractor, we basically just want to stop at the !!!!!!!! for now and write that VRT to the permanent full-field output, right? The Google tiles part we can do separately if we want.
@max-zilla I am not quite sure we can stop before 'Generate tiles from VRT'. The .vrt is a virtual TIFF, which is a way of storing all of the information for the overlapping TIFFs without actually creating the full stitched TIFF. Generating that full TIFF would result in an absurdly large file, so having this "virtual TIFF" is a really nice alternative, and one that the map tiler handles well. The final piece is that we then generate the map tiles from the VRT. If you have some other way of showing the stitched map using the information in the VRT, or if it would be better to do that separately when we want it, that is fine. Let me know if this does not make sense.
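The two-step process described above could be sketched as GDAL command lines built from Python (the file paths here are illustrative): `gdalbuildvrt` assembles the per-capture GeoTIFFs into one virtual mosaic, and `gdal2tiles.py` then renders map tiles from that VRT.

```python
# Illustrative inputs -- real runs would list the day's captured GeoTIFFs.
tif_list = ["capture_0001.tif", "capture_0002.tif"]

# Step 1: assemble the GeoTIFFs into a single virtual mosaic (VRT).
build_vrt_cmd = ["gdalbuildvrt", "full_day.vrt"] + tif_list

# Step 2: render web map tiles from the VRT.
make_tiles_cmd = ["gdal2tiles.py", "full_day.vrt", "tiles_output/"]

# e.g. subprocess.check_call(build_vrt_cmd) on a machine with GDAL installed
print(" ".join(build_vrt_cmd))
```

Stopping after step 1 and publishing only the VRT is exactly the question being debated here.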
@ZongyangLi the goal of this extractor is to generate an absurdly large TIFF, I think... can you load a VRT into GIS? Can you view it like a TIF file? The new plan is to generate the plot-level clips of the large VRT on demand, rather than generating them all automatically every time. If users are happy with a VRT, my thinking was to load the VRT into Clowder and let people use that. Is the VRT just a metadata file of sorts that points to all the component TIFF images? @jterstriep @yanliu-chn @dlebauer @robkooper do you have experience with VRTs for the use we want?
@max-zilla To my understanding, the VRT just contains the paths and geo-information for the component TIFF files; to view the large stitched map, you have to create tile files. For our new plan, to generate the plot-level clips, we could first use the metadata to collect all of the 'in the plot' TIFFs, then use those TIFFs to create a VRT and a plot-level stitched map.
Yes, we handle a lot of VRTs. VRT is an open-source geospatial software community "standard"; Esri has a similar format called a mosaic dataset. As Zongyang said, it is just an XML file that contains geo metadata such as projection, extent, and resolution, plus a list of file entries, each including attributes of a file included in the VRT, e.g., data type, spatial metadata, file path, and relative position in the VRT extent. It is a very convenient way to handle large files, or a large number of files.

It is important to note that if you publish a VRT as a data product, you have to make sure all consuming clients of the VRT know how to interpret the file paths. If you put an absolute path on ROGER as the file path, that path has to be accessible by all the clients, be it a VM mounting ROGER storage as an NFS mount point or desktop software. So I highly recommend that you use relative paths. There are two ways to do this: one is to directly put a path without the leading "/"; the other is to use the "relativeToVRT" option in the XML file entry. Either is fine. Of course, you then have to make sure that the relative path never changes; otherwise, the VRT needs to be updated accordingly.

Most GIS software understands VRT, as does ArcGIS; you can treat a VRT the same as a GeoTIFF in that regard. If files in a VRT have overlapping spatial extents, how the overlap is handled is undefined: the VRT itself doesn't handle it, but GDAL does, by picking the value for the overlapped area from one (either the first or the last) file. For areas where no file has content, most GIS software returns the nodata value.
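A minimal hand-built VRT fragment illustrating the "relativeToVRT" option discussed above (the raster sizes and file name are illustrative, and a real VRT produced by `gdalbuildvrt` would carry more metadata, such as a GeoTransform and source/destination windows):

```python
import xml.etree.ElementTree as ET

# Minimal illustrative VRT: one band, one source file, relative path.
vrt_xml = """<VRTDataset rasterXSize="1000" rasterYSize="1000">
  <VRTRasterBand dataType="Byte" band="1">
    <SimpleSource>
      <SourceFilename relativeToVRT="1">capture_0001.tif</SourceFilename>
      <SourceBand>1</SourceBand>
    </SimpleSource>
  </VRTRasterBand>
</VRTDataset>"""

# Since a VRT is plain XML, clients can inspect it with any XML parser.
root = ET.fromstring(vrt_xml)
src = root.find(".//SourceFilename")
print(src.get("relativeToVRT"), src.text)
```

With `relativeToVRT="1"`, the path is resolved relative to the VRT file's own location, which is what makes the mosaic portable across mounts and machines.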
One example of a VRT file: this is the VRT for the national elevation dataset at 10m resolution. We use absolute paths in it.
@czender @ZongyangLi @yanliu-chn @dlebauer @jterstriep
the VRT file is only 13 MB. I'm manually running
in that same directory on the VM to see what kind of size/look the output TIF has. but it's gonna be massive and take a while - after 45 mins this morning, it's at like 2%. |
I'm not sure how useful the big GeoTIFF is. Even if people are patient enough to download it, desktop software would have a hard time displaying it.
@yanliu-chn is there a simple arg to gdal_translate to downsample that might speed up generation and still give a useful "human-readable" idea of the data? The VRT still requires you to download 8,000 (!) geoTIFFs in this case or have visibility to them on Roger. |
@max-zilla yes, use the "-outsize" option to give it a percentage or a number of pixels for the output GeoTIFF, and select a good resampling method with "-r". I don't know which resampling method is better for such images; @czender or @ZongyangLi can decide. Nearest neighbor is the fastest, but we rarely use it because it is too simple.
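A sketch of the downsampled `gdal_translate` run being suggested, built as a command line from Python. The "-outsize" percentages and the choice of "average" resampling are illustrative assumptions, as are the file paths:

```python
# Illustrative gdal_translate invocation for a downsampled preview:
# "-outsize 10% 10%" shrinks both dimensions to 10% of the source,
# and "-r average" selects averaging resampling.
cmd = [
    "gdal_translate",
    "-of", "GTiff",
    "-outsize", "10%", "10%",
    "-r", "average",
    "full_day.vrt",          # input: the stitched virtual mosaic
    "full_day_preview.tif",  # output: a human-viewable preview GeoTIFF
]

# e.g. subprocess.check_call(cmd) on a machine with GDAL installed
print(" ".join(cmd))
```

A 10% preview cuts the pixel count by 100x, which should make both generation time and the output file far more manageable than the full-resolution TIFF.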
thanks. it's gone to 10% in the last half hour so I'm gonna wait and see if it finishes by the end of day today. |
This mismatch issue in the stereo cameras can be closed. Further discussion in #265.
Description
Field of view in metadata is not correct and is causing issues with subsetting.
This is the original issue as reported by @pless :
In the Stereo image data, the json file has two fields that may be inconsistent with each other:
in:
Context
This field of view estimate is used in the creation of the geoTIFFs and the overall field summaries. The version created for the June 2 event required "magic numbers" to fix the scale, and this may have been a partial cause of that.
Completion Criteria