Yes, you can calculate the transform from the coordinates of the image bounding box and the shape of the image.
There are only 4 non-constant parameters in the transform matrix (the rotation terms are zero for a north-up image). Two of them are the coordinates of the upper-left corner of the image, which you can get from the bounding box. The other two are the resolutions in the x and y directions. Those you can calculate by dividing the size of the bounding box by the number of pixels in each of the x and y dimensions of your image.
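A minimal sketch of that calculation in plain Python (the function name and example numbers are mine; the returned tuple follows the coefficient order of rasterio's Affine, so it should agree with what rasterio.transform.from_bounds produces for the same inputs):

```python
def transform_from_bbox(bbox, width, height):
    """Affine coefficients (a, b, c, d, e, f) for a north-up image:
        x = a * col + b * row + c
        y = d * col + e * row + f
    bbox is (min_x, min_y, max_x, max_y) in world coordinates;
    width/height are the image size in pixels.
    """
    min_x, min_y, max_x, max_y = bbox
    res_x = (max_x - min_x) / width    # pixel size in x
    res_y = (max_y - min_y) / height   # pixel size in y
    # b and d stay 0 (no rotation), so only four values vary:
    # the upper-left corner (c, f) and the resolutions (a, e).
    # e is negative because row indices grow downwards.
    return (res_x, 0.0, min_x, 0.0, -res_y, max_y)

# e.g. a 100 x 200 pixel image covering lon 10-11, lat 45-46:
print(transform_from_bbox((10.0, 45.0, 11.0, 46.0), 100, 200))
```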
Hi!
I have a similar question regarding transformations.
I am using a WcsRequest to get a numpy array for a study area defined by a BBox.
However, the study area inside the BBox has an irregular shape, and I want to mask out the pixels that fall outside the study area boundary.
What I’m doing is getting the data first and then generating a mask manually by rasterizing the polygon that is inside the BBox with rasterio, like this:
mask = rasterize(
    shapes=((g, 1) for g in aoi_boundary.geometry),
    out_shape=(nrow, ncol),
    transform=transform,
    fill=np.nan,
    all_touched=False,
)
nrow and ncol come from the downloaded array. I generate the transform like this:
rasterio.transform.from_bounds(*aoi_boundary.total_bounds, width=ncol, height=nrow)
Then I apply the mask to the downloaded array by multiplying both arrays.
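For reference, here is a tiny NumPy sketch of that multiplication step, with hand-made 2x2 arrays standing in for the downloaded image and the rasterized polygon (both values are hypothetical):

```python
import numpy as np

# Stand-in for the downloaded array (float, so NaN can mark masked pixels).
data = np.array([[1.0, 2.0],
                 [3.0, 4.0]])

# Stand-in for the rasterized mask: 1 inside the polygon, NaN outside
# (as produced by rasterize with fill=np.nan).
mask = np.array([[1.0, np.nan],
                 [1.0, 1.0]])

# Pixels outside the study area become NaN, pixels inside keep their value.
masked = data * mask
```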
However, I’m worried that the rasterization of the polygon done by rasterio might differ from the one done by Sentinel Hub when clipping the image to the requested BBox. This would mean the pixels are not well aligned, and I’m masking out the wrong pixels.
I’d really appreciate any ideas on how to improve this.
Thanks!
Roberto