Hi, could you please provide the error that is being thrown?
The error is:

TypeError: __init__() got an unexpected keyword argument 'params'
To my understanding, the whole ECCRegistrationTask function has changed and the old inputs no longer work. But I could not figure out the right inputs to pass to the function.
The library has been updated to a newer version. Could you please pass max_iter=500 directly and remove params?
coregister_ecc = ECCRegistrationTask((FeatureType.DATA, 'BANDS'), valid_mask_feature=(FeatureType.MASK, 'VALID_DATA'), channel=0, max_iter=500)
I tried this, but it is not working:
TypeError: __init__() missing 1 required positional argument: 'reference_feature'
To my understanding, it needs a reference image to co-register the others. I guess that the previous version automatically took the first image, while here you should indicate which one has to be used. The expected input here is
ValueError: Allowed feature types were set to be {<FeatureType.DATA_TIMELESS: 'data_timeless'>} but found FeatureType.DATA
but I am not familiar with this kind of data
Hi @paolo.filippucci!
Sorry for the inconvenience with the task changes. The explanation is that we started refactoring the library to be as modular as possible. We felt that this task did too many things at once, so we reduced its complexity, since everything necessary can be calculated beforehand using different tasks.
To my understanding, it needs the image of reference to co-register the others. I guess that the previous version automatically took the first image, while here you should indicate which one has to be used.
If I remember correctly, previously the process started at the end/beginning and consecutively registered each frame; that's why the time index wasn't required. The channel parameter determines which of the channels to use in case the feature has multiple of them (as in the case of RGB: 0=R, 1=G, 2=B).
You are now required to construct a reference feature which doesn't have the temporal dimension (that's what data_timeless means). This feature is used in the coregistration process, and all temporal slices are aligned to this image. The reference feature can be a specific slice, or even a temporal average image of the full time stack.
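To illustrate the shape change with a minimal NumPy sketch (toy data, independent of eo-learn): a DATA feature is a 4D array of shape (time, height, width, channels), while a DATA_TIMELESS reference drops the time axis.

```python
import numpy as np

# Hypothetical stack: 15 timestamps, 10x10 pixels, 3 channels
stack = np.random.rand(15, 10, 10, 3)

# Option 1: a specific slice as reference (e.g. the 8th timestamp, index 7)
ref_slice = stack[7]                  # shape (10, 10, 3)

# Option 2: a temporal average as reference
ref_mean = np.nanmean(stack, axis=0)  # shape (10, 10, 3)
```

Either way, the result has no temporal dimension, which is exactly what the DATA_TIMELESS feature type expects.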
If you want to go a step further, you can even calculate the gradients and do the coregistration on them, and then apply to all your other temporal features you want to coregister.
I hope this helps. Please don’t hesitate to ask additional questions if anything is unclear.
Regards,
Matic
Hi @matic,
First of all, thanks for the answer. The theory is pretty clear now. But can I ask you for an example of how to practically pass these inputs? I made some attempts, selecting just one day and using it as the reference, but I did not succeed…
Thanks in advance
Paolo
Hi @paolo.filippucci,
Can you show more explicitly what you tried? Feel free to paste the code in a code block.
Otherwise, I would suggest doing this by writing your own custom EOTasks, which enable you to create a reference feature.
Example 1: A specific date as a reference
Say you have an EOPatch with a BANDS feature of the FeatureType.DATA type, and that there are 15 timestamps in the eopatch, of which you would like to take the 8th timestamp as the reference. This means you have to create a FeatureType.DATA_TIMELESS feature in your eopatch, which is where the task below comes in: it maps an input feature to a new output feature.
```python
from eolearn.core import EOPatch, EOTask, FeatureType


class TemporalSliceTask(EOTask):
    def __init__(self, in_feature_name: str, out_feature_name: str, temporal_index: int):
        self.in_feature = (FeatureType.DATA, in_feature_name)
        self.out_feature = (FeatureType.DATA_TIMELESS, out_feature_name)
        self.temporal_index = temporal_index

    def execute(self, eopatch: EOPatch) -> EOPatch:
        # Select a single temporal slice and store it as a timeless feature
        eopatch[self.out_feature] = eopatch[self.in_feature][self.temporal_index]
        return eopatch
```
The task above accepts the input feature name, loads the data, selects the index, and then saves the output slice as a new timeless feature. You would use it like this:
```python
slice_task = TemporalSliceTask(in_feature_name="BANDS", out_feature_name="BANDS_SLICE", temporal_index=7)
eop = slice_task.execute(eop)
```
Example 2: Temporal mean as a reference
Similar to the above, but here the temporal mean of the input feature is calculated to serve as a common reference. You can extend the task to use a cloud mask so that cloudy areas are ignored in the aggregation.
```python
import numpy as np

from eolearn.core import EOPatch, EOTask, FeatureType


class TemporalMeanTask(EOTask):
    def __init__(self, in_feature_name: str, clm_feature_name: str, out_feature_name: str):
        self.in_feature = (FeatureType.DATA, in_feature_name)
        self.clm_feature = (FeatureType.MASK, clm_feature_name)
        self.out_feature = (FeatureType.DATA_TIMELESS, out_feature_name)

    def execute(self, eopatch: EOPatch) -> EOPatch:
        # Work on a float copy so the stored feature is not modified in place
        data = eopatch[self.in_feature].astype(np.float64)
        # The cloud mask has shape (t, h, w, 1); broadcast it over the channel axis
        mask = eopatch[self.clm_feature].astype(bool)
        data[np.broadcast_to(mask, data.shape)] = np.nan  # set cloudy areas to np.nan
        # Mean over the first (temporal) axis, ignoring NaNs
        eopatch[self.out_feature] = np.nanmean(data, axis=0)
        return eopatch
```
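The masked-mean logic can be sanity-checked with plain NumPy (a self-contained sketch with toy data, independent of eo-learn):

```python
import numpy as np

# Toy stack: 3 timestamps, 1x1 pixel, 1 channel, with values 1, 2, 3
data = np.array([1.0, 2.0, 3.0]).reshape(3, 1, 1, 1)

# Cloud mask flags the last timestamp as cloudy
mask = np.array([False, False, True]).reshape(3, 1, 1, 1)

data[mask] = np.nan

# nanmean over the temporal axis ignores the cloudy value: (1 + 2) / 2 = 1.5
reference = np.nanmean(data, axis=0)
print(reference[0, 0, 0])  # → 1.5
```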
Then, after calculating the reference feature, you can run the coregistration task and just provide the reference feature as a parameter.
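Putting the pieces together might look like this (a sketch only, assuming the parameter names implied by the errors above and a reference feature named BANDS_SLICE as in Example 1; please check the exact signature in your installed version):

```python
from eolearn.core import FeatureType
from eolearn.coregistration import ECCRegistrationTask

coregister_ecc = ECCRegistrationTask(
    (FeatureType.DATA, 'BANDS'),
    reference_feature=(FeatureType.DATA_TIMELESS, 'BANDS_SLICE'),
    valid_mask_feature=(FeatureType.MASK, 'VALID_DATA'),
    channel=0,
    max_iter=500,
)
```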
Let me know if this helps. If not, I'd kindly ask you to provide a code snippet so we can determine more quickly where the issue lies.
Cheers,
Matic