
I am trying to use Sentinel-2 L2A images downloaded from Sentinel Hub for a deep learning application on RGB images. However, each image has a different brightness level, which also depends on the area of interest I export. Is there a consistent way to retrieve RGB images that can be fed directly to the ML model?


Does the sampling of pixel colors vary depending on the area of interest (large area vs. small area, e.g. 10 km²)?

Hi,

Could you describe in a bit more detail what you’re trying to achieve there?

Sentinel-2 has band 4, band 3, and band 2 as its red, green, and blue bands (Sentinel-2 Bands). You can obtain a true color image by combining these three bands, either in reflectance or digital numbers, with or without harmonisation.

In this example we increase the brightness simply by multiplying by a factor of 2.5. However, you might want to consider training your model on the measurement values rather than on “visualisation values” that have been adjusted for brightness.
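
As a minimal sketch of that composite (my own illustration, not a Sentinel Hub evalscript: the function name `true_color` is hypothetical, and the bands are assumed to already be surface reflectance in [0, 1]):

```python
import numpy as np

def true_color(b04, b03, b02, gain=2.5):
    """Stack B04 (red), B03 (green), B02 (blue) reflectance arrays into
    an RGB image and brighten by `gain`, clipping back to [0, 1]."""
    rgb = np.stack([b04, b03, b02], axis=-1)
    return np.clip(gain * rgb, 0.0, 1.0)
```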


Thanks for the response. Of course, I get the raw (DN) values of the RGB bands and compose the RGB image; however, the image only looks like this after enhancing the brightness, otherwise it is too dark (see the figures, where I used two different approaches to enhance the brightness in figure 1 and figure 2). What I am interested in is how the bands are calibrated, given that the measurement values are often too low to train a model on. What approach would you suggest for consistent brightness in the RGB images, instead of arbitrary values like 2.5?

[Figure 1 and Figure 2: RGB images produced by the two brightness-enhancement approaches]


Hi,

An easy way to enhance images, especially to gain contrast in the dark pixels, is log(1 + DN) / log(65536). This image_runner Python notebook might also be helpful.
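
In code that would look something like the sketch below (my own illustration; it assumes 16-bit digital numbers, so log(1 + DN) peaks at log(65536) and the output lands in [0, 1]):

```python
import numpy as np

def log_normalise(dn):
    """Map 16-bit digital numbers to [0, 1] via log(1 + DN) / log(65536).
    The log stretches contrast at the dark end of the range."""
    dn = np.asarray(dn, dtype=np.float64)
    return np.log1p(dn) / np.log(65536.0)
```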

Meanwhile, we are in the process of writing a blog post about normalisation of S-2 bands, which will probably be useful as well, but it is still a work in progress. Feel free to follow our blog.


Thanks for the response! Is it possible to share the source of the equation? The image_runner notebook is for the Level-1C product, though. In any case, I think it would be better to download the L1C image and apply sen2cor to it myself.


It’s just a particular normalisation used to train a CNN for the Dynamic World dataset. Here’s the article and the GitHub repo with the source code.
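
For what it’s worth, here is a hypothetical sketch of how that kind of normalisation could be wired into a preprocessing step before training (my own illustration, not the Dynamic World authors’ code; see their repo for the actual pipeline):

```python
import numpy as np

def preprocess_batch(batch):
    """`batch`: array of shape (N, H, W, 3) with raw B04/B03/B02 digital
    numbers. Returns log-normalised values in [0, 1], so every image is
    on the same scale regardless of scene brightness or AOI."""
    batch = np.asarray(batch, dtype=np.float64)
    return np.log1p(batch) / np.log(65536.0)

# Example: normalise a batch before feeding it to the model, e.g.
# x = preprocess_batch(raw_rgb_batch)
# model.fit(x, labels)
```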

