
Hello, I am trying to access Planet data through the SH API. I created a collection and provided my own S3 bucket name, with the right permissions set on the bucket. When I create the order with that collection_id, I get an error saying the bucket has to be sh.tpdi.byoc.eu-central-1. Is there a way to use our own S3 bucket?
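
For reference, the order request looks roughly like this minimal sketch (the endpoint path, payload field names, and all IDs/keys shown are illustrative placeholders, not exact values):

```python
import requests

SH_BASE = "https://services.sentinel-hub.com/api/v1/dataimport"  # TPDI base URL (check current docs)
TOKEN = "<OAUTH_ACCESS_TOKEN>"           # placeholder: token from my SH OAuth client
COLLECTION_ID = "<MY_COLLECTION_ID>"     # placeholder: collection backed by my own S3 bucket

order = {
    "name": "planet-test-order",
    "collectionId": COLLECTION_ID,       # pointing at my own collection triggers the bucket error
    "input": {
        "provider": "PLANET",
        "planetApiKey": "<PLANET_API_KEY>",
        "bounds": {
            "geometry": {
                "type": "Polygon",
                "coordinates": [[[15.0, 46.0], [15.1, 46.0], [15.1, 46.1],
                                 [15.0, 46.1], [15.0, 46.0]]],
            }
        },
        "data": [{
            "itemType": "PSScene4Band",
            "productBundle": "analytic",
            "dataFilter": {"timeRange": {"from": "2020-06-01T00:00:00Z",
                                         "to": "2020-06-30T00:00:00Z"}},
        }],
    },
}

resp = requests.post(f"{SH_BASE}/orders", json=order,
                     headers={"Authorization": f"Bearer {TOKEN}"})
# Responds with an error saying the bucket must be sh.tpdi.byoc.eu-central-1
print(resp.status_code, resp.text)
```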

Thanks

Hi,

No, there is currently no way to use your own bucket for third-party data.

Best regards,


ok, thank you. Are there plans for it in the future?

Can we copy it manually/programmatically afterward to our own bucket? Also, I noticed that the order only delivered the TIFF. Is there a way to request the XML files with the processing parameters?


We will almost certainly offer this option in the future as well, but we cannot yet commit to when that will happen.

Support for metadata is also in the plan, and will come before the above.


If you create a bucket in the AWS eu-central-1 region and configure it similarly to what we define for batch processing, we can copy the data you have purchased to that bucket for your archive.
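
Roughly, that setup looks like the sketch below (a minimal boto3 sketch; the bucket name and the Sentinel Hub AWS account ID are placeholders, and the exact principal and actions should be taken from the batch processing documentation):

```python
import json
import boto3

BUCKET = "my-tpdi-archive-bucket"                # hypothetical bucket name
SH_ACCOUNT_ID = "<SENTINEL_HUB_AWS_ACCOUNT_ID>"  # placeholder: copy the real value from the batch docs

s3 = boto3.client("s3", region_name="eu-central-1")

# Create the bucket in eu-central-1, as required.
s3.create_bucket(
    Bucket=BUCKET,
    CreateBucketConfiguration={"LocationConstraint": "eu-central-1"},
)

# Grant Sentinel Hub read/write access, mirroring the batch processing bucket policy.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "SentinelHubAccess",
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::{SH_ACCOUNT_ID}:root"},
        "Action": [
            "s3:GetBucketLocation",
            "s3:ListBucket",
            "s3:GetObject",
            "s3:PutObject",
        ],
        "Resource": [
            f"arn:aws:s3:::{BUCKET}",
            f"arn:aws:s3:::{BUCKET}/*",
        ],
    }],
}
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```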


Thank you! One more question: the PS product offered right now is the analytic one (a radiometrically calibrated GeoTIFF product suitable for analytic applications), correct?


Yes, we use the “analytic” product. Our data science team uses it quite a bit in various (agriculture-related) machine learning exercises and finds it a valuable input.

