TRTH - direct copy from one AWS S3 bucket to another S3 bucket

@Christiaan Meihsl - I have seen your documentation on downloading directly from your AWS S3 bucket at https://developers.refinitiv.com/en/article-catalog/article/boost-tick-history-downloads-with-aws

I am wondering if it is possible to go one step further and issue a direct S3-to-S3 copy command. Such commands are normally much faster because the client does not actually download the data; it is copied by AWS on their back-end. See for example: https://docs.aws.amazon.com/AmazonS3/latest/dev/CopyingObjectUsingJava.html

Do you have any advice on how to authenticate the copy with AWS?

CC - @Raghavender.Bommoju

Tags: dss-rest-api, tick-history-rest-api
Accepted answer:

Answering my own question: As far as I can tell, this is not currently possible. DSS would have to provide a signed URL authorizing this request.


When downloading TRTH data from AWS, you get a pre-signed URL. The question then becomes how to copy an object from that pre-signed URL. I found a few answers on Stack Overflow.

Someone mentioned using a pre-signed PUT URL with the x-amz-copy-source header; a sketch of that idea follows.
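
To illustrate the idea only (this is not something the client can do with the URLs DSS returns today): the owner of the source object would have to pre-sign a PUT that includes the x-amz-copy-source header, roughly as sketched below with the AWS SDK for Java v1 and hypothetical bucket and key names.

    import java.net.URL;
    import java.util.Date;
    import com.amazonaws.HttpMethod;
    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.AmazonS3ClientBuilder;
    import com.amazonaws.services.s3.model.GeneratePresignedUrlRequest;

    // Hypothetical: only the owner of the source object (i.e. DSS) could sign
    // this, and the signing credentials would also need write access to the
    // destination bucket - which is why this does not work in practice here.
    static URL presignCopy(String destBucket, String destKey, String copySource) {
      AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
      GeneratePresignedUrlRequest request =
          new GeneratePresignedUrlRequest(destBucket, destKey)
              .withMethod(HttpMethod.PUT)
              .withExpiration(new Date(System.currentTimeMillis() + 3600_000L));
      // The header becomes part of the signature; the client would then have
      // to send exactly this header, e.g. "/source-bucket/object-key", on the PUT.
      request.putCustomRequestHeader("x-amz-copy-source", copySource);
      return s3.generatePresignedUrl(request);
    }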


Thanks - but I think the accepted answer (that it is not possible) is probably correct. I don't see how the signature can be reused when the request has to change from a download to a copy. The signature is a hash over the request parameters; any change would invalidate it. A sketch of what gets signed follows.
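
For background on why the signature breaks, here is an illustrative sketch of the Signature Version 4 canonical request that the signature is computed over (simplified; see the official AWS SigV4 documentation for the exact algorithm):

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;

    public final class SigV4Sketch {

      // Hex-encoded SHA-256, as used throughout SigV4
      static String hexSha256(String data) throws Exception {
        byte[] hash = MessageDigest.getInstance("SHA-256")
            .digest(data.getBytes(StandardCharsets.UTF_8));
        StringBuilder sb = new StringBuilder();
        for (byte b : hash) {
          sb.append(String.format("%02x", b));
        }
        return sb.toString();
      }

      // Changing the method (GET -> PUT), adding a header such as
      // x-amz-copy-source, or altering any query parameter changes this
      // string, so a pre-computed signature no longer matches.
      static String canonicalRequest(String method, String uri, String query,
          String canonicalHeaders, String signedHeaders, String payloadHash) {
        return method + "\n" + uri + "\n" + query + "\n"
            + canonicalHeaders + "\n" + signedHeaders + "\n" + payloadHash;
      }
    }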


Can someone please provide an update on the latest situation in this regard?

What is the best method for directly copying the VBD (Venue by Day) data files from the Refinitiv AWS S3 bucket to the client's AWS S3 bucket, without having to first download and then re-upload these files individually from the client machine? What are the best practices for this particular type of AWS S3 direct copy?

Please provide links to example code that accomplishes this.

Regards

@Christiaan Meihsl @Janik.Zikovsky @jirapongse.phuriphanvichai @veerapath.rungruengrayubkul


Hi Pankaj,


I have been able to do this partially - technically you are still downloading to the local machine, but by piping one InputStream into another you do it in a streaming way and avoid having to create a local file and then upload it. In this code snippet, awsURI is the signed URL returned by DSS:


    import java.io.InputStream;
    import java.net.URI;
    import java.net.URLConnection;
    import java.util.Date;
    import javax.xml.bind.DatatypeConverter;
    import org.apache.commons.codec.binary.Base64;
    import org.apache.commons.io.FilenameUtils;
    import org.apache.commons.lang3.StringUtils;
    import com.amazonaws.services.s3.model.ObjectMetadata;
    import com.amazonaws.services.s3.model.PutObjectRequest;
    import com.amazonaws.services.s3.transfer.Transfer.TransferState;
    import com.amazonaws.services.s3.transfer.Upload;

    // task, config, transferManager and LOGGER are fields of the surrounding class.
    TransferState streamToS3(URI awsURI, String destKey) throws Exception {
      URLConnection urlConnection = awsURI.toURL().openConnection();
      // In some cases (reposted files) the size reported by DSS is incorrect. Happened on May 11, 2020.
      // Need to rely on the HTTP Content-Length header instead
      long contentLength = urlConnection.getContentLengthLong();
      if (contentLength <= 0) {
        // But as a failure scenario, fall back to the DSS data.
        contentLength = task.getFileSize();
      }
      // Same for the MD5 sum
      String md5sum = urlConnection.getHeaderField("x-amz-meta-md5sum");
      if (StringUtils.isBlank(md5sum)) {
        md5sum = task.getContentMd5();
      }
      LOGGER.info("Starting download of {} with content length {}", awsURI.toURL(), contentLength);
      try (InputStream stream = urlConnection.getInputStream()) {
        String filename = FilenameUtils.getName(awsURI.toURL().getPath());
        ObjectMetadata metadata = new ObjectMetadata();
        metadata.setContentLength(contentLength);

        // Convert the hex-encoded MD5 to raw bytes, then base64-encode it,
        // which is the format setContentMD5 expects
        byte[] binaryMd5 = DatatypeConverter.parseHexBinary(md5sum);
        String base64md5 = Base64.encodeBase64String(binaryMd5);
        metadata.setContentMD5(base64md5);
        // Save a plain MD5 sum as well, because the ETag cannot be used with multi-part uploads.
        // Custom metadata keys have to start with "x-amz-meta-" to be accepted
        metadata.setHeader("x-amz-meta-" + VbdUtility.CHECKSUM_METADATA_NAME, task.getContentMd5());

        metadata.setLastModified(new Date());

        PutObjectRequest request = new PutObjectRequest(config.downloadBucket(), destKey, stream, metadata);
        request.setSdkClientExecutionTimeout(-1);  // disabled

        // Start the upload in the background, but then wait for completion
        Upload upload = transferManager.upload(request);
        upload.waitForCompletion();

        TransferState transferState = upload.getState();
        LOGGER.info("Transfer status of {} is {} and upload.isDone: {}", filename, transferState, upload.isDone());
        return transferState;
      }
    }
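
For completeness, transferManager above is assumed to be a long-lived field; a minimal construction with the v1 SDK might look like this (credentials come from the default chain):

    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.AmazonS3ClientBuilder;
    import com.amazonaws.services.s3.transfer.TransferManager;
    import com.amazonaws.services.s3.transfer.TransferManagerBuilder;

    // Assumed setup for the snippet above
    AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
    TransferManager transferManager = TransferManagerBuilder.standard()
        .withS3Client(s3)
        .build();

TransferManager switches to multi-part uploads for large files automatically, which is why the snippet stores its own MD5 metadata instead of relying on the ETag.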






Thanks, Janik, for sharing the solution you are using.

Are you able to verify the data integrity of the uploaded VBD files on your AWS account by comparing the MD5 checksums? Does this streaming method work reliably every time, or have you noticed incidents where the MD5 does not match and you have to re-upload the files?

And do Refinitiv staff have any alternative methods for achieving the same result? Would someone from support please comment?

If a direct upload to AWS S3 is not supported, a direct upload to some other cloud would also be helpful, for example to Google Cloud Platform (GCP) instead.

The main requirement is that the transfer happen directly from the Refinitiv AWS link to the client's cloud platform, without the need to first download and then re-upload these big files. A sketch of the same streaming pattern aimed at GCP follows.
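
For illustration, a minimal sketch of the streaming approach from the answer above aimed at Google Cloud Storage instead of S3, assuming the google-cloud-storage Java client and hypothetical bucket and object names (note the bytes still pass through the client machine):

    import java.io.InputStream;
    import java.net.URL;
    import com.google.cloud.storage.BlobId;
    import com.google.cloud.storage.BlobInfo;
    import com.google.cloud.storage.Storage;
    import com.google.cloud.storage.StorageOptions;

    // Hypothetical: pipe the DSS signed URL straight into a GCS object
    // without writing a local file
    static void streamToGcs(URL awsUrl, String gcsBucket, String objectName) throws Exception {
      Storage storage = StorageOptions.getDefaultInstance().getService();
      BlobInfo blobInfo = BlobInfo.newBuilder(BlobId.of(gcsBucket, objectName)).build();
      try (InputStream in = awsUrl.openStream()) {
        storage.createFrom(blobInfo, in);  // chunked upload
      }
    }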

Any help is appreciated.

Best Regards


Hi Pankaj,

Yes, the upload's MD5 is validated - the AWS S3 SDK does this for you because of the "metadata.setContentMD5(base64md5);" call; see the AWS S3 API docs for more details. I don't believe I've seen a case where the MD5 validation failed, but a retry mechanism is implemented, so I may not have noticed.
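
As an extra cross-check, one could also compare the checksum stored as user metadata against the DSS-reported MD5 after the upload completes; a minimal sketch, reusing the metadata key from the snippet above with hypothetical bucket and key names:

    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.model.ObjectMetadata;

    // Hypothetical post-upload check: compare the checksum stored as user
    // metadata (written by the snippet above) with the MD5 reported by DSS
    static void verifyChecksum(AmazonS3 s3, String bucket, String key, String expectedMd5) {
      ObjectMetadata meta = s3.getObjectMetadata(bucket, key);
      String storedMd5 = meta.getUserMetaDataOf(VbdUtility.CHECKSUM_METADATA_NAME);
      if (!expectedMd5.equalsIgnoreCase(storedMd5)) {
        throw new IllegalStateException("MD5 mismatch for " + key + ", re-upload needed");
      }
    }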

Sincerely,

Janik Zikovsky



Thank you so much Janik.
