I am currently using the TickHistoryRawExtractionRequest REST API Python example from the Refinitiv tutorial. I extracted the data via the REST API and was able to create a gzip file for one identifier. However, the data extracted comes to almost 1 billion, even though the QueryStartDate and QueryEndDate I provided cover just one day.
Is there any condition in the extraction request that would let me restrict the data? I do not need nanosecond-level data, only daily-level granularity.
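Even hourly or minute bars would be enough for my use case. The sketch below is the kind of summary-level request I was hoping exists; the @odata.type, ContentFieldNames and SummaryInterval values are my guess based on the Intraday Summaries report template, so please correct them if they are wrong:

requestBody = {
    "ExtractionRequest": {
        # My assumption: the Intraday Summaries report type instead of raw ticks.
        "@odata.type": "#DataScope.Select.Api.Extractions.ExtractionRequests.TickHistoryIntradaySummariesExtractionRequest",
        # Field names guessed from the report template; to be verified.
        "ContentFieldNames": ["Open", "High", "Low", "Last", "Volume"],
        "IdentifierList": {
            "@odata.type": "#DataScope.Select.Api.Extractions.ExtractionRequests.InstrumentIdentifierList",
            "InstrumentIdentifiers": [
                {"Identifier": "SIEGn.DE", "IdentifierType": "Ric"}
            ]
        },
        "Condition": {
            "MessageTimeStampIn": "GmtUtc",
            "ReportDateRangeType": "Range",
            "QueryStartDate": "2021-08-20T00:00:00.000Z",
            "QueryEndDate": "2021-08-20T23:10:00.000Z",
            "SummaryInterval": "OneHour",   # my assumption; please verify the allowed values
            "TimebarPersistence": True,     # my assumption; please verify
            "DisplaySourceRIC": True
        }
    }
}

If the raw report type cannot be summarised at all, pointing me to the right report type for daily bars would already help.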
I am not able to view the CSV file, and when I try to load this CSV into an Oracle database table it takes hours.
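For context, even just peeking at the file means streaming a few rows straight out of the gzip, along the lines of this sketch (the file name is only a placeholder for whatever the ExtractRaw download was saved as):

import gzip
import itertools

# Placeholder name for the gzip file downloaded from the raw extraction.
with gzip.open("SIEGn.DE_raw.csv.gz", "rt", newline="") as f:
    for line in itertools.islice(f, 20):  # print only the first 20 rows
        print(line.rstrip())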
Below is the extraction request part of my code. I am not sure whether I can restrict and reduce the unwanted raw data where the options are not valid, or whether there is a more performance-efficient way to load this data into an Oracle database table (I have sketched the batched insert I am considering after the request body).
requestBody = {
    "ExtractionRequest": {
        "@odata.type": "#DataScope.Select.Api.Extractions.ExtractionRequests.TickHistoryRawExtractionRequest",
        "IdentifierList": {
            "@odata.type": "#DataScope.Select.Api.Extractions.ExtractionRequests.InstrumentIdentifierList",
            "InstrumentIdentifiers": [
                {"Identifier": "SIEGn.DE", "IdentifierType": "Ric"}
            ]
        },
        "Condition": {
            "MessageTimeStampIn": "GmtUtc",
            "ReportDateRangeType": "Range",
            "QueryStartDate": "2021-08-20T00:00:00.000Z",
            "QueryEndDate": "2021-08-20T23:10:00.000Z",
            "Fids": "6,22,25,30,31,77,178,183,184,1021,3853,4465,6544,6554,7087,11872,12770,14265",
            "ExtractBy": "Ric",
            "SortBy": "SingleByRic",
            "DomainCode": "MarketPrice",
            "DisplaySourceRIC": "true",
            "FidListOperator": "OR"
        }
    }
}
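For the Oracle load, this is roughly the batched approach I am considering with cx_Oracle's executemany; the connection details, table name, column list and row indices are placeholders rather than my real schema. If SQL*Loader or external tables would clearly be faster for data of this size, I would appreciate pointers.

import csv
import gzip
import cx_Oracle  # the newer python-oracledb package exposes the same executemany API

BATCH_SIZE = 50_000

# Placeholder connection details and table definition, not my real schema.
conn = cx_Oracle.connect(user="scott", password="tiger", dsn="dbhost/orclpdb1")
cur = conn.cursor()
insert_sql = "INSERT INTO tick_raw (ric, msg_type, fid, fid_value) VALUES (:1, :2, :3, :4)"

with gzip.open("SIEGn.DE_raw.csv.gz", "rt", newline="") as f:
    reader = csv.reader(f)
    next(reader)  # skip the header row
    batch = []
    for row in reader:
        # Keep only the columns the target table needs; the indices here are
        # placeholders, since the real raw output has many more columns.
        batch.append((row[0], row[1], row[2], row[3]))
        if len(batch) >= BATCH_SIZE:
            cur.executemany(insert_sql, batch)
            conn.commit()
            batch.clear()
    if batch:
        cur.executemany(insert_sql, batch)
        conn.commit()

cur.close()
conn.close()

The idea is to avoid one INSERT round trip per row and instead bind BATCH_SIZE rows per executemany call, but I do not know if that is enough at this volume.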