
Datastream DSWS Python most efficient way to load data

Hi - newbie to the product.

I can successfully connect and download data. I want to download 12 fields for more than 2,000 RICs.

I have found that there is a limit of 50 RICs per request, and also an overall cap per request:

RICs x fields <= 100

So the best I can figure out is to chunk my list of RICs into groups of 8 and request the 12 data fields for each group (8 x 12 = 96 items per request).


Is there a better way of doing this?

thanks


def chunker(seq, size):
    # Yield successive slices of seq containing at most `size` elements.
    return (seq[pos:pos + size] for pos in range(0, len(seq), size))

# 8 RICs x 12 fields = 96 items, which stays under the 100-item limit.
for group in chunker(df_rics['r'].to_list(), 8):
    ds_data = ds.get_data(tickers=', '.join(group),
                          fields=['EPS1UP', 'EPS1DN', 'EPS2UP', 'EPS2DN',
                                  'DPS1UP', 'DPS1DN', 'DPS2UP', 'DPS2DN',
                                  'SAL1UP', 'SAL1DN', 'SAL2UP', 'SAL2DN'],
                          kind=0, start=end_date.strftime("%Y-%m-%d"))
    print(ds_data)
Tags: python, datastream-api, dsws-api, api-limits

1 Answer

Accepted

@john.lupton

The DSWS user stats and limits are explained in this document.

You may use the Bundle request instead.

The code looks like the following:

# Post each request first, then retrieve them all with one bundle call.
reqs = []
reqs.append(ds.post_user_request(tickers='VOD', fields=['VO', 'P'], start='2017-01-01', kind=0))
reqs.append(ds.post_user_request(tickers='U:BAC', fields=['P'], start='1975-01-01', end='0D', freq='Y'))
df = ds.get_bundle_data(bundleRequest=reqs)
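For the original use case (12 fields across 2,000+ RICs), a bundle can carry several of the 8-RIC chunks from the question in a single round trip. Below is a minimal sketch assuming the DatastreamDSWS package; the credentials and RICs are placeholders, and BUNDLE_SIZE (the number of sub-requests allowed per bundle) is an assumption, so check the limits document above for the actual figures. Each sub-request is presumably still bound by the 100-item rule, hence the groups of 8.

from datetime import date

import DatastreamDSWS as dsws

# Assumed setup: substitute real credentials and the full RIC list.
ds = dsws.Datastream(username='YOUR_USERNAME', password='YOUR_PASSWORD')
rics = ['VOD.L', 'BARC.L', 'HSBA.L']  # placeholder RICs
end_date = date.today()

fields = ['EPS1UP', 'EPS1DN', 'EPS2UP', 'EPS2DN', 'DPS1UP', 'DPS1DN',
          'DPS2UP', 'DPS2DN', 'SAL1UP', 'SAL1DN', 'SAL2UP', 'SAL2DN']

def chunker(seq, size):
    # Same helper as in the question.
    return (seq[pos:pos + size] for pos in range(0, len(seq), size))

# 8 RICs x 12 fields = 96 items keeps each sub-request under 100.
# BUNDLE_SIZE is an assumed cap on sub-requests per bundle; check the
# limits document for the actual value.
BUNDLE_SIZE = 20

groups = [','.join(g) for g in chunker(rics, 8)]
responses = []
for batch in chunker(groups, BUNDLE_SIZE):
    reqs = [ds.post_user_request(tickers=t, fields=fields, kind=0,
                                 start=end_date.strftime('%Y-%m-%d'))
            for t in batch]
    # One round trip retrieves every sub-request in the bundle.
    responses.append(ds.get_bundle_data(bundleRequest=reqs))

This reduces the number of HTTP round trips from one per 8-RIC group to one per bundle, which is the main efficiency gain over the plain get_data loop.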


@jirapongse.phuriphanvichai Thank you for taking the time to reply. I have implemented what you suggest here. If I understand correctly, each sub-request is still limited to 100 items? Thanks.
