Twitter Full Archive Search - Academic API

Problem Description

I am using sample code from Twitter itself to run a full-archive search through the Twitter Academic API. However, it seems to me that

json_response = connect_to_endpoint(bearer_token,'from:TwitterDev')

has to come before the while loop, because as the code stands pagination is never applied; inside the loop the call should instead be

json_response = connect_to_endpoint(bearer_token,'from:TwitterDev',next_token)

Am I reading this correctly?

count = 0
flag = True

# Replace with your own bearer token from your academic project in developer portal
bearer_token = "REPLACE_ME"

while flag:
    # Replace the count below with the number of Tweets you want to stop at. 
    # Note: running without the count check will result in getting more Tweets
    # that will count towards the Tweet cap
    if count >= 1000:
        break
    json_response = connect_to_endpoint(bearer_token,'from:TwitterDev')
    result_count = json_response['meta']['result_count']
    if 'next_token' in json_response['meta']:
        next_token = json_response['meta']['next_token']
        if result_count is not None and result_count > 0 and next_token is not None:
            for tweet in json_response['data']:
                # Replace with your path below
                f = open('/your/path/tweet_ids.csv','a')
                f.write(tweet['id'] + "\n")
            count += result_count
            print(count)
            json_response = connect_to_endpoint(bearer_token,next_token)
    else:
        flag = False
print("Total Tweet IDs saved: {}".format(count))

Solution

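The reading in the question appears correct: only the first request should be issued without a next_token, and every follow-up request must pass the next_token returned in the meta object of the previous response, otherwise the loop keeps re-fetching the first page. Below is a minimal sketch of the corrected loop. Instead of hoisting the first call out of the loop, it passes next_token=None on the first iteration, which is equivalent; it also opens the output file in a with block so it is closed properly, and pauses between pages because the full-archive endpoint is rate limited. Treat it as an illustration built on the connect_to_endpoint signature above, not a verified fix.

import time

count = 0
next_token = None

while count < 1000:
    json_response = connect_to_endpoint(bearer_token, 'from:TwitterDev', next_token)
    result_count = json_response['meta']['result_count']
    if result_count > 0:
        # Replace with your path below
        with open('/your/path/tweet_ids.csv', 'a') as f:
            for tweet in json_response['data']:
                f.write(tweet['id'] + "\n")
        count += result_count
        print(count)
    # Stop once the response carries no next_token: the last page was reached.
    next_token = json_response['meta'].get('next_token')
    if next_token is None:
        break
    # The full-archive search endpoint is rate limited (about 1 request/second),
    # so pause briefly before requesting the next page.
    time.sleep(1)

print("Total Tweet IDs saved: {}".format(count))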