To create the visualization, we first needed to collect the data. I noticed that there was an old copy of the Hacker News dataset available on BigQuery, but I needed an up-to-date copy, so I looked into the Hacker News Firebase API. The API allows you to get each item by ID. (Items may be stories, comments, etc.; it's the same API endpoint for all types of items.) You can start by retrieving the current max ID, then walk backwards from there.

There is no rate limit, so I created the following script, which generates a text file with 10MM lines containing all of the URIs to retrieve. (We will then feed this file into wget using xargs.) Note: 10MM items was ~5 years' worth of data.

Script to create the 10MM-line file of URIs to retrieve:
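A minimal sketch of such a script, assuming the standard Firebase endpoints (maxitem.json for the current max ID, item/<id>.json for individual items) and the hn-uri.txt filename used below:

    #!/bin/sh
    # Fetch the current maximum item ID from the HN Firebase API.
    MAX_ID=$(curl -s https://hacker-news.firebaseio.com/v0/maxitem.json)

    # Walk backwards from the max ID and write one item URI per line
    # (10MM items in total) to hn-uri.txt.
    awk -v max="$MAX_ID" 'BEGIN {
      for (id = max; id > max - 10000000 && id > 0; id--)
        printf "https://hacker-news.firebaseio.com/v0/item/%d.json\n", id
    }' > hn-uri.txt

At roughly 57 bytes per URI, 10MM lines is consistent with the ~560MB file size mentioned below.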
That script takes about 10 minutes to produce a file that's around 560MB in size. After the file is generated, you can feed it to wget using xargs to retrieve all of the URIs:

    cat hn-uri.txt | xargs -P 100 -n 100 wget --quiet

Wget will save the result of each GET request in a separate file, named after the last path component of the URI. Caution: that command took just over 30 hours to complete on my MacBook.

I found that it can be difficult to work with 10MM files in a single directory on my Mac (it also killed Finder a couple of times, and I had to disable Spotlight on the folder I was saving all the .json files to), so I will try to save you the trouble. Here are a couple of copies of the dataset I retrieved: