It would be nice to have a suppression list based on Google Place IDs, so that a crawl skips locations that are already known and returns only net new ones. This would be extremely helpful for reducing unneeded credit usage and would make ongoing list pulls more practical.
Each task is counted as a new scraping task, independent of the others, because we have to scrape all the data in order to give you the results.
To avoid duplicates, I would recommend using different categories or locations in each task.
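Until something like this is supported server-side, one rough workaround is to deduplicate on your end after each pull. Below is a minimal Python sketch, assuming each result is a dict carrying a `place_id` field (the field name and the local file layout are hypothetical and depend on the scraper's output format):

```python
import json
from pathlib import Path

# Hypothetical local store for previously seen Google Place IDs.
SUPPRESSION_FILE = Path("known_place_ids.json")

def load_known_ids() -> set[str]:
    """Load previously seen Place IDs from disk, or an empty set on first run."""
    if SUPPRESSION_FILE.exists():
        return set(json.loads(SUPPRESSION_FILE.read_text()))
    return set()

def save_known_ids(ids: set[str]) -> None:
    """Persist the updated suppression list."""
    SUPPRESSION_FILE.write_text(json.dumps(sorted(ids)))

def filter_net_new(results: list[dict]) -> list[dict]:
    """Keep only results whose Place ID has not been seen before,
    then add the new IDs to the suppression list."""
    known = load_known_ids()
    net_new = [r for r in results if r.get("place_id") not in known]
    known.update(r["place_id"] for r in net_new if "place_id" in r)
    save_known_ids(known)
    return net_new
```

Note this only cleans up the output on your side; since each task still scrapes everything, it won't reduce credit usage the way a server-side suppression list would.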