So you want to scrape data from some web page on a regular basis, huh?
Here are some ways you could do it:
- A pure Google-Sheets-based approach, which can be set to run hourly, daily, or on other schedules: https://www.computerworld.com/article/3684733/how-to-create-automatically-updating-google-sheet.html
- A pure GitHub-based approach (using scheduled GitHub Actions, a.k.a. "git scraping"; see the sketch after this list): https://simonwillison.net/2020/Oct/9/git-scraping/
- Flat Data - an extension of Simon Willison's git-scraping idea that adds a GUI for viewing the captured data (as long as it's stored in a public GitHub repo)
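
To make the git-scraping option concrete, here's a minimal sketch of the fetch script such a setup might run. The URL and output filename are placeholders, not anything from the linked posts; the scheduling (a cron trigger) and the commit-if-changed step would live in the GitHub Actions workflow itself, as described in Simon Willison's write-up.

```python
# fetch.py -- the "scrape" half of a git-scraping setup.
# A scheduled GitHub Actions workflow would run this script and then
# commit the output file whenever its contents change.
# URL and OUT_FILE below are hypothetical placeholders -- swap in your own.
import json
import urllib.request

URL = "https://example.com/data.json"  # endpoint you want to track over time
OUT_FILE = "data.json"                 # file the workflow commits to the repo


def main() -> None:
    with urllib.request.urlopen(URL) as resp:
        data = json.load(resp)
    # Pretty-print with sorted keys so the diff between runs stays readable.
    with open(OUT_FILE, "w") as f:
        json.dump(data, f, indent=2, sort_keys=True)


if __name__ == "__main__":
    main()
```

The value of the approach is that the repo's commit history becomes the dataset: every scheduled run that finds a change produces a diff, so you get versioned snapshots for free without running any infrastructure of your own.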