How to build a web crawler to extract data from ZipSurvey and run it through an R Shiny App?

I have already built the Shiny app, although it still has some problems. I am completely new to web crawlers. We need a crawler to pull CSV files from a survey we host on ZipSurvey (https://www.zipsurvey.com/) and to run each data set through our Shiny app automatically. I am genuinely not sure where to begin, so any pointers on getting started would be greatly appreciated. If it helps, the script below is our app.

library(shiny)

shinyApp(
  ui = fluidPage(
    fileInput("file1", "Choose CSV File",
              multiple = TRUE,
              accept = c("text/csv",
                         "text/comma-separated-values,text/plain",
                         ".csv")),
    downloadButton("report", "Generate report")
  ),
  server = function(input, output) {
    output$report <- downloadHandler(
      # Name the browser gives the downloaded file (Mother_script.Rmd must
      # be set up to produce PDF output for this to match)
      filename = "report.pdf",
      content = function(file) {
        # Copy the report file to a temporary directory before processing it, in
        # case we don't have write permissions to the current working dir (which
        # can happen when deployed).
        tempReport <- file.path(tempdir(), "Mother_script.Rmd")
        file.copy("Mother_script.Rmd", tempReport, overwrite = TRUE)
        
        # Pass the upload metadata to the Rmd document. input$file1 is a
        # data frame with one row per uploaded file and columns name, size,
        # type, and datapath (the temp-file paths the CSVs were saved to).
        params <- list(n = input$file1)
        
        # Knit the document, passing in the `params` list, and eval it in a
        # child of the global environment (this isolates the code in the document
        # from the code in this app).
        rmarkdown::render(tempReport, output_file = file, 
                          params = params, 
                          envir = new.env(parent = globalenv())
        )
      }
    )
  }
)
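
As a starting point, note that you may not need a true crawler at all. If ZipSurvey can expose a direct CSV export link for the survey, a plain R script can download the file and render the same Mother_script.Rmd the app uses, bypassing the upload UI entirely. The sketch below assumes such a link exists; the URL, the file name, and the params shape built to mimic input$file1 are illustrative, not ZipSurvey's actual API.

library(httr)       # simple HTTP download
library(rmarkdown)  # render the same Mother_script.Rmd the app uses

# PLACEHOLDER: this is not a real ZipSurvey endpoint. Look in your ZipSurvey
# account for the survey's CSV export/report link (and whatever login or
# token it requires) and substitute it here.
export_url <- "https://www.zipsurvey.com/your-survey-csv-export"

# Download the CSV to a temp file, failing loudly on an HTTP error.
csv_path <- file.path(tempdir(), "survey_results.csv")
resp <- GET(export_url, write_disk(csv_path, overwrite = TRUE))
stop_for_status(resp)

# Mimic the shape of input$file1 (a data frame with name/datapath columns),
# since that is what the app hands to the Rmd as params$n. Mother_script.Rmd
# needs `n` declared under `params:` in its YAML header for this to work.
params <- list(n = data.frame(name = "survey_results.csv",
                              datapath = csv_path,
                              stringsAsFactors = FALSE))

render("Mother_script.Rmd",
       output_file = "report.pdf",
       params = params,
       envir = new.env(parent = globalenv()))

Run on a schedule (cron, Windows Task Scheduler, or the cronR/taskscheduleR packages), this gives you the automatic part without a Shiny UI in the loop. A genuine crawler, e.g. logging in and navigating pages with rvest's session() and html_form(), is only worth the effort if ZipSurvey exposes no stable export URL.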