@mwestphal Having it on two sites will be a pain. At present, in the old setup, when scraping the repository, the scripts generate the whole static site in the `site` folder, and GitHub automatically picks up the site if it is in the `site` folder of `master`, so there is no need for a `gh_pages` branch. You do need to set the Pages source to `master` on the site - that's the only trick. So I guess the scrape-repo script could just push the `site` folder to `master` on GitHub. We would then set `.gitignore` to ignore the `src` folder and force a push to the GitHub site (see the sketch below). Rewriting the code that sets links for two different sites will be tricky but possible.
I would think having it all on GitLab would be much simpler and easier to maintain.
What do others think, as I am hardly an expert?