Are you reading the right manual? If you are not sure, you are probably not!
Ensure that the packages have built and are on the r-universe; this might take a little while (it rebuilds once an hour). Check on the builds page to see if it has updated.
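One way to check from R (a sketch; not part of the documented workflow) is to query the repository directly:

```r
# Query the mrc-ide r-universe repository and report the versions it
# currently serves; compare these against what you expect to have built.
pkgs <- available.packages(repos = "https://mrc-ide.r-universe.dev")
pkgs[c("hipercow", "hipercow.windows"), "Version"]
```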
Install your own copy of hipercow as usual (you'll need `hipercow.windows` and `conan2`, but these will be installed on use if you don't have them already):

```r
install.packages(
  c("hipercow", "hipercow.windows"),
  repos = c("https://mrc-ide.r-universe.dev", "https://cloud.r-project.org"))
```
Set your working directory to anywhere on a network share (home drives are fine for this).
Trigger building the bootstrap with:
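The command itself is not reproduced here; presumably it is the non-development form of the `bootstrap_update()` call used later in this document:

```r
# Assumed from the development example below; builds the standard
# (non-development) bootstrap libraries.
hipercow.windows:::bootstrap_update()
```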
This will update all versions on the cluster that are at most one minor version older than the most recent. It does have the side effect of changing the `windows` configuration, wherever you run the command, to the most recent supported R version.
If you want to test a copy of hipercow on the cluster, you need to install a specific version of it somewhere it will be picked up. The simplest way of doing this is to install everything into a new bootstrap environment.
Create a new development bootstrap library by running (as above, from a network location):
```r
hipercow::hipercow_init(".")
hipercow::hipercow_configure("windows", r_version = "4.3.0")
hipercow.windows:::bootstrap_update(development = "mrc-4827")
```
Now you can use your new version, after setting the option `hipercow.development = TRUE`:

```r
library(hipercow)
hipercow_init(".", "windows", r_version = "4.3.0") # needs to match above
options(hipercow.development = TRUE)
id <- task_create_expr(sessionInfo())
task_status(id)
task_log_show(id)
task_status(id)
task_result(id)
```
It is assumed that you have the development version installed locally yourself, which is likely the case.
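If you do not have it yet, one option (an assumption; any route to a local development install is fine) is to install straight from GitHub:

```r
# Hypothetical convenience; assumes the sources live at mrc-ide/hipercow.
remotes::install_github("mrc-ide/hipercow")
```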
These need to be run on a network share, so set an environment variable in your `.Renviron` indicating where we should work; we'll make lots of directories here:-

```
HIPERCOW_VIGNETTE_ROOT=/path/to/share
```

or on Windows, a mapped drive, such as:

```
HIPERCOW_VIGNETTE_ROOT=Q:/hipercow_vignettes
```
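A quick sanity check (a sketch, not part of the documented workflow) that the variable is set and points somewhere that exists:

```r
# Fails early if HIPERCOW_VIGNETTE_ROOT is unset or the path is missing.
root <- Sys.getenv("HIPERCOW_VIGNETTE_ROOT")
stopifnot(nzchar(root), dir.exists(root))
```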
Each vignette can be built by running (ideally in a fresh session with the working directory as the package root):

```r
hipercow.windows:::windows_check_credentials()
knitr::knit("vignettes_src/windows.Rmd", "vignettes/windows.Rmd")
knitr::knit("vignettes_src/packages.Rmd", "vignettes/packages.Rmd")
knitr::knit("vignettes_src/workers.Rmd", "vignettes/workers.Rmd")
knitr::knit("vignettes_src/stan.Rmd", "vignettes/stan.Rmd")
```
Each of these generates a new `.Rmd` file within `vignettes/` that contains no runnable code and so can be safely run on CI. You may want to run `options(hipercow.development = TRUE)` first to build using the development versions (see above).
The remaining vignettes use the example driver, so they can be run independently of any actual cluster. On the command line, `make vignettes` will run through all vignettes, but this seems to only work on some versions of make.
We have batch files specific to each version of R that we support, which:-

- set the path so that `Rscript.exe` for that R version is found
- set `RTOOLS43_HOME` and `BINPREF`
- set `JAVA_HOME`, which the `rJava` package looks up.

These batch files live in `\\projects.dide.ic.ac.uk\software\hpc\R`, and are called things like `setr64_4_3_2.bat`, which looks like this:
```
@echo off
IF EXIST I:\rtools (set RTOOLS_DRIVE=I) else (set RTOOLS_DRIVE=T)
FOR /f %%V in (%RTOOLS_DRIVE%:\Java\latest.txt) DO set JAVA_HOME=%RTOOLS_DRIVE%:\Java\%%V
set RTOOLS43_HOME=%RTOOLS_DRIVE%:\Rtools\Rtools43
set BINPREF=%RTOOLS_DRIVE%:/Rtools/Rtools43/x86_64-w64-mingw32.static.posix/bin/
set path=%RTOOLS_DRIVE%:\Rtools\Rtools43\usr\bin;%RTOOLS_DRIVE%:\Rtools\Rtools43\x86_64-w64-mingw32.static.posix\bin;C:\Program Files\R\R-4.3.2\bin\x64;%path%
echo Using RTOOLS43_HOME = %RTOOLS43_HOME%
echo Using JAVA_HOME = %JAVA_HOME%
```
These get copied to `C:\Windows` on each cluster node, using the HPC Cluster Manager. You simply select the nodes, right click, "Run Command", and look in the history for previous copy commands, to copy the batch files into `%SystemRoot%` - hence they are always in the path on every node. You also need to edit the permissions to allow everyone to read the file, by running this on each cluster node:-

```
cacls %SYSTEMROOT%\setr64_*.bat /e /p everyone:r
```
This also needs doing (manually) on the headnode, which runs the BuildQueue.
In `\\projects\software\hpc\R`:-

- Copy an existing `install_r_....bat` file, to a name matching the new R version. Edit it; usually just changing the `R_VERSION` near the top is enough.
- Copy an existing `setr64_...bat` file, to a name matching the new R version. Usually just the path is enough, but for major versions, we'll need to consult the documentation and see if we need a new RTools version too.
- Run the `install` batch file on all cluster nodes.
- On `fi--didex1`, edit `C:\xampp\htdocs\mrcdata\hpc\api\v1\cluster_software\cluster_software.json` to add the new version, and perhaps remove retired ones.

This varies each time, and especially between different major R versions. But essentially:-
- Install the new RTools into `I:\rtools`.
- Update the `setr64_...bat` files; review these for the versions that need the new RTools.
- Install the new Java into `I:\Java`, and then rename the folder it makes to a tidier version number, such as `21.0.2`.
- Update `I:\Java\latest.txt` to contain that new version string, so that `rJava` and `xlsx` will work happily.

See `vignette("stan")` for general comments about stan.
To update the installation of `CmdStan` on the `I:/` drive (and generally available), run:

Users should not run this into their own home directories, as the installation is about 1GB.
There are influential environment variables set at the driver level which prevent stan from breaking our Rtools installation (`CMDSTAN` and `CMDSTANR_USE_RTOOLS`).
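For illustration only, a hedged sketch of setting these by hand (the values are assumptions; in practice the driver sets them for cluster jobs):

```r
# Hypothetical values; the real ones are configured by the windows driver.
Sys.setenv(
  CMDSTAN = "I:/cmdstan",         # assumed shared CmdStan installation path
  CMDSTANR_USE_RTOOLS = "FALSE"   # stop cmdstanr touching the Rtools install
)
```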
For details see: