Parallel computing on a cluster can be more challenging than running things locally because it’s often the first time that you need to package up code to run elsewhere, and when things go wrong it’s more difficult to get information on why things failed.
Much of the difficulty of getting things running involves working out what your code depends on, and getting that installed in the right place on a computer that you can’t physically poke at. The next set of problems is dealing with the ballooning set of files that end up being created - templates, scripts, output files, etc.
The hipercow package aims to remove some of this pain: running a task on the cluster should be (almost) as straightforward as running things locally, at least once some basic setup is done.
At the moment, this document assumes that we will be using the “Windows” cluster, which implies the existence of some future non-Windows cluster. Stay tuned.
This manual is structured in escalating complexity, following the chain of things that a hypothetical user might encounter as they move from their first steps on the cluster through to running enormous batches of tasks.
Install the required packages from our “r-universe”. Be sure to run this in a fresh session.
install.packages(
  "hipercow",
  repos = c("https://mrc-ide.r-universe.dev", "https://cloud.r-project.org"))
Once installed you can load the package with library(hipercow), or use the package by prefixing the calls below with hipercow::, as you prefer.
Follow any platform-specific instructions in the relevant vignette; this will depend on the cluster you intend to use:

- vignette("windows")
We need a concept of a “root”; the point in the filesystem that we can think of everything as relative to. This will feel familiar to you if you have used git or orderly, both of which have a root (and this root will be a fine place to put your cluster work). Typically all paths will be within this root directory, and paths above it, or absolute paths in general, effectively cease to exist. If your project works this way then it’s easy to move around, which is exactly what we need to do in order to run it on the cluster.
If you are using RStudio, then we strongly recommend using an RStudio project.
Run hipercow_init():

hipercow_init()
#> ✔ Initialised hipercow at '.' (/tmp/Rtmpehsscb/hv-20241209-12d862aa7ac3)
#> ℹ Next, call 'hipercow_configure()'

This will write things to a new path hipercow/ within your working directory.
After initialisation you will typically want to configure a “driver”, which controls how tasks are sent to clusters. At the moment the only option is the Windows cluster, so for practical work you would write:
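hipercow_configure(driver = "windows")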
however, for this vignette we will use a special “example” driver which simulates what the cluster will do (don’t use this for anything yourself, it really won’t help):
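hipercow_configure(driver = "example")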
You can run initialisation and configuration in one step by running:
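hipercow_init(driver = "example")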
After initialisation and configuration you can see the computed configuration by running hipercow_configuration():
hipercow_configuration()
#>
#> ── hipercow root at /tmp/Rtmpehsscb/hv-20241209-12d862aa7ac3 ───────────────────
#> ✔ Working directory '.' within root
#> ℹ R version 4.4.2 on Linux (root@4383a53695d9)
#>
#> ── Packages ──
#>
#> ℹ This is hipercow 1.0.52
#> ℹ Installed: conan2 (1.9.101), logwatch (0.1.1), rrq (0.7.22)
#> ✖ hipercow.windows is not installed
#>
#> ── Environments ──
#>
#> ── default
#> • packages: (none)
#> • sources: (none)
#> • globals: (none)
#>
#> ── empty
#> • packages: (none)
#> • sources: (none)
#> • globals: (none)
#>
#> ── Drivers ──
#>
#> ✔ 1 driver configured ('example')
#>
#> ── example
#> (unconfigurable)
Here you can see versions of important packages, information about where you are working, and information about how you intend to interact with the cluster. See vignette("windows") for example output you might expect on the Windows cluster, which includes information about the mapping of your paths onto those of the cluster, the version of R you will use, and other details.

If you have issues with hipercow we will always want to see the output of hipercow_configuration().
The first time you use the tools (ever, in a while, or on a new machine) we recommend sending off a tiny task to make sure that everything is working as expected:
id <- task_create_expr(sessionInfo())
#> ✔ Submitted task '096f4b503fdb8d7dfeed9655f0c02391' using 'example'
This creates a new task that will run the expression sessionInfo() on the cluster. The task_create_expr() function works by so-called “non-standard evaluation”: the expression is not evaluated in your R session, but sent to run on another machine.

The id returned is just an ugly hex string:
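id
#> [1] "096f4b503fdb8d7dfeed9655f0c02391"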
Many other functions accept this id as an argument. You can get the status of the task, which will have finished by now because it really does not take very long:
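task_status(id)
#> [1] "success"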
Once the task has completed you can inspect the result:
task_result(id)
#> R version 4.4.2 (2024-10-31)
#> Platform: x86_64-pc-linux-gnu
#> Running under: Ubuntu 24.04.1 LTS
#>
#> Matrix products: default
#> BLAS: /usr/lib/x86_64-linux-gnu/openblas-pthread/libblas.so.3
#> LAPACK: /usr/lib/x86_64-linux-gnu/openblas-pthread/libopenblasp-r0.3.26.so; LAPACK version 3.12.0
#>
#> locale:
#> [1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C
#> [3] LC_TIME=en_US.UTF-8 LC_COLLATE=C
#> [5] LC_MONETARY=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8
#> [7] LC_PAPER=en_US.UTF-8 LC_NAME=C
#> [9] LC_ADDRESS=C LC_TELEPHONE=C
#> [11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C
#>
#> time zone: Etc/UTC
#> tzcode source: system (glibc)
#>
#> attached base packages:
#> [1] stats graphics grDevices utils datasets methods base
#>
#> loaded via a namespace (and not attached):
#> [1] compiler_4.4.2 cli_3.6.3 withr_3.0.2 hipercow_1.0.52
#> [5] rlang_1.1.4
Because we are using the “example” driver here, this is the same as the result you’d get running sessionInfo() directly, just with more steps. See vignette("windows") for an example that runs on Windows.
It’s unlikely that the code you want to run on the cluster is one of the functions built into R itself; more likely you have written a simulation or similar and you want to run that instead. In order to do this, we need to tell the cluster where to find your code. There are two broad places where the code you want to run is likely to be found: script files and packages. We start with the former here, and deal with packages in much more detail in vignette("packages").
Suppose you have a file simulation.R containing some simulation:
random_walk <- function(x, n_steps) {
  # Gaussian random walk: each step moves from the previous position
  # by a standard normal increment
  ret <- numeric(n_steps)
  for (i in seq_len(n_steps)) {
    x <- rnorm(1, x)
    ret[[i]] <- x
  }
  ret
}
We can’t run this on the cluster immediately, because the cluster does not know about the new function:
id <- task_create_expr(random_walk(0, 10))
#> ✔ Submitted task '3903e03e14b6108963734f9d1f215ad2' using 'example'
task_wait(id)
#> [1] FALSE
task_status(id)
#> [1] "failure"
task_result(id)
#> <simpleError in random_walk(0, 10): could not find function "random_walk">
(See vignette("troubleshooting") for more on failures.)
We need to tell hipercow to source() the file simulation.R before running the task. To do this we use hipercow_environment_create() to create an “environment” (not to be confused with R’s environments) in which to run things:
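hipercow_environment_create(sources = "simulation.R")
#> ✔ Created environment 'default'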
Now we can run our simulation:
id <- task_create_expr(random_walk(0, 10))
#> ✔ Submitted task '9f41ff272a59da3c88d41b1d85757c7a' using 'example'
task_wait(id)
#> [1] TRUE
task_result(id)
#> [1] -1.537489 -2.420363 -1.595175 -3.390950 -4.970865 -5.392787 -5.324389
#> [8] -6.980023 -6.664406 -6.433726
A couple of notes:

- Workers created for parallel processing (e.g., with parallel, future or rrq) will set up their environments too.
- Read more about environments in vignette("environments").
Once you have created (and submitted) tasks, they will be queued by the cluster and eventually run. The hope is that we surface enough information to make it easy for you to see how things are going and what has gone wrong.
task_info()
The primary function for fetching information about a task is task_info():
task_info(id)
#>
#> ── task 9f41ff272a59da3c88d41b1d85757c7a (success) ─────────────────────────────
#> ℹ Submitted with 'example'
#> ℹ Task type: expression
#> • Expression: random_walk(0, 10)
#> • Locals: (none)
#> • Environment: default
#> R_GC_MEM_GROW: 3
#> ℹ Created at 2024-12-09 19:22:05.830109 (moments ago)
#> ℹ Started at 2024-12-09 19:22:05.877905 (moments ago; waited 48ms)
#> ℹ Finished at 2024-12-09 19:22:05.87903 (moments ago; ran for 2ms)
This prints out core information about the task: its identifier (9f41ff272a59da3c88d41b1d85757c7a) and status (success), along with information about what sort of task it was, what expression it contained, the variables it used, the environment it executed in, and the times that key events happened for the task (when it was created, started and finished).
This display is meant to be friendly; if you need to compute on this information, you can access the times by reading the $times element of the task_info() return value:
task_info(id)$times
#> created started finished
#> "2024-12-09 19:22:05 UTC" "2024-12-09 19:22:05 UTC" "2024-12-09 19:22:05 UTC"
Likewise, the information about the task itself is within $data. To work with the underlying data you might just unclass the object to see the structure:
unclass(task_info(id))
#> $id
#> [1] "9f41ff272a59da3c88d41b1d85757c7a"
#>
#> $status
#> [1] "success"
#>
#> $data
#> $data$type
#> [1] "expression"
#>
#> $data$id
#> [1] "9f41ff272a59da3c88d41b1d85757c7a"
#>
#> $data$time
#> [1] "2024-12-09 19:22:05 UTC"
#>
#> $data$path
#> [1] "."
#>
#> $data$environment
#> [1] "default"
#>
#> $data$envvars
#> name value secret
#> 1 R_GC_MEM_GROW 3 FALSE
#>
#> $data$parallel
#> NULL
#>
#> $data$expr
#> random_walk(0, 10)
#>
#> $data$variables
#> NULL
#>
#>
#> $driver
#> [1] "example"
#>
#> $times
#> created started finished
#> "2024-12-09 19:22:05 UTC" "2024-12-09 19:22:05 UTC" "2024-12-09 19:22:05 UTC"
#>
#> $retry_chain
#> NULL
but note that the exact structure is subject to (infrequent) change.
task_log_show()
Every task will produce some logs, and these can be an important part of understanding what they did and why they went wrong.
You can view the log with task_log_show():
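task_log_show(id)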
This prints the contents of the logs to the screen; you can access the values directly with task_log_value(id). The format of the logs will be generally the same for all tasks: after the header saying where we are running, some information about the task is printed (its identifier, the time, details about the task itself), followed by any logs that come from calls to message() and print() within the queued function (within the “task logs” section; here that is empty because our task prints nothing). Finally, a summary is printed with the final status, final time (and elapsed time), and then any warnings that were produced are flushed (see vignette("troubleshooting") for more on warnings).
There is a second log too, the “outer” log, which is generally less interesting, so it is not the default. These logs come from the cluster scheduler itself and show the startup process that leads up to (and follows) the code that hipercow itself runs. It will differ from driver to driver. In addition, this log may not be available forever; the Windows cluster retains it only for a couple of weeks:
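task_log_show(id, outer = TRUE)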
The logs returned by task_log_show(id, outer = FALSE) are the logs generated by the statement containing Rscript -e.
task_log_watch()
If your task is still running, you can stream logs to your computer using task_log_watch(); this will print new logs line-by-line as they arrive (with a delay of up to 1s by default). This can be useful while debugging something, giving the illusion that you’re running it locally.
Using Ctrl-C (or ESC in RStudio) to escape will only stop the log streaming, not the underlying task.
Running one task on the cluster is nice, because it takes the load off your laptop, but it’s generally not why you’re going through this process. More likely, you have many similar tasks that you want to set running at once: perhaps the same simulation over many parameter sets, the same model fit to many datasets, or the same processing applied to many files.
In all these cases, you would want to submit a group of related tasks, sharing a common function, but differing in the data passed into that function. We call this the “bulk interface”, and it is the simplest and usually most effective way of getting started with parallel computing.
This sort of problem is referred to as “embarrassingly parallel”; this is not a pejorative, it just means that your work decomposes into a bunch of independent chunks and all we have to do is start them. You are already familiar with things that can be run this way: anything that can be run with R’s lapply could be parallelised.
There are two similar bulk creation functions, which differ based on the way the data you have are structured:
- task_create_bulk_call() is used where you have a list, where each element represents the key input to some computation (this is similar to lapply())
- task_create_bulk_expr() is used where you have a data.frame, where each row represents the inputs to some computation (this is a little similar to dplyr’s rowwise support)

The bulk call interface is the one that might feel most familiar to you; it is modelled on ideas from functions like lapply (or, if you use purrr, its map() function). The idea is simple: we have a list of data and we apply some function to each element within it.
We’ll start with a reminder of how lapply works, then adapt this to run in parallel on a cluster. Imagine that we want to run some simple simulation with a different parameter each time. In this example we simulate n samples from a normal distribution and compute the observed mean and standard deviation:
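mysim <- function(mu, sd, n = 100) {
  # Draw n samples from Normal(mu, sd), returning the observed mean
  # and standard deviation; the default n = 100 here is arbitrary
  x <- rnorm(n, mu, sd)
  c(mean(x), sd(x))
}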
We can run it locally like this:
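mysim(0, 1) # returns the observed mean and standard deviation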
Suppose that we have a vector of means to run this with:
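mu <- 0:4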
We can apply mysim to each of the elements of mu by writing:
lapply(mu, mysim, sd = 1)
#> [[1]]
#> [1] -0.007090899 1.017524211
#>
#> [[2]]
#> [1] 1.060129 1.032663
#>
#> [[3]]
#> [1] 2.018480 1.084035
#>
#> [[4]]
#> [1] 2.9722364 0.9351254
#>
#> [[5]]
#> [1] 3.972224 1.063601
Of note here:

- the mu argument was iterated over, with one call per element
- there was a single sd argument that was passed through to every call to mysim
- the result is a list of the same length as mu, with each element the result of applying mysim to that element
- nothing in the above said anything about the order in which these calculations were carried out; one might assume that we applied mysim to the first element of mu first, then the second, but that is just conjecture

This last point seems a bit silly, but it is a useful condition to think about when considering what can be parallelised: if you can run a “loop” backwards and get the same answer (ignoring things like the specific draws from random number generating functions) then your problem is well suited to being parallelised.
Our function mysim is in a file called simulation-bulk.R, which we’ll add to our environment so that it’s available on the cluster (alongside random_walk from above):
hipercow_environment_create(sources = c("simulation.R", "simulation-bulk.R"))
#> ✔ Updated environment 'default'
We can then submit tasks to the cluster using task_create_bulk_call():
bundle <- task_create_bulk_call(mysim, mu, args = list(sd = 1))
#> ✔ Submitted 5 tasks using 'example'
#> ✔ Created bundle 'lapis_kingbird' with 5 tasks
bundle
#> → <hipercow_bundle 'lapis_kingbird' with 5 tasks>
This creates a task bundle, which groups together related tasks. There is a whole set of functions for working with bundles that behave similarly to the task query functions. So where task_status() retrieves the status for a single task, we can get the status for a bundle by running:
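hipercow_bundle_status(bundle)
#> [1] "success" "success" "success" "success" "success"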
This returns a vector over all tasks included in the bundle. You can also “reduce” this status to the “worst” status over all tasks:
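hipercow_bundle_status(bundle, reduce = TRUE)
#> [1] "success"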
Similarly, you can wait for a whole bundle to complete:
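hipercow_bundle_wait(bundle)
#> [1] TRUE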
And then get the results as a list:
hipercow_bundle_result(bundle)
#> [[1]]
#> [1] -0.001660679 1.011284627
#>
#> [[2]]
#> [1] 0.9935368 1.0002750
#>
#> [[3]]
#> [1] 1.966141 1.056276
#>
#> [[4]]
#> [1] 2.9691034 0.9203908
#>
#> [[5]]
#> [1] 4.029472 1.102806
This flow (create, wait, result) is equivalent to lapply and produces data of the same shape in return, but the tasks will be carried out in parallel! Each task is submitted to the cluster and picked up by the first available node. You might submit 100 tasks and, if the cluster is quiet, a few seconds later all of them will be running at the same time.
We might want to vary both mu and sd, in which case it might be convenient to keep track of our inputs in a data.frame:
We can use task_create_bulk_call() with this, too:
b <- task_create_bulk_call(mysim, pars)
#> ✔ Submitted 5 tasks using 'example'
#> ✔ Created bundle 'subterrestrial_pug' with 5 tasks
hipercow_bundle_wait(b)
#> [1] TRUE
hipercow_bundle_result(b)
#> [[1]]
#> [1] 0.008032337 1.064424799
#>
#> [[2]]
#> [1] 0.9993199 2.1648579
#>
#> [[3]]
#> [1] 1.898639 2.958154
#>
#> [[4]]
#> [1] 3.054251 3.719258
#>
#> [[5]]
#> [1] 4.056007 4.911587
This iterates over the data in a row-wise way. Note that this is very different to lapply, which would iterate over columns (in practice we find that this is almost never what people want). The names of the columns must match the names of your function arguments, and all columns must be used.
We could have passed additional arguments here too, for example changing n:
We also support a bulk expression interface, which can be clearer than the above.
b <- task_create_bulk_expr(mysim(mu, sd, n = 40), pars)
#> ✔ Submitted 5 tasks using 'example'
#> ✔ Created bundle 'malleable_fattaileddunnart' with 5 tasks
This would again work row-wise over pars but evaluate the expression in the first argument with the data found in the data.frame. This allows you to use different column names if convenient:
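# the same parameter values as pars, with different column names
pars2 <- data.frame(mean = 0:4, stddev = 1:5)
bundle <- task_create_bulk_expr(mysim(mean, stddev, n = 40), pars2)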
You can do most things to bundles that you can do to tasks:
| Action | Single task | Bundle |
|---|---|---|
| Get result | task_result | hipercow_bundle_result |
| Wait for completion | task_wait | hipercow_bundle_wait |
| Retry failed tasks (see below) | task_retry | hipercow_bundle_retry |
| List | task_list | hipercow_bundle_list |
| Cancel | task_cancel | hipercow_bundle_cancel |
| Get log value | task_log_value | hipercow_bundle_log_value |
There is no equivalent of task_log_watch() or task_log_show(), because we can’t easily do this for multiple tasks at the same time in a satisfactory way.
hipercow_bundle_delete() will delete bundles, but leave the tasks alone; hipercow_purge() will delete the tasks too, causing actual deletion of data.
Some of these functions naturally have slightly different semantics to the single-task functions; for example, hipercow_bundle_result() returns a list of results, and hipercow_bundle_wait() has an option fail_early to control whether it should return FALSE as soon as any task fails.
You can use the hipercow_bundle_list() function to list known bundles:
hipercow_bundle_list()
#> name time
#> 1 deadly_snowmonkey 2024-12-09 19:22:08
#> 2 malleable_fattaileddunnart 2024-12-09 19:22:08
#> 3 extrajudicial_iguanodon 2024-12-09 19:22:08
#> 4 subterrestrial_pug 2024-12-09 19:22:07
#> 5 lapis_kingbird 2024-12-09 19:22:07
Each bundle has a name (automatically generated by default) and the time that it was created. If you have launched a bundle and for some reason lost your session (e.g., a Windows update has rebooted your computer) you can use this to get your ids back.
If you’re not sure what you launched, you can use task_info():
task_info(bundle$ids[[1]])
#>
#> ── task fa9fd76721bc71b96e7caf94c6cea624 (success) ─────────────────────────────
#> ℹ Submitted with 'example'
#> ℹ Task type: expression
#> • Expression: mysim(mean, stddev, n = 40)
#> • Locals: mean and stddev
#> • Environment: default
#> R_GC_MEM_GROW: 3
#> ℹ Created at 2024-12-09 19:22:08.373661 (moments ago)
#> ℹ Started at 2024-12-09 19:22:08.388291 (moments ago; waited 15ms)
#> ℹ Finished at 2024-12-09 19:22:08.389771 (moments ago; ran for 2ms)
You can make this process a bit more friendly by setting your own name for the bundle when creating it, using the bundle_name argument:
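# 'my_simulation' can be any name you like
b <- task_create_bulk_call(mysim, mu, args = list(sd = 1),
                           bundle_name = "my_simulation")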
You can also make a bundle yourself from a group of tasks; this may be convenient if you need to launch a number of tasks individually for some reason but want to then consider them together as a group.
id1 <- task_create_expr(mysim(1, 2))
#> ✔ Submitted task '268b0ba491c400f7bfd7c1738a2fb7e0' using 'example'
id2 <- task_create_expr(mysim(2, 2))
#> ✔ Submitted task '5457780f8ef66cb3894e00170507980a' using 'example'
b <- hipercow_bundle_create(c(id1, id2), "my_new_bundle")
#> ✔ Created bundle 'my_new_bundle' with 2 tasks
b
#> → <hipercow_bundle 'my_new_bundle' with 2 tasks>
We can then use this bundle as above:
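hipercow_bundle_wait(b)
#> [1] TRUE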
So far, the tasks we submitted have been run using a single core on the cluster, with no other special requests made. Here is a simple example using two cores; we’ll use hipercow_resources() to specify that we want two cores on the cluster, and hipercow_parallel() to say that we want to set up two processes on those cores, using the parallel package. (We also support future.)
resources <- hipercow_resources(cores = 2)
id <- task_create_expr(
parallel::clusterApply(NULL, 1:2, function(x) Sys.sleep(5)),
parallel = hipercow_parallel("parallel"),
resources = resources)
#> ✔ Submitted task '1fa810c76eb284523ed5579c5b39bacf' using 'example'
task_wait(id)
#> [1] TRUE
task_info(id)
#>
#> ── task 1fa810c76eb284523ed5579c5b39bacf (success) ─────────────────────────────
#> ℹ Submitted with 'example'
#> ℹ Task type: expression
#> • Expression: parallel::clusterApply(NULL, 1:2, function(x) Sys.sleep(5))
#> • Locals: (none)
#> • Environment: default
#> R_GC_MEM_GROW: 3
#> ℹ Created at 2024-12-09 19:22:09.609344 (moments ago)
#> ℹ Started at 2024-12-09 19:22:09.629348 (moments ago; waited 21ms)
#> ℹ Finished at 2024-12-09 19:22:15.027501 (moments ago; ran for 5.4s)
Both of our parallel tasks sleep for 5 seconds. We use task_info() to report how long it took for those two runs to execute; if they ran one after the other, we’d expect around 10 seconds, but we see a much shorter time than that, so our pair of processes must be running at the same time.

For details on specifying resources and launching different kinds of parallel tasks, see vignette("parallel").
Suppose our simulation started not from 0, but from some point that we have computed locally (say x, imaginatively):
You can use this value to start the simulation by running:
id <- task_create_expr(random_walk(x, 10))
#> ✔ Submitted task 'd965d786b11b74697c101ec7af492bf1' using 'example'
Here the x value has come from the environment where the expression passed into task_create_expr() was found (specifically, we use the rlang “tidy evaluation” framework, which you might be familiar with from dplyr and friends).
task_wait(id)
#> [1] TRUE
task_result(id)
#> [1] 99.98089 100.86575 100.24334 101.15352 101.71174 101.28594 102.78024
#> [8] 101.50478 102.82621 102.13937
If you pass in an expression that references a value that does not exist locally, you will get a (hopefully) informative error message when the task is created:
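# 'starting_point' is a deliberately undefined name, used to trigger the error
task_create_expr(random_walk(starting_point, 10))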
You can cancel a task if it has been submitted and not completed, using task_cancel():
For example, here’s a task that will sleep for 10 seconds, which we submit to the cluster:
id <- task_create_expr(Sys.sleep(10))
#> ✔ Submitted task '6c39b6564889effef3c28fe086fc62e3' using 'example'
Having decided that this is a silly idea, we can try and cancel it:
task_cancel(id)
#> ✔ Successfully cancelled '6c39b6564889effef3c28fe086fc62e3'
#> [1] TRUE
task_status(id)
#> [1] "cancelled"
task_info(id)
#>
#> ── task 6c39b6564889effef3c28fe086fc62e3 (cancelled) ───────────────────────────
#> ℹ Submitted with 'example'
#> ℹ Task type: expression
#> • Expression: Sys.sleep(10)
#> • Locals: (none)
#> • Environment: default
#> R_GC_MEM_GROW: 3
#> ℹ Created at 2024-12-09 19:22:16.756749 (moments ago)
#> ✖ Start time unknown!
#> ℹ Finished at 2024-12-09 19:22:16.776896 (moments ago; ran for ???)
You can cancel a task that is submitted (waiting to be picked up by a cluster) or running (though not all drivers will support this; we need to add this to the example driver still, which will improve this example!).
You can cancel many tasks at once by passing a vector of identifiers at the same time. Tasks that have finished (successfully or not) cannot be cancelled.
There are lots of reasons why you might want to retry a task. For example, it may have failed for some transient reason (a full disk, a flaky network connection), or it may involve random numbers and you simply want another realisation.
You can retry tasks with task_retry(), which is easier than submitting a new task with the same content, and also preserves a link between retried tasks.
Our random walk will give slightly different results each time we use it, so we demonstrate the idea with that:
id1 <- task_create_expr(random_walk(0, 10))
#> ✔ Submitted task 'e80ac2916f35af837bd91d894bc3f61d' using 'example'
task_wait(id1)
#> [1] TRUE
task_result(id1)
#> [1] -0.7048113 -0.9879313 -1.0760587 -1.9788949 -2.2407648 -1.8465894
#> [7] 1.2867708 2.5456362 3.1095356 3.4586366
Here we ran a random walk and it got to 3.4586366, which is clearly not what we were expecting. Let’s try it again:
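id2 <- task_retry(id1)
#> ✔ Submitted task '10474254d50e8836c25ebefacf57d9ea' using 'example'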
Running task_retry() creates a new task, with a new id (104742... compared with e80ac2...). Once this task has finished, we get a different result:
task_wait(id2)
#> [1] TRUE
task_result(id2)
#> [1] -2.410126 -2.695636 -2.706792 -2.806119 -2.809444 -3.684360 -3.030384
#> [8] -5.420159 -5.793249 -5.550869
Much better!

We get a hint that this is a retried task from task_info():
task_info(id2)
#>
#> ── task 10474254d50e8836c25ebefacf57d9ea (success) ─────────────────────────────
#> ℹ Submitted with 'example'
#> ℹ Task type: expression
#> • Expression: random_walk(0, 10)
#> • Locals: (none)
#> • Environment: default
#> R_GC_MEM_GROW: 3
#> ℹ Created at 2024-12-09 19:22:16.8406 (moments ago)
#> ℹ Started at 2024-12-09 19:22:17.96036 (moments ago; waited 1.1s)
#> ℹ Finished at 2024-12-09 19:22:17.961882 (moments ago; ran for 2ms)
#> ℹ Last of a chain of a task retried 1 time
You can see the full chain of retries here:
task_info(id2)$retry_chain
#> [1] "e80ac2916f35af837bd91d894bc3f61d" "10474254d50e8836c25ebefacf57d9ea"
Once a task has been retried it affects how you interact with the previous ids; by default they follow through to the most recent element in the chain:
task_result(id1)
#> [1] -2.410126 -2.695636 -2.706792 -2.806119 -2.809444 -3.684360 -3.030384
#> [8] -5.420159 -5.793249 -5.550869
task_result(id2)
#> [1] -2.410126 -2.695636 -2.706792 -2.806119 -2.809444 -3.684360 -3.030384
#> [8] -5.420159 -5.793249 -5.550869
You can get the original result back by passing the argument follow = FALSE:
task_result(id1, follow = FALSE)
#> [1] -0.7048113 -0.9879313 -1.0760587 -1.9788949 -2.2407648 -1.8465894
#> [7] 1.2867708 2.5456362 3.1095356 3.4586366
task_result(id2)
#> [1] -2.410126 -2.695636 -2.706792 -2.806119 -2.809444 -3.684360 -3.030384
#> [8] -5.420159 -5.793249 -5.550869
Only tasks that have been completed (success, failure or cancelled) can be retried, and doing so adds a new task to the end of the chain; there is no branching. Retrying id1 here would create the chain id1 -> id2 -> id3, and following would select id3 for any of the three tasks in the chain.
You cannot currently change any property of a retried task; we may change this in future.