Docstrings

Workload object for the Inception CNN, built using keyword constructors.

Fields

  • args :: NamedTuple - Arguments to pass to the startup script (see docs). Default: NamedTuple()
  • interactive :: Bool - If set to true, the container will launch into /bin/bash instead of Python. Used for debugging the container. Default: false.

Struct representing parameters for launching the TensorFlow official ResNet model on the ImageNet training set. Construct this type using a keyword constructor.

Fields

  • args::NamedTuple - Arguments passed to the Keras Python script that creates and trains Resnet.

  • interactive::Bool - Set to true to create a container that does not automatically run Resnet when launched. Useful for debugging what's going on inside the container.

create Keyword Arguments

  • memory::Union{Nothing, Int} - The amount of memory to assign to this container. If this value is nothing, the container will have access to all system memory. Default: nothing.

  • cpuSets = "" - The CPU sets on which to run the workload. Defaults to all processors. Examples: "0", "0-3", "1,3".


Workload object for the RNN Translator, built using keyword constructors.

Fields

  • args :: NamedTuple - Arguments to pass to the startup script (see docs). Default: NamedTuple()
  • interactive :: Bool - If set to true, the container will launch into /bin/bash instead of Python. Used for debugging the container. Default: false.
Launcher.Unet - Type.

Workload object for the Keras CIFAR-10 CNN, built using keyword constructors.

Fields

  • args :: NamedTuple - Arguments to pass to the startup script (see docs). Default: NamedTuple()
  • interactive :: Bool - If set to true, the container will launch into /bin/bash instead of Python. Used for debugging the container. Default: false.
Launcher.ANTS - Type.

Intermediate image with a compiled version of the ANTs image processing library.


Abstract supertype for workloads. Concrete subtypes should be implemented for each workload desired for analysis.

Required Methods

Launcher.Bundle - Type.

Wrapper type for launching multiple workloads at the same time through the run command.

Launcher.Bundle - Method.
Bundle(workloads...) -> Bundle

Wrap the workloads into a single type that will launch all workloads under run.

Usage is as follows:

bundle = Bundle(workloadA, workloadB)

run(bundle)

Any keywords passed to run will be forwarded to each workload wrapped in bundle.

To log the output of these workloads, see BundleLogger.


BundleLogger(bundle::Bundle) -> BundleLogger

Create a BundleLogger from bundle to pass to the log keyword argument of run. This stores the logs for each container in bundle sequentially; the logs can later be accessed by getindex. Example usage is shown below.

bundle = Bundle(workA, workB, workC)

logger = BundleLogger(bundle)

run(bundle; log = logger)

# After completion, logs can get accessed via
log1 = logger[1]

log2 = logger[2]

log3 = logger[3]
Launcher.GNMT - Type.

PyTorch docker container for the Launcher.Translator workload.


Docker image for TensorFlow compiled with MKL.


Launch the test workload in an Ubuntu image.

Fields

  • none

create Keyword Arguments

  • none
Launcher.Unet3d - Type.

Image containing the dependencies for the 3d Unet workload.

Base.Libc.getpid - Method.
Launcher.getpid(container::Container)

Return the PID of container.

Base.run - Method.
run([f::Function], work::AbstractWorkload; log::IO = devnull, kw...)

Create and launch a container from work with

container = create(work; kw...)

Start the container and then call f(container). If f is not given, then attach to the container's stdout.

This function ensures that containers are stopped and cleaned up in case something goes wrong.

After the container is stopped, write the container's log to the log IO stream.

Keyword Arguments

Extra keyword arguments will be forwarded to Docker.create. With these arguments, it is possible to constrain the resources available to the container. Standard arguments valid across all workloads are shown below:

  • user::String: The name of the user to run the container as. Default: "" (Root)

  • entryPoint::String : The entry point for the container as a string or an array of strings.

    If the array consists of exactly one empty string ([""]), then the entry point is reset to the system default (i.e., the entry point used by Docker when there is no ENTRYPOINT instruction in the Dockerfile).

    Default: ""

  • memory::Integer: Memory limit in bytes. Default: 0 (unlimited)

  • cpuSets::String: CPUs in which to allow execution (e.g., 0-3, 0,1). Default: All CPUs

  • cpuMems::String: Memory nodes (MEMs) in which to allow execution (0-3, 0,1). Only effective on NUMA systems. Default: All NUMA nodes.

  • env::Vector{String}: A list of environment variables to set inside the container, in the form ["VAR=value", ...]. A variable without = is removed from the environment rather than given an empty value. Default: []

    NOTE: Some workloads (especially those working with MKL) may automatically set certain environment variables. Consult the documentation for those workloads to see which are specified.
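As a sketch of how these keywords compose (the workload constructor and argument names here are assumptions for illustration, not a verified invocation):

```julia
using Launcher

# Hypothetical workload; any concrete AbstractWorkload would do here.
work = TFBenchmark(args = (batch_size = 32,))

# Constrain the container to 16 GiB of memory and CPUs 0-3,
# logging container output to a file.
open("benchmark.log", "w") do io
    run(work; log = io, memory = 16 * 2^30, cpuSets = "0-3")
end
```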

Launcher.benchmark_timeparser(file::String) -> Float64

Return the average number of images processed per second by the TFBenchmark workload. Applicable when the output is of the form below:

OMP: Info #250: KMP_AFFINITY: pid 1 tid 8618 thread 189 bound to OS proc set 93
OMP: Info #250: KMP_AFFINITY: pid 1 tid 8619 thread 190 bound to OS proc set 94
OMP: Info #250: KMP_AFFINITY: pid 1 tid 8620 thread 191 bound to OS proc set 95
OMP: Info #250: KMP_AFFINITY: pid 1 tid 8621 thread 192 bound to OS proc set 0
Done warm up
Step	Img/sec	total_loss
1	images/sec: 38.0 +/- 0.0 (jitter = 0.0)	7.419
10	images/sec: 22.6 +/- 2.5 (jitter = 1.3)	7.593
20	images/sec: 21.1 +/- 1.6 (jitter = 2.7)	7.597
30	images/sec: 22.4 +/- 1.5 (jitter = 4.5)	7.683
40	images/sec: 22.7 +/- 1.3 (jitter = 4.3)	7.576
50	images/sec: 22.8 +/- 1.2 (jitter = 3.9)	7.442
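The parsing itself amounts to averaging the images/sec figures from lines like those above; a minimal stand-alone sketch (illustrative, not the package's exact implementation):

```julia
# Average the "images/sec" readings from benchmark-style log lines.
function average_images_per_sec(lines)
    rates = Float64[]
    for line in lines
        m = match(r"images/sec:\s*([0-9.]+)", line)
        m === nothing || push!(rates, parse(Float64, m.captures[1]))
    end
    return sum(rates) / length(rates)
end

lines = [
    "Done warm up",
    "1\timages/sec: 38.0 +/- 0.0 (jitter = 0.0)\t7.419",
    "10\timages/sec: 22.6 +/- 2.5 (jitter = 1.3)\t7.593",
]
average_images_per_sec(lines)  # (38.0 + 22.6) / 2 = 30.3
```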
Launcher.bind - Method.
Launcher.bind(a, b) -> String

Create a Docker volume binding string for paths a and b.
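Presumably this produces the host:container form Docker expects for bind mounts; an illustrative stand-in (assumption, not the package's exact code):

```julia
# Join a host path and a container path into a Docker bind-mount string.
bind(a, b) = string(a, ":", b)

bind("/data/imagenet", "/imagenet")  # "/data/imagenet:/imagenet"
```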

Launcher.create - Method.
create(work::AbstractWorkload; kw...) -> Container

Create a Docker Container for work, with optional keyword arguments. Concrete subtypes of AbstractWorkload must define this method and perform all the steps necessary to create the Container. Note that the container should just be created by a call to Docker.create_container, and not actually started.

Keyword arguments supported by work should be included in that type's documentation.

Launcher.filename - Method.
filename(work::AbstractWorkload)

Create a filename for work based on the data type of work and the arguments.

Launcher.getargs - Method.
getargs(work::AbstractWorkload)

Return the commandline arguments for work. Falls back to work.args. Extend this method for a workload if the fallback is not appropriate.
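To illustrate the fallback (the stand-in types below are hypothetical, defined locally so the sketch is self-contained):

```julia
# Stand-ins for Launcher's types, defined here only for illustration.
abstract type AbstractWorkload end

# The documented fallback: return the workload's `args` field.
getargs(work::AbstractWorkload) = work.args

struct MyWorkload <: AbstractWorkload
    args::NamedTuple
end

getargs(MyWorkload((batchsize = 32, epochs = 1)))  # (batchsize = 32, epochs = 1)
```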

inception_cluster(;kw...)

Summary of keyword arguments:

  • nworkers: Number of worker nodes in the cluster. Default: 1

  • cpusets::Vector{String}: The CPUs to assign to each worker.

  • memsets::Vector{String}: NUMA nodes to assign to each worker.
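A hypothetical invocation, assuming a two-socket NUMA machine (the argument values are illustrative only):

```julia
# Two workers, each pinned to its own socket's CPUs and memory node.
inception_cluster(;
    nworkers = 2,
    cpusets  = ["0-23", "24-47"],
    memsets  = ["0", "1"],
)
```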

Launcher.inception_timeparser(file::String) -> Float64

Return the average time per step of a TensorFlow-based training run whose log is stored in file. Applicable when the log output is in the format shown below:

I0125 15:02:43.353371 140087033124608 tf_logging.py:115] loss = 11.300481, step = 2 (10.912 sec)
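Extracting the per-step time from such a line reduces to matching the trailing "(N sec)" group; an illustrative sketch (not the package's exact code):

```julia
# Pull the per-step time (seconds) out of a tf_logging progress line.
step_time(line) = parse(Float64, match(r"\(([0-9.]+) sec\)", line).captures[1])

step_time("loss = 11.300481, step = 2 (10.912 sec)")  # 10.912
```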
instantiate(image::AbstractDockerImage)

Perform all the necessary steps to build and tag the Docker image for image.

Launcher.isrunning - Method.
Launcher.isrunning(container::Container) -> Bool

Return true if container is running.

Launcher.translator_parser(file::String) -> Float64

Return the mean time per step from an output log file for the Launcher.Translator workload.

Launcher.uid - Method.
Launcher.uid()

Return the user ID of the current user.

Launcher.username - Method.
Launcher.username()

Return the name of the current user.
