Struct rusoto_sagemaker::AlgorithmSpecification

pub struct AlgorithmSpecification {
    pub algorithm_name: Option<String>,
    pub metric_definitions: Option<Vec<MetricDefinition>>,
    pub training_image: Option<String>,
    pub training_input_mode: String,
}

Specifies the training algorithm to use in a CreateTrainingJob request.

For more information about algorithms provided by Amazon SageMaker, see Algorithms. For information about using your own algorithms, see Using Your Own Algorithms with Amazon SageMaker.
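As a quick orientation, here is a minimal sketch of constructing an AlgorithmSpecification for a training job that uses a custom Docker image. The image URI is illustrative only, and the struct's Default implementation fills the remaining optional fields.

use rusoto_sagemaker::AlgorithmSpecification;

// Minimal sketch: point the training job at a Docker image (illustrative URI)
// and pick an input mode. All other fields are left at their defaults.
let algorithm_specification = AlgorithmSpecification {
    training_image: Some(
        "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-training-algo:latest".to_string(),
    ),
    training_input_mode: "File".to_string(),
    ..Default::default()
};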

Fields

algorithm_name: Option<String>

The name of the algorithm resource to use for the training job. This must be an algorithm resource that you created or subscribed to on AWS Marketplace. If you specify a value for this parameter, you can't specify a value for TrainingImage.

metric_definitions: Option<Vec<MetricDefinition>>

A list of metric definition objects. Each object specifies the metric name and regular expressions used to parse algorithm logs. Amazon SageMaker publishes each metric to Amazon CloudWatch.
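For illustration, a metric definition list might look like the following. This assumes rusoto's MetricDefinition exposes name and regex fields mirroring the API's Name and Regex members; the metric name and regular expression are examples only.

use rusoto_sagemaker::MetricDefinition;

// Illustrative metric: SageMaker applies the regex to each log line and
// publishes captured values to CloudWatch under the given metric name.
let metric_definitions = vec![MetricDefinition {
    name: "validation:accuracy".to_string(),
    regex: "validation-accuracy=([0-9\\.]+)".to_string(),
}];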

training_image: Option<String>

The registry path of the Docker image that contains the training algorithm. For information about Docker registry paths for built-in algorithms, see Algorithms Provided by Amazon SageMaker: Common Parameters. Amazon SageMaker supports both registry/repository[:tag] and registry/repository[@digest] image path formats. For more information, see Using Your Own Algorithms with Amazon SageMaker.
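A hedged sketch of the two accepted image path forms; the registry, repository, tag, and digest below are placeholders.

// Both forms are accepted; registry, repository, tag, and digest are placeholders.
let by_tag = Some("123456789012.dkr.ecr.us-west-2.amazonaws.com/my-algo:1.0".to_string());
let by_digest = Some("123456789012.dkr.ecr.us-west-2.amazonaws.com/my-algo@sha256:<digest>".to_string());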

training_input_mode: String

The input mode that the algorithm supports. For the input modes that Amazon SageMaker algorithms support, see Algorithms. If an algorithm supports the File input mode, Amazon SageMaker downloads the training data from S3 to the provisioned ML storage volume and mounts the directory to a Docker volume for the training container. If an algorithm supports the Pipe input mode, Amazon SageMaker streams data directly from S3 to the container.

In File mode, make sure you provision the ML storage volume with sufficient capacity to accommodate the data downloaded from S3. In addition to the training data, the ML storage volume stores the output model. The algorithm container also uses the ML storage volume to store intermediate information, if any.

For distributed algorithms using File mode, training data is distributed uniformly, and training duration is predictable if the input data objects are approximately the same size. Amazon SageMaker does not split the files any further for model training. If the object sizes are skewed, the data distribution is also skewed: one host in the training cluster receives more data than the others and becomes a bottleneck, so training won't be optimal.
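By contrast with the File-mode example above, a job that uses an algorithm resource and streams its data could be specified as follows. The algorithm name is illustrative; note that algorithm_name and training_image are mutually exclusive.

use rusoto_sagemaker::AlgorithmSpecification;

// Illustrative: name an algorithm resource instead of a training image
// (the two are mutually exclusive) and stream data from S3 with Pipe mode.
let algorithm_specification = AlgorithmSpecification {
    algorithm_name: Some("my-subscribed-algorithm".to_string()),
    training_input_mode: "Pipe".to_string(),
    ..Default::default()
};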

Trait Implementations

impl PartialEq<AlgorithmSpecification> for AlgorithmSpecification

impl Default for AlgorithmSpecification

impl Clone for AlgorithmSpecification

fn clone_from(&mut self, source: &Self)

Performs copy-assignment from source.

impl Debug for AlgorithmSpecification

impl Serialize for AlgorithmSpecification

impl<'de> Deserialize<'de> for AlgorithmSpecification

Auto Trait Implementations

impl Send for AlgorithmSpecification

impl Sync for AlgorithmSpecification

Blanket Implementations

impl<T> From<T> for T

impl<T, U> Into<U> for T where
    U: From<T>,

impl<T> ToOwned for T where
    T: Clone,

type Owned = T

impl<T, U> TryFrom<U> for T where
    T: From<U>,

type Error = !

🔬 This is a nightly-only experimental API. (try_from)

The type returned in the event of a conversion error.

impl<T> Borrow<T> for T where
    T: ?Sized,

impl<T> BorrowMut<T> for T where
    T: ?Sized,

impl<T, U> TryInto<U> for T where
    U: TryFrom<T>,

type Error = <U as TryFrom<T>>::Error

🔬 This is a nightly-only experimental API. (try_from)

The type returned in the event of a conversion error.

impl<T> Any for T where
    T: 'static + ?Sized,

impl<T> DeserializeOwned for T where
    T: for<'de> Deserialize<'de>,

impl<T> Erased for T

impl<T> Same<T> for T

type Output = T

Should always be Self