Struct rusoto_glue::Crawler

pub struct Crawler {
    pub classifiers: Option<Vec<String>>,
    pub configuration: Option<String>,
    pub crawl_elapsed_time: Option<i64>,
    pub crawler_security_configuration: Option<String>,
    pub creation_time: Option<f64>,
    pub database_name: Option<String>,
    pub description: Option<String>,
    pub last_crawl: Option<LastCrawlInfo>,
    pub last_updated: Option<f64>,
    pub name: Option<String>,
    pub role: Option<String>,
    pub schedule: Option<Schedule>,
    pub schema_change_policy: Option<SchemaChangePolicy>,
    pub state: Option<String>,
    pub table_prefix: Option<String>,
    pub targets: Option<CrawlerTargets>,
    pub version: Option<i64>,
}

Specifies a crawler program that examines a data source and uses classifiers to try to determine its schema. If successful, the crawler records metadata concerning the data source in the AWS Glue Data Catalog.

Fields

classifiers: Option<Vec<String>>

A list of UTF-8 strings that specify the custom classifiers that are associated with the crawler.

configuration: Option<String>

Crawler configuration information. This versioned JSON string allows users to specify aspects of a crawler's behavior. For more information, see Configuring a Crawler.

crawl_elapsed_time: Option<i64>

If the crawler is running, contains the total time elapsed since the last crawl began.

crawler_security_configuration: Option<String>

The name of the SecurityConfiguration structure to be used by this crawler.

creation_time: Option<f64>

The time that the crawler was created.
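The `f64` timestamps on this struct can be turned into `std::time::SystemTime` values; a minimal sketch, assuming the value is seconds since the Unix epoch (the usual encoding for AWS API timestamps) and using a hypothetical sample value:

```rust
use std::time::{Duration, SystemTime, UNIX_EPOCH};

/// Convert an f64 timestamp (assumed: seconds since the Unix epoch,
/// as AWS APIs typically report times) into a SystemTime.
fn to_system_time(ts: f64) -> SystemTime {
    UNIX_EPOCH + Duration::from_secs_f64(ts)
}

fn main() {
    // Hypothetical creation_time value as it might appear on a Crawler.
    let creation_time: Option<f64> = Some(1_565_000_000.0);
    if let Some(ts) = creation_time {
        let secs = to_system_time(ts)
            .duration_since(UNIX_EPOCH)
            .unwrap()
            .as_secs();
        println!("created {} s after the Unix epoch", secs);
    }
}
```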

database_name: Option<String>

The name of the database in which the crawler's output is stored.

description: Option<String>

A description of the crawler.

last_crawl: Option<LastCrawlInfo>

The status of the last crawl, and potentially error information if an error occurred.

last_updated: Option<f64>

The time that the crawler was last updated.

name: Option<String>

The name of the crawler.

role: Option<String>

The Amazon Resource Name (ARN) of an IAM role that's used to access customer resources, such as Amazon Simple Storage Service (Amazon S3) data.

schedule: Option<Schedule>

For scheduled crawlers, the schedule when the crawler runs.

schema_change_policy: Option<SchemaChangePolicy>

The policy that specifies update and delete behaviors for the crawler.

state: Option<String>

Indicates whether the crawler is running, or whether a run is pending.

table_prefix: Option<String>

The prefix added to the names of tables that are created.

targets: Option<CrawlerTargets>

A collection of targets to crawl.

version: Option<i64>

The version of the crawler.
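Because every field is an `Option`, reading a `Crawler` usually means defaulting or pattern-matching each value. The sketch below uses a minimal local stand-in for three of the fields so it compiles without the `rusoto_glue` crate; in real code the struct comes back from Glue API calls, and the field names match the definition above:

```rust
// Minimal local stand-in for three fields of rusoto_glue::Crawler, so this
// sketch is self-contained; the real struct is returned by the Glue API.
#[derive(Default, Debug)]
struct Crawler {
    name: Option<String>,
    state: Option<String>,
    crawl_elapsed_time: Option<i64>,
}

/// Summarize a crawler, substituting a default for every missing field.
fn summary(c: &Crawler) -> String {
    format!(
        "{}: {} (elapsed: {})",
        c.name.as_deref().unwrap_or("<unnamed>"),
        c.state.as_deref().unwrap_or("UNKNOWN"),
        c.crawl_elapsed_time.unwrap_or(0),
    )
}

fn main() {
    // Struct-update syntax works because the type implements Default.
    let crawler = Crawler {
        name: Some("sales-data-crawler".to_string()), // hypothetical name
        state: Some("RUNNING".to_string()),
        ..Default::default()
    };
    println!("{}", summary(&crawler));
}
```

Struct-update syntax (`..Default::default()`) is convenient here precisely because every field is optional and `Default` is implemented.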

Trait Implementations

impl PartialEq<Crawler> for Crawler

impl Default for Crawler

impl Clone for Crawler

fn clone_from(&mut self, source: &Self)

Performs copy-assignment from source.

impl Debug for Crawler

impl<'de> Deserialize<'de> for Crawler

Auto Trait Implementations

impl Send for Crawler

impl Sync for Crawler

Blanket Implementations

impl<T> From<T> for T

impl<T, U> Into<U> for T where
    U: From<T>,

impl<T> ToOwned for T where
    T: Clone

type Owned = T

impl<T, U> TryFrom<U> for T where
    T: From<U>,

type Error = !

🔬 This is a nightly-only experimental API. (try_from)

The type returned in the event of a conversion error.

impl<T> Borrow<T> for T where
    T: ?Sized

impl<T> BorrowMut<T> for T where
    T: ?Sized

impl<T, U> TryInto<U> for T where
    U: TryFrom<T>,

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

impl<T> Any for T where
    T: 'static + ?Sized

impl<T> DeserializeOwned for T where
    T: for<'de> Deserialize<'de>,

impl<T> Erased for T

impl<T> Same for T

type Output = T

Should always be Self