class Endpoint(proto.Message):
    r"""Models are deployed into an Endpoint, and afterwards the
    Endpoint is called to obtain predictions and explanations.

    Attributes:
        name (str):
            Output only. The resource name of the Endpoint.
        display_name (str):
            Required. The display name of the Endpoint. The name can be
            up to 128 characters long and can consist of any UTF-8
            characters.
        description (str):
            The description of the Endpoint.
        deployed_models (Sequence[google.cloud.aiplatform_v1beta1.types.DeployedModel]):
            Output only. The models deployed in this Endpoint. To add
            or remove DeployedModels use
            [EndpointService.DeployModel][google.cloud.aiplatform.v1beta1.EndpointService.DeployModel]
            and
            [EndpointService.UndeployModel][google.cloud.aiplatform.v1beta1.EndpointService.UndeployModel]
            respectively.
        traffic_split (Sequence[google.cloud.aiplatform_v1beta1.types.Endpoint.TrafficSplitEntry]):
            A map from a DeployedModel's ID to the percentage of this
            Endpoint's traffic that should be forwarded to that
            DeployedModel. If a DeployedModel's ID is not listed in
            this map, then it receives no traffic. The traffic
            percentage values must add up to 100, or the map must be
            empty if the Endpoint is to accept no traffic at the
            moment.
        etag (str):
            Used to perform consistent read-modify-write updates. If
            not set, a blind "overwrite" update happens.
        labels (Sequence[google.cloud.aiplatform_v1beta1.types.Endpoint.LabelsEntry]):
            The labels with user-defined metadata to organize your
            Endpoints. Label keys and values can be no longer than 64
            characters (Unicode codepoints), and can only contain
            lowercase letters, numeric characters, underscores and
            dashes. International characters are allowed. See
            https://goo.gl/xmQnxf for more information and examples of
            labels.
        create_time (google.protobuf.timestamp_pb2.Timestamp):
            Output only. Timestamp when this Endpoint was created.
        update_time (google.protobuf.timestamp_pb2.Timestamp):
            Output only. Timestamp when this Endpoint was last updated.
        encryption_spec (google.cloud.aiplatform_v1beta1.types.EncryptionSpec):
            Customer-managed encryption key spec for an Endpoint. If
            set, this Endpoint and all sub-resources of this Endpoint
            will be secured by this key.
    """

    name = proto.Field(proto.STRING, number=1,)
    display_name = proto.Field(proto.STRING, number=2,)
    description = proto.Field(proto.STRING, number=3,)
    deployed_models = proto.RepeatedField(
        proto.MESSAGE, number=4, message="DeployedModel",
    )
    traffic_split = proto.MapField(proto.STRING, proto.INT32, number=5,)
    etag = proto.Field(proto.STRING, number=6,)
    labels = proto.MapField(proto.STRING, proto.STRING, number=7,)
    create_time = proto.Field(proto.MESSAGE, number=8, message=timestamp_pb2.Timestamp,)
    update_time = proto.Field(proto.MESSAGE, number=9, message=timestamp_pb2.Timestamp,)
    encryption_spec = proto.Field(
        proto.MESSAGE, number=10, message=gca_encryption_spec.EncryptionSpec,
    )
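The ``traffic_split`` invariant described in the docstring (the percentages add up to 100, or the map is empty) can be checked client-side before updating an Endpoint. A minimal sketch; ``validate_traffic_split`` is a hypothetical helper, not part of the library:

```python
def validate_traffic_split(traffic_split: dict) -> None:
    """Hypothetical check of the documented Endpoint.traffic_split
    invariant: the map is either empty or its values sum to 100."""
    if traffic_split and sum(traffic_split.values()) != 100:
        raise ValueError(
            "traffic_split percentages must add up to 100, got %d"
            % sum(traffic_split.values())
        )

# A 70/30 split between two DeployedModel IDs passes the check;
# an empty map (the Endpoint accepts no traffic) also passes.
validate_traffic_split({"model-a": 70, "model-b": 30})
validate_traffic_split({})
```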
class HttpRequest(proto.Message):
    r"""HTTP request.

    The task will be pushed to the worker as an HTTP request. If the
    worker or the redirected worker acknowledges the task by returning
    a successful HTTP response code ([``200`` - ``299``]), the task
    will be removed from the queue. If any other HTTP response code is
    returned or no response is received, the task will be retried
    according to the following:

    -  User-specified throttling: [retry
       configuration][google.cloud.tasks.v2beta3.Queue.retry_config],
       [rate limits][google.cloud.tasks.v2beta3.Queue.rate_limits],
       and the [queue's state][google.cloud.tasks.v2beta3.Queue.state].

    -  System throttling: To prevent the worker from overloading,
       Cloud Tasks may temporarily reduce the queue's effective rate.
       User-specified settings will not be changed.

       System throttling happens because:

       -  Cloud Tasks backs off on all errors. Normally the backoff
          specified in [rate
          limits][google.cloud.tasks.v2beta3.Queue.rate_limits] will
          be used. But if the worker returns ``429`` (Too Many
          Requests), ``503`` (Service Unavailable), or the rate of
          errors is high, Cloud Tasks will use a higher backoff rate.
          The retry specified in the ``Retry-After`` HTTP response
          header is considered.

       -  To prevent traffic spikes and to smooth sudden increases in
          traffic, dispatches ramp up slowly when the queue is newly
          created or idle and if large numbers of tasks suddenly
          become available to dispatch (due to spikes in create task
          rates, the queue being unpaused, or many tasks that are
          scheduled at the same time).

    Attributes:
        url (str):
            Required. The full URL path that the request will be sent
            to. This string must begin with either "http://" or
            "https://". Some examples are: ``http://acme.com`` and
            ``https://acme.com/sales:8080``. Cloud Tasks will encode
            some characters for safety and compatibility. The maximum
            allowed URL length is 2083 characters after encoding.

            The ``Location`` header response from a redirect response
            [``300`` - ``399``] may be followed. The redirect is not
            counted as a separate attempt.
        http_method (~.target.HttpMethod):
            The HTTP method to use for the request. The default is
            POST.
        headers (Sequence[~.target.HttpRequest.HeadersEntry]):
            HTTP request headers.

            This map contains the header field names and values.
            Headers can be set when the [task is
            created][google.cloud.tasks.v2beta3.CloudTasks.CreateTask].

            These headers represent a subset of the headers that will
            accompany the task's HTTP request. Some HTTP request
            headers will be ignored or replaced. A partial list of
            headers that will be ignored or replaced is:

            -  Host: This will be computed by Cloud Tasks and derived
               from
               [HttpRequest.url][google.cloud.tasks.v2beta3.HttpRequest.url].
            -  Content-Length: This will be computed by Cloud Tasks.
            -  User-Agent: This will be set to
               ``"Google-Cloud-Tasks"``.
            -  X-Google-\*: Google use only.
            -  X-AppEngine-\*: Google use only.

            ``Content-Type`` won't be set by Cloud Tasks. You can
            explicitly set ``Content-Type`` to a media type when the
            [task is
            created][google.cloud.tasks.v2beta3.CloudTasks.CreateTask].
            For example, ``Content-Type`` can be set to
            ``"application/octet-stream"`` or ``"application/json"``.

            Headers which can have multiple values (according to
            RFC2616) can be specified using comma-separated values.

            The size of the headers must be less than 80KB.
        body (bytes):
            HTTP request body.

            A request body is allowed only if the [HTTP
            method][google.cloud.tasks.v2beta3.HttpRequest.http_method]
            is POST, PUT, or PATCH. It is an error to set body on a
            task with an incompatible
            [HttpMethod][google.cloud.tasks.v2beta3.HttpMethod].
        oauth_token (~.target.OAuthToken):
            If specified, an `OAuth token
            <https://developers.google.com/identity/protocols/OAuth2>`__
            will be generated and attached as an ``Authorization``
            header in the HTTP request.

            This type of authorization should generally only be used
            when calling Google APIs hosted on \*.googleapis.com.
        oidc_token (~.target.OidcToken):
            If specified, an `OIDC
            <https://developers.google.com/identity/protocols/OpenIDConnect>`__
            token will be generated and attached as an
            ``Authorization`` header in the HTTP request.

            This type of authorization can be used for many scenarios,
            including calling Cloud Run, or endpoints where you intend
            to validate the token yourself.
    """

    url = proto.Field(proto.STRING, number=1)
    http_method = proto.Field(
        proto.ENUM, number=2, enum="HttpMethod",
    )
    headers = proto.MapField(proto.STRING, proto.STRING, number=3)
    body = proto.Field(proto.BYTES, number=4)
    oauth_token = proto.Field(
        proto.MESSAGE, number=5, oneof="authorization_header", message="OAuthToken",
    )
    oidc_token = proto.Field(
        proto.MESSAGE, number=6, oneof="authorization_header", message="OidcToken",
    )
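Two of the constraints documented above (a body is only allowed with POST, PUT, or PATCH, and headers must stay under 80KB) lend themselves to a pre-flight check before creating a task. A sketch with a hypothetical helper name; nothing here is part of the Cloud Tasks client itself:

```python
# Per the HttpRequest.body docs, only these methods may carry a body.
BODY_METHODS = {"POST", "PUT", "PATCH"}


def check_http_task(method: str, body: bytes, headers: dict) -> None:
    """Hypothetical pre-flight check mirroring two documented rules:
    a body is only allowed for POST/PUT/PATCH, and the total header
    size must be less than 80KB."""
    if body and method not in BODY_METHODS:
        raise ValueError(f"body not allowed with HTTP method {method}")
    header_size = sum(len(k) + len(v) for k, v in headers.items())
    if header_size >= 80 * 1024:
        raise ValueError("headers must be less than 80KB")


check_http_task("POST", b'{"k": 1}', {"Content-Type": "application/json"})
```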
class CryptoKey(proto.Message):
    r"""A [CryptoKey][google.cloud.kms.v1.CryptoKey] represents a logical
    key that can be used for cryptographic operations.

    A [CryptoKey][google.cloud.kms.v1.CryptoKey] is made up of zero or
    more [versions][google.cloud.kms.v1.CryptoKeyVersion], which
    represent the actual key material used in cryptographic
    operations.

    Attributes:
        name (str):
            Output only. The resource name for this
            [CryptoKey][google.cloud.kms.v1.CryptoKey] in the format
            ``projects/*/locations/*/keyRings/*/cryptoKeys/*``.
        primary (google.cloud.kms_v1.types.CryptoKeyVersion):
            Output only. A copy of the "primary"
            [CryptoKeyVersion][google.cloud.kms.v1.CryptoKeyVersion]
            that will be used by
            [Encrypt][google.cloud.kms.v1.KeyManagementService.Encrypt]
            when this [CryptoKey][google.cloud.kms.v1.CryptoKey] is
            given in
            [EncryptRequest.name][google.cloud.kms.v1.EncryptRequest.name].

            The [CryptoKey][google.cloud.kms.v1.CryptoKey]'s primary
            version can be updated via
            [UpdateCryptoKeyPrimaryVersion][google.cloud.kms.v1.KeyManagementService.UpdateCryptoKeyPrimaryVersion].

            Keys with [purpose][google.cloud.kms.v1.CryptoKey.purpose]
            [ENCRYPT_DECRYPT][google.cloud.kms.v1.CryptoKey.CryptoKeyPurpose.ENCRYPT_DECRYPT]
            may have a primary. For other keys, this field will be
            omitted.
        purpose (google.cloud.kms_v1.types.CryptoKey.CryptoKeyPurpose):
            Immutable. The immutable purpose of this
            [CryptoKey][google.cloud.kms.v1.CryptoKey].
        create_time (google.protobuf.timestamp_pb2.Timestamp):
            Output only. The time at which this
            [CryptoKey][google.cloud.kms.v1.CryptoKey] was created.
        next_rotation_time (google.protobuf.timestamp_pb2.Timestamp):
            At
            [next_rotation_time][google.cloud.kms.v1.CryptoKey.next_rotation_time],
            the Key Management Service will automatically:

            1. Create a new version of this
               [CryptoKey][google.cloud.kms.v1.CryptoKey].
            2. Mark the new version as primary.

            Key rotations performed manually via
            [CreateCryptoKeyVersion][google.cloud.kms.v1.KeyManagementService.CreateCryptoKeyVersion]
            and
            [UpdateCryptoKeyPrimaryVersion][google.cloud.kms.v1.KeyManagementService.UpdateCryptoKeyPrimaryVersion]
            do not affect
            [next_rotation_time][google.cloud.kms.v1.CryptoKey.next_rotation_time].

            Keys with [purpose][google.cloud.kms.v1.CryptoKey.purpose]
            [ENCRYPT_DECRYPT][google.cloud.kms.v1.CryptoKey.CryptoKeyPurpose.ENCRYPT_DECRYPT]
            support automatic rotation. For other keys, this field must
            be omitted.
        rotation_period (google.protobuf.duration_pb2.Duration):
            [next_rotation_time][google.cloud.kms.v1.CryptoKey.next_rotation_time]
            will be advanced by this period when the service
            automatically rotates a key. Must be at least 24 hours and
            at most 876,000 hours.

            If
            [rotation_period][google.cloud.kms.v1.CryptoKey.rotation_period]
            is set,
            [next_rotation_time][google.cloud.kms.v1.CryptoKey.next_rotation_time]
            must also be set.

            Keys with [purpose][google.cloud.kms.v1.CryptoKey.purpose]
            [ENCRYPT_DECRYPT][google.cloud.kms.v1.CryptoKey.CryptoKeyPurpose.ENCRYPT_DECRYPT]
            support automatic rotation. For other keys, this field must
            be omitted.
        version_template (google.cloud.kms_v1.types.CryptoKeyVersionTemplate):
            A template describing settings for new
            [CryptoKeyVersion][google.cloud.kms.v1.CryptoKeyVersion]
            instances. The properties of new
            [CryptoKeyVersion][google.cloud.kms.v1.CryptoKeyVersion]
            instances created by either
            [CreateCryptoKeyVersion][google.cloud.kms.v1.KeyManagementService.CreateCryptoKeyVersion]
            or auto-rotation are controlled by this template.
        labels (Sequence[google.cloud.kms_v1.types.CryptoKey.LabelsEntry]):
            Labels with user-defined metadata. For more information,
            see `Labeling Keys
            <https://cloud.google.com/kms/docs/labeling-keys>`__.
    """

    class CryptoKeyPurpose(proto.Enum):
        r"""[CryptoKeyPurpose][google.cloud.kms.v1.CryptoKey.CryptoKeyPurpose]
        describes the cryptographic capabilities of a
        [CryptoKey][google.cloud.kms.v1.CryptoKey]. A given key can
        only be used for the operations allowed by its purpose. For
        more information, see `Key purposes
        <https://cloud.google.com/kms/docs/algorithms#key_purposes>`__.
        """
        CRYPTO_KEY_PURPOSE_UNSPECIFIED = 0
        ENCRYPT_DECRYPT = 1
        ASYMMETRIC_SIGN = 5
        ASYMMETRIC_DECRYPT = 6

    name = proto.Field(
        proto.STRING,
        number=1,
    )
    primary = proto.Field(
        proto.MESSAGE,
        number=2,
        message="CryptoKeyVersion",
    )
    purpose = proto.Field(
        proto.ENUM,
        number=3,
        enum=CryptoKeyPurpose,
    )
    create_time = proto.Field(
        proto.MESSAGE,
        number=5,
        message=timestamp_pb2.Timestamp,
    )
    next_rotation_time = proto.Field(
        proto.MESSAGE,
        number=7,
        message=timestamp_pb2.Timestamp,
    )
    rotation_period = proto.Field(
        proto.MESSAGE,
        number=8,
        oneof="rotation_schedule",
        message=duration_pb2.Duration,
    )
    version_template = proto.Field(
        proto.MESSAGE,
        number=11,
        message="CryptoKeyVersionTemplate",
    )
    labels = proto.MapField(
        proto.STRING,
        proto.STRING,
        number=10,
    )
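The rotation arithmetic described for ``next_rotation_time`` and ``rotation_period`` can be sketched with plain datetimes. ``next_rotation`` is a hypothetical helper; the 24-hour and 876,000-hour bounds come straight from the docstring:

```python
from datetime import datetime, timedelta

# Documented bounds on rotation_period.
MIN_ROTATION = timedelta(hours=24)
MAX_ROTATION = timedelta(hours=876000)


def next_rotation(current: datetime, rotation_period: timedelta) -> datetime:
    """Sketch of the documented auto-rotation step: on rotation,
    next_rotation_time is advanced by rotation_period."""
    if not MIN_ROTATION <= rotation_period <= MAX_ROTATION:
        raise ValueError("rotation_period outside the documented bounds")
    return current + rotation_period


# A 30-day rotation period advances a Jan 1 rotation time to Jan 31.
next_rotation(datetime(2021, 1, 1), timedelta(days=30))
```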
class Dataset(proto.Message):
    r"""A collection of DataItems and Annotations on them.

    Attributes:
        name (str):
            Output only. The resource name of the Dataset.
        display_name (str):
            Required. The user-defined name of the Dataset. The name
            can be up to 128 characters long and can consist of any
            UTF-8 characters.
        metadata_schema_uri (str):
            Required. Points to a YAML file stored on Google Cloud
            Storage describing additional information about the
            Dataset. The schema is defined as an OpenAPI 3.0.2 Schema
            Object. The schema files that can be used here are found in
            gs://google-cloud-aiplatform/schema/dataset/metadata/.
        metadata (google.protobuf.struct_pb2.Value):
            Required. Additional information about the Dataset.
        create_time (google.protobuf.timestamp_pb2.Timestamp):
            Output only. Timestamp when this Dataset was created.
        update_time (google.protobuf.timestamp_pb2.Timestamp):
            Output only. Timestamp when this Dataset was last updated.
        etag (str):
            Used to perform consistent read-modify-write updates. If
            not set, a blind "overwrite" update happens.
        labels (Sequence[google.cloud.aiplatform_v1beta1.types.Dataset.LabelsEntry]):
            The labels with user-defined metadata to organize your
            Datasets. Label keys and values can be no longer than 64
            characters (Unicode codepoints), and can only contain
            lowercase letters, numeric characters, underscores and
            dashes. International characters are allowed. No more than
            64 user labels can be associated with one Dataset (system
            labels are excluded). See https://goo.gl/xmQnxf for more
            information and examples of labels. System reserved label
            keys are prefixed with "aiplatform.googleapis.com/" and are
            immutable. The following system labels exist for each
            Dataset:

            -  "aiplatform.googleapis.com/dataset_metadata_schema":
               output only, its value is the
               [metadata_schema's][google.cloud.aiplatform.v1beta1.Dataset.metadata_schema_uri]
               title.
        encryption_spec (google.cloud.aiplatform_v1beta1.types.EncryptionSpec):
            Customer-managed encryption key spec for a Dataset. If set,
            this Dataset and all sub-resources of this Dataset will be
            secured by this key.
    """

    name = proto.Field(
        proto.STRING,
        number=1,
    )
    display_name = proto.Field(
        proto.STRING,
        number=2,
    )
    metadata_schema_uri = proto.Field(
        proto.STRING,
        number=3,
    )
    metadata = proto.Field(
        proto.MESSAGE,
        number=8,
        message=struct_pb2.Value,
    )
    create_time = proto.Field(
        proto.MESSAGE,
        number=4,
        message=timestamp_pb2.Timestamp,
    )
    update_time = proto.Field(
        proto.MESSAGE,
        number=5,
        message=timestamp_pb2.Timestamp,
    )
    etag = proto.Field(
        proto.STRING,
        number=6,
    )
    labels = proto.MapField(
        proto.STRING,
        proto.STRING,
        number=7,
    )
    encryption_spec = proto.Field(
        proto.MESSAGE,
        number=11,
        message=gca_encryption_spec.EncryptionSpec,
    )
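The label rules quoted above (at most 64 characters; lowercase letters, digits, underscores, and dashes) can be approximated with a regular expression. This is a rough illustration only: the docs also allow international characters, which the ASCII-only pattern below deliberately ignores:

```python
import re

# Illustrative ASCII approximation of the documented label syntax;
# the real service additionally accepts international characters.
_LABEL_RE = re.compile(r"^[a-z0-9_-]{1,64}$")


def is_valid_label(key: str, value: str) -> bool:
    """Rough client-side check of the documented label rules: at most
    64 characters, limited to lowercase letters, digits, underscores,
    and dashes."""
    return bool(_LABEL_RE.match(key)) and bool(_LABEL_RE.match(value))


is_valid_label("env", "prod")  # a typical well-formed label pair
```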
class GameServerCluster(proto.Message):
    r"""A game server cluster resource.

    Attributes:
        name (str):
            Required. The resource name of the game server cluster.
            Uses the form:
            ``projects/{project}/locations/{location}/realms/{realm}/gameServerClusters/{cluster}``.
            For example,
            ``projects/my-project/locations/{location}/realms/zanzibar/gameServerClusters/my-onprem-cluster``.
        create_time (google.protobuf.timestamp_pb2.Timestamp):
            Output only. The creation time.
        update_time (google.protobuf.timestamp_pb2.Timestamp):
            Output only. The last-modified time.
        labels (Mapping[str, str]):
            The labels associated with this game server cluster. Each
            label is a key-value pair.
        connection_info (google.cloud.gaming_v1beta.types.GameServerClusterConnectionInfo):
            The game server cluster connection information. This
            information is used to manage game server clusters.
        etag (str):
            ETag of the resource.
        description (str):
            Human readable description of the cluster.
    """

    name = proto.Field(
        proto.STRING,
        number=1,
    )
    create_time = proto.Field(
        proto.MESSAGE,
        number=2,
        message=timestamp_pb2.Timestamp,
    )
    update_time = proto.Field(
        proto.MESSAGE,
        number=3,
        message=timestamp_pb2.Timestamp,
    )
    labels = proto.MapField(
        proto.STRING,
        proto.STRING,
        number=4,
    )
    connection_info = proto.Field(
        proto.MESSAGE,
        number=5,
        message="GameServerClusterConnectionInfo",
    )
    etag = proto.Field(
        proto.STRING,
        number=6,
    )
    description = proto.Field(
        proto.STRING,
        number=7,
    )
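The resource-name form documented for ``name`` can be assembled with a small formatter. ``game_server_cluster_name`` is a hypothetical helper (real clients usually generate an equivalent path method):

```python
def game_server_cluster_name(
    project: str, location: str, realm: str, cluster: str
) -> str:
    """Builds the documented resource-name form
    projects/{project}/locations/{location}/realms/{realm}/gameServerClusters/{cluster}."""
    return (
        f"projects/{project}/locations/{location}"
        f"/realms/{realm}/gameServerClusters/{cluster}"
    )


# Mirrors the docstring's example, with a concrete location filled in.
game_server_cluster_name("my-project", "us-central1", "zanzibar", "my-onprem-cluster")
```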
class Filter(proto.Message):
    r"""A filter for a budget, limiting the scope of the cost to
    calculate.

    Attributes:
        projects (Sequence[str]):
            Optional. A set of projects of the form
            ``projects/{project}``, specifying that usage from only
            this set of projects should be included in the budget. If
            omitted, the report will include all usage for the billing
            account, regardless of which project the usage occurred on.
            Only zero or one project can be specified currently.
        credit_types (Sequence[str]):
            Optional. If
            [Filter.credit_types_treatment][google.cloud.billing.budgets.v1beta1.Filter.credit_types_treatment]
            is INCLUDE_SPECIFIED_CREDITS, this is a list of credit
            types to be subtracted from gross cost to determine the
            spend for threshold calculations. See `a list of acceptable
            credit type values
            <https://cloud.google.com/billing/docs/how-to/export-data-bigquery-tables#credits-type>`__.

            If
            [Filter.credit_types_treatment][google.cloud.billing.budgets.v1beta1.Filter.credit_types_treatment]
            is **not** INCLUDE_SPECIFIED_CREDITS, this field must be
            empty.
        credit_types_treatment (google.cloud.billing.budgets_v1beta1.types.Filter.CreditTypesTreatment):
            Optional. If not set, default behavior is
            ``INCLUDE_ALL_CREDITS``.
        services (Sequence[str]):
            Optional. A set of services of the form
            ``services/{service_id}``, specifying that usage from only
            this set of services should be included in the budget. If
            omitted, the report will include usage for all the
            services. The service names are available through the
            Catalog API:
            https://cloud.google.com/billing/v1/how-tos/catalog-api.
        subaccounts (Sequence[str]):
            Optional. A set of subaccounts of the form
            ``billingAccounts/{account_id}``, specifying that usage
            from only this set of subaccounts should be included in the
            budget. If a subaccount is set to the name of the parent
            account, usage from the parent account will be included. If
            omitted, the report will include usage from the parent
            account and all subaccounts, if they exist.
        labels (Sequence[google.cloud.billing.budgets_v1beta1.types.Filter.LabelsEntry]):
            Optional. A single label and value pair specifying that
            usage from only this set of labeled resources should be
            included in the budget. Currently, multiple entries or
            multiple values per entry are not allowed. If omitted, the
            report will include all labeled and unlabeled usage.
        calendar_period (google.cloud.billing.budgets_v1beta1.types.CalendarPeriod):
            Optional. Specifies to track usage for a recurring calendar
            period. For example, assume that CalendarPeriod.QUARTER is
            set. The budget will track usage from April 1 to June 30
            when the current calendar month is April, May, or June.
            After that, it will track usage from July 1 to September 30
            when the current calendar month is July, August, or
            September, and so on.
        custom_period (google.cloud.billing.budgets_v1beta1.types.CustomPeriod):
            Optional. Specifies to track usage from any start date
            (required) to any end date (optional). This time period is
            static; it does not recur.
    """

    class CreditTypesTreatment(proto.Enum):
        r"""Specifies how credits are applied when determining the spend for
        threshold calculations. Budgets track the total cost minus any
        applicable selected credits. `See the documentation for a list
        of credit types
        <https://cloud.google.com/billing/docs/how-to/export-data-bigquery-tables#credits-type>`__.
        """
        CREDIT_TYPES_TREATMENT_UNSPECIFIED = 0
        INCLUDE_ALL_CREDITS = 1
        EXCLUDE_ALL_CREDITS = 2
        INCLUDE_SPECIFIED_CREDITS = 3

    projects = proto.RepeatedField(
        proto.STRING,
        number=1,
    )
    credit_types = proto.RepeatedField(
        proto.STRING,
        number=7,
    )
    credit_types_treatment = proto.Field(
        proto.ENUM,
        number=4,
        enum=CreditTypesTreatment,
    )
    services = proto.RepeatedField(
        proto.STRING,
        number=3,
    )
    subaccounts = proto.RepeatedField(
        proto.STRING,
        number=5,
    )
    labels = proto.MapField(
        proto.STRING,
        proto.MESSAGE,
        number=6,
        message=struct_pb2.ListValue,
    )
    calendar_period = proto.Field(
        proto.ENUM,
        number=8,
        oneof='usage_period',
        enum='CalendarPeriod',
    )
    custom_period = proto.Field(
        proto.MESSAGE,
        number=9,
        oneof='usage_period',
        message='CustomPeriod',
    )
class ResourceSearchResult(proto.Message):
    r"""A result of Resource Search, containing information of a cloud
    resource.

    Attributes:
        name (str):
            The full resource name of this resource. Example:
            ``//compute.googleapis.com/projects/my_project_123/zones/zone1/instances/instance1``.
            See `Cloud Asset Inventory Resource Name Format
            <https://cloud.google.com/asset-inventory/docs/resource-name-format>`__
            for more information.

            To search against the ``name``:

            -  use a field query. Example: ``name:instance1``
            -  use a free text query. Example: ``instance1``
        asset_type (str):
            The type of this resource. Example:
            ``compute.googleapis.com/Disk``.

            To search against the ``asset_type``:

            -  specify the ``asset_type`` field in your search request.
        project (str):
            The project that this resource belongs to, in the form of
            projects/{PROJECT_NUMBER}. This field is available when the
            resource belongs to a project.

            To search against ``project``:

            -  use a field query. Example: ``project:12345``
            -  use a free text query. Example: ``12345``
            -  specify the ``scope`` field as this project in your
               search request.
        folders (Sequence[str]):
            The folder(s) that this resource belongs to, in the form of
            folders/{FOLDER_NUMBER}. This field is available when the
            resource belongs to one or more folders.

            To search against ``folders``:

            -  use a field query. Example: ``folders:(123 OR 456)``
            -  use a free text query. Example: ``123``
            -  specify the ``scope`` field as this folder in your
               search request.
        organization (str):
            The organization that this resource belongs to, in the form
            of organizations/{ORGANIZATION_NUMBER}. This field is
            available when the resource belongs to an organization.

            To search against ``organization``:

            -  use a field query. Example: ``organization:123``
            -  use a free text query. Example: ``123``
            -  specify the ``scope`` field as this organization in your
               search request.
        display_name (str):
            The display name of this resource. This field is available
            only when the resource's proto contains it.

            To search against the ``display_name``:

            -  use a field query. Example: ``displayName:"My Instance"``
            -  use a free text query. Example: ``"My Instance"``
        description (str):
            One or more paragraphs of text description of this
            resource. Maximum length could be up to 1M bytes. This
            field is available only when the resource's proto contains
            it.

            To search against the ``description``:

            -  use a field query. Example:
               ``description:"important instance"``
            -  use a free text query. Example: ``"important instance"``
        location (str):
            Location can be ``global``, regional like ``us-east1``, or
            zonal like ``us-west1-b``. This field is available only
            when the resource's proto contains it.

            To search against the ``location``:

            -  use a field query. Example: ``location:us-west*``
            -  use a free text query. Example: ``us-west*``
        labels (Sequence[google.cloud.asset_v1.types.ResourceSearchResult.LabelsEntry]):
            Labels associated with this resource. See `Labelling and
            grouping GCP resources
            <https://cloud.google.com/blog/products/gcp/labelling-and-grouping-your-google-cloud-platform-resources>`__
            for more information. This field is available only when the
            resource's proto contains it.

            To search against the ``labels``:

            -  use a field query:

               -  query on any label's key or value. Example:
                  ``labels:prod``
               -  query by a given label. Example: ``labels.env:prod``
               -  query by a given label's existence. Example:
                  ``labels.env:*``

            -  use a free text query. Example: ``prod``
        network_tags (Sequence[str]):
            Network tags associated with this resource. Like labels,
            network tags are a type of annotations used to group GCP
            resources. See `Labelling GCP resources
            <https://cloud.google.com/blog/products/gcp/labelling-and-grouping-your-google-cloud-platform-resources>`__
            for more information. This field is available only when the
            resource's proto contains it.

            To search against the ``network_tags``:

            -  use a field query. Example: ``networkTags:internal``
            -  use a free text query. Example: ``internal``
        kms_key (str):
            The Cloud KMS `CryptoKey
            <https://cloud.google.com/kms/docs/reference/rest/v1/projects.locations.keyRings.cryptoKeys?hl=en>`__
            name or `CryptoKeyVersion
            <https://cloud.google.com/kms/docs/reference/rest/v1/projects.locations.keyRings.cryptoKeys.cryptoKeyVersions?hl=en>`__
            name. This field is available only when the resource's
            proto contains it.

            To search against the ``kms_key``:

            -  use a field query. Example: ``kmsKey:key``
            -  use a free text query. Example: ``key``
        create_time (google.protobuf.timestamp_pb2.Timestamp):
            The create timestamp of this resource, at which the
            resource was created. The granularity is in seconds.
            Timestamp.nanos will always be 0. This field is available
            only when the resource's proto contains it.

            To search against ``create_time``:

            -  use a field query.

               -  value in seconds since unix epoch. Example:
                  ``createTime > 1609459200``
               -  value in date string. Example:
                  ``createTime > 2021-01-01``
               -  value in date-time string (must be quoted). Example:
                  ``createTime > "2021-01-01T00:00:00"``
        update_time (google.protobuf.timestamp_pb2.Timestamp):
            The last update timestamp of this resource, at which the
            resource was last modified or deleted. The granularity is
            in seconds. Timestamp.nanos will always be 0. This field is
            available only when the resource's proto contains it.

            To search against ``update_time``:

            -  use a field query.

               -  value in seconds since unix epoch. Example:
                  ``updateTime < 1609459200``
               -  value in date string. Example:
                  ``updateTime < 2021-01-01``
               -  value in date-time string (must be quoted). Example:
                  ``updateTime < "2021-01-01T00:00:00"``
        state (str):
            The state of this resource. Different resource types have
            different state definitions that are mapped from various
            fields of different resource types. This field is available
            only when the resource's proto contains it.

            Example: if the resource is an instance provided by Compute
            Engine, its state will include PROVISIONING, STAGING,
            RUNNING, STOPPING, SUSPENDING, SUSPENDED, REPAIRING, and
            TERMINATED. See ``status`` definition in `API Reference
            <https://cloud.google.com/compute/docs/reference/rest/v1/instances>`__.
            If the resource is a project provided by Cloud Resource
            Manager, its state will include
            LIFECYCLE_STATE_UNSPECIFIED, ACTIVE, DELETE_REQUESTED and
            DELETE_IN_PROGRESS. See ``lifecycleState`` definition in
            `API Reference
            <https://cloud.google.com/resource-manager/reference/rest/v1/projects>`__.

            To search against the ``state``:

            -  use a field query. Example: ``state:RUNNING``
            -  use a free text query. Example: ``RUNNING``
        additional_attributes (google.protobuf.struct_pb2.Struct):
            The additional searchable attributes of this resource. The
            attributes may vary from one resource type to another.
            Examples: ``projectId`` for Project, ``dnsName`` for DNS
            ManagedZone. This field contains a subset of the resource
            metadata fields that are returned by the List or Get APIs
            provided by the corresponding GCP service (e.g., Compute
            Engine). See `API references and supported searchable
            attributes
            <https://cloud.google.com/asset-inventory/docs/supported-asset-types#searchable_asset_types>`__
            to see which fields are included.

            You can search values of these fields through free text
            search. However, you should not consume the field
            programmatically as the field names and values may change
            as the GCP service updates to a new incompatible API
            version.

            To search against the ``additional_attributes``:

            -  use a free text query to match the attributes values.
               Example: to search
               ``additional_attributes = { dnsName: "foobar" }``, you
               can issue a query ``foobar``.
        parent_full_resource_name (str):
            The full resource name of this resource's parent, if it has
            one.

            To search against the ``parent_full_resource_name``:

            -  use a field query. Example:
               ``parentFullResourceName:"project-name"``
            -  use a free text query. Example: ``project-name``
        parent_asset_type (str):
            The type of this resource's immediate parent, if there is
            one.

            To search against the ``parent_asset_type``:

            -  use a field query. Example:
               ``parentAssetType:"cloudresourcemanager.googleapis.com/Project"``
            -  use a free text query. Example:
               ``cloudresourcemanager.googleapis.com/Project``
    """

    name = proto.Field(
        proto.STRING,
        number=1,
    )
    asset_type = proto.Field(
        proto.STRING,
        number=2,
    )
    project = proto.Field(
        proto.STRING,
        number=3,
    )
    folders = proto.RepeatedField(
        proto.STRING,
        number=17,
    )
    organization = proto.Field(
        proto.STRING,
        number=18,
    )
    display_name = proto.Field(
        proto.STRING,
        number=4,
    )
    description = proto.Field(
        proto.STRING,
        number=5,
    )
    location = proto.Field(
        proto.STRING,
        number=6,
    )
    labels = proto.MapField(
        proto.STRING,
        proto.STRING,
        number=7,
    )
    network_tags = proto.RepeatedField(
        proto.STRING,
        number=8,
    )
    kms_key = proto.Field(
        proto.STRING,
        number=10,
    )
    create_time = proto.Field(
        proto.MESSAGE,
        number=11,
        message=timestamp_pb2.Timestamp,
    )
    update_time = proto.Field(
        proto.MESSAGE,
        number=12,
        message=timestamp_pb2.Timestamp,
    )
    state = proto.Field(
        proto.STRING,
        number=13,
    )
    additional_attributes = proto.Field(
        proto.MESSAGE,
        number=9,
        message=struct_pb2.Struct,
    )
    parent_full_resource_name = proto.Field(
        proto.STRING,
        number=19,
    )
    parent_asset_type = proto.Field(
        proto.STRING,
        number=103,
    )
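The field-query forms shown throughout these attribute descriptions (``field:value``, with quoted values such as ``displayName:"My Instance"``) can be assembled with a tiny helper. ``field_query`` is a hypothetical illustration of the documented query syntax, not part of the Asset client:

```python
def field_query(field: str, value: str) -> str:
    """Builds a simple field query of the form shown in the docs,
    quoting values that contain spaces (e.g. displayName:"My Instance")."""
    if " " in value:
        value = f'"{value}"'
    return f"{field}:{value}"


field_query("labels.env", "prod")        # a query on a given label
field_query("displayName", "My Instance")  # a quoted display-name query
```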
class SparkJob(proto.Message):
    r"""A Dataproc job for running `Apache Spark
    <http://spark.apache.org/>`__ applications on YARN.

    The specification of the main method to call to drive the job.
    Specify either the jar file that contains the main class or the
    main class name. To pass both a main jar and a main class in that
    jar, add the jar to ``CommonJob.jar_file_uris``, and then specify
    the main class name in ``main_class``.

    Attributes:
        main_jar_file_uri (str):
            The HCFS URI of the jar file that contains the main class.
        main_class (str):
            The name of the driver's main class. The jar file that
            contains the class must be in the default CLASSPATH or
            specified in ``jar_file_uris``.
        args (Sequence[str]):
            Optional. The arguments to pass to the driver. Do not
            include arguments, such as ``--conf``, that can be set as
            job properties, since a collision may occur that causes an
            incorrect job submission.
        jar_file_uris (Sequence[str]):
            Optional. HCFS URIs of jar files to add to the CLASSPATHs
            of the Spark driver and tasks.
        file_uris (Sequence[str]):
            Optional. HCFS URIs of files to be placed in the working
            directory of each executor. Useful for naively parallel
            tasks.
        archive_uris (Sequence[str]):
            Optional. HCFS URIs of archives to be extracted into the
            working directory of each executor. Supported file types:
            .jar, .tar, .tar.gz, .tgz, and .zip.
        properties (Sequence[google.cloud.dataproc_v1beta2.types.SparkJob.PropertiesEntry]):
            Optional. A mapping of property names to values, used to
            configure Spark. Properties that conflict with values set
            by the Dataproc API may be overwritten. Can include
            properties set in /etc/spark/conf/spark-defaults.conf and
            classes in user code.
        logging_config (google.cloud.dataproc_v1beta2.types.LoggingConfig):
            Optional. The runtime log config for job execution.
    """

    main_jar_file_uri = proto.Field(
        proto.STRING,
        number=1,
        oneof="driver",
    )
    main_class = proto.Field(
        proto.STRING,
        number=2,
        oneof="driver",
    )
    args = proto.RepeatedField(
        proto.STRING,
        number=3,
    )
    jar_file_uris = proto.RepeatedField(
        proto.STRING,
        number=4,
    )
    file_uris = proto.RepeatedField(
        proto.STRING,
        number=5,
    )
    archive_uris = proto.RepeatedField(
        proto.STRING,
        number=6,
    )
    properties = proto.MapField(
        proto.STRING,
        proto.STRING,
        number=7,
    )
    logging_config = proto.Field(
        proto.MESSAGE,
        number=8,
        message="LoggingConfig",
    )
class PigJob(proto.Message):
    r"""A Dataproc job for running `Apache Pig
    <https://pig.apache.org/>`__ queries on YARN.

    Attributes:
        query_file_uri (str):
            The HCFS URI of the script that contains the
            Pig queries.
        query_list (google.cloud.dataproc_v1beta2.types.QueryList):
            A list of queries.
        continue_on_failure (bool):
            Optional. Whether to continue executing queries if a query
            fails. The default value is ``false``. Setting to ``true``
            can be useful when executing independent parallel queries.
        script_variables (Sequence[google.cloud.dataproc_v1beta2.types.PigJob.ScriptVariablesEntry]):
            Optional. Mapping of query variable names to values
            (equivalent to the Pig command: ``name=[value]``).
        properties (Sequence[google.cloud.dataproc_v1beta2.types.PigJob.PropertiesEntry]):
            Optional. A mapping of property names to
            values, used to configure Pig. Properties that
            conflict with values set by the Dataproc API may
            be overwritten. Can include properties set in
            /etc/hadoop/conf/*-site.xml,
            /etc/pig/conf/pig.properties, and classes in
            user code.
        jar_file_uris (Sequence[str]):
            Optional. HCFS URIs of jar files to add to
            the CLASSPATH of the Pig Client and Hadoop
            MapReduce (MR) tasks. Can contain Pig UDFs.
        logging_config (google.cloud.dataproc_v1beta2.types.LoggingConfig):
            Optional. The runtime log config for job
            execution.
    """

    query_file_uri = proto.Field(
        proto.STRING,
        number=1,
        oneof="queries",
    )
    query_list = proto.Field(
        proto.MESSAGE,
        number=2,
        oneof="queries",
        message="QueryList",
    )
    continue_on_failure = proto.Field(
        proto.BOOL,
        number=3,
    )
    script_variables = proto.MapField(
        proto.STRING,
        proto.STRING,
        number=4,
    )
    properties = proto.MapField(
        proto.STRING,
        proto.STRING,
        number=5,
    )
    jar_file_uris = proto.RepeatedField(
        proto.STRING,
        number=6,
    )
    logging_config = proto.Field(
        proto.MESSAGE,
        number=7,
        message="LoggingConfig",
    )
class QueryParameters(proto.Message):
    r"""Represents the parameters of a conversational query.

    Attributes:
        time_zone (str):
            The time zone of this conversational query from the `time
            zone database <https://www.iana.org/time-zones>`__, e.g.,
            America/New_York, Europe/Paris. If not provided, the time
            zone specified in the agent is used.
        geo_location (google.type.latlng_pb2.LatLng):
            The geo location of this conversational
            query.
        session_entity_types (Sequence[google.cloud.dialogflowcx_v3.types.SessionEntityType]):
            Additional session entity types to replace or
            extend developer entity types with. The entity
            synonyms apply to all languages and persist for
            the session of this query.
        payload (google.protobuf.struct_pb2.Struct):
            This field can be used to pass custom data into the webhook
            associated with the agent. Arbitrary JSON objects are
            supported. Some integrations that query a Dialogflow agent
            may provide additional information in the payload. In
            particular, for the Dialogflow Phone Gateway integration,
            this field has the form:

            ::

                {
                  "telephony": {
                    "caller_id": "+18558363987"
                  }
                }
        parameters (google.protobuf.struct_pb2.Struct):
            Additional parameters to be put into [session
            parameters][SessionInfo.parameters]. To remove a parameter
            from the session, clients should explicitly set the
            parameter value to null.

            You can reference the session parameters in the agent with
            the following format: $session.params.parameter-id.

            Depending on your protocol or client library language, this
            is a map, associative array, symbol table, dictionary, or
            JSON object composed of a collection of (MapKey, MapValue)
            pairs:

            -  MapKey type: string
            -  MapKey value: parameter name
            -  MapValue type:

               -  If parameter's entity type is a composite entity: map
               -  Else: depending on parameter value type, could be one
                  of string, number, boolean, null, list or map

            -  MapValue value:

               -  If parameter's entity type is a composite entity: map
                  from composite entity property names to property
                  values
               -  Else: parameter value
        current_page (str):
            The unique identifier of the
            [page][google.cloud.dialogflow.cx.v3.Page] to override the
            [current page][QueryResult.current_page] in the session.
            Format:
            ``projects/<Project ID>/locations/<Location ID>/agents/<Agent ID>/flows/<Flow ID>/pages/<Page ID>``.

            If ``current_page`` is specified, the previous state of the
            session will be ignored by Dialogflow, including the
            [previous page][QueryResult.current_page] and the [previous
            session parameters][QueryResult.parameters]. In most cases,
            [current_page][google.cloud.dialogflow.cx.v3.QueryParameters.current_page]
            and
            [parameters][google.cloud.dialogflow.cx.v3.QueryParameters.parameters]
            should be configured together to direct a session to a
            specific state.
        disable_webhook (bool):
            Whether to disable webhook calls for this
            request.
        analyze_query_text_sentiment (bool):
            Configures whether sentiment analysis should
            be performed. If not provided, sentiment
            analysis is not performed.
        webhook_headers (Sequence[google.cloud.dialogflowcx_v3.types.QueryParameters.WebhookHeadersEntry]):
            This field can be used to pass HTTP headers
            for a webhook call. These headers will be sent
            to webhook along with the headers that have been
            configured through Dialogflow web console. The
            headers defined within this field will overwrite
            the headers configured through Dialogflow
            console if there is a conflict. Header names are
            case-insensitive. Google's specified headers are
            not allowed. Including: "Host", "Content-Length",
            "Connection", "From", "User-Agent",
            "Accept-Encoding", "If-Modified-Since",
            "If-None-Match", "X-Forwarded-For", etc.
        flow_versions (Sequence[str]):
            A list of flow versions to override for the request. Format:
            ``projects/<Project ID>/locations/<Location ID>/agents/<Agent ID>/flows/<Flow ID>/versions/<Version ID>``.

            If version 1 of flow X is included in this list, the traffic
            of flow X will go through version 1 regardless of the
            version configuration in the environment. Each flow can have
            at most one version specified in this list.
    """

    time_zone = proto.Field(proto.STRING, number=1,)
    geo_location = proto.Field(proto.MESSAGE, number=2, message=latlng_pb2.LatLng,)
    session_entity_types = proto.RepeatedField(
        proto.MESSAGE, number=3, message=session_entity_type.SessionEntityType,
    )
    payload = proto.Field(proto.MESSAGE, number=4, message=struct_pb2.Struct,)
    parameters = proto.Field(proto.MESSAGE, number=5, message=struct_pb2.Struct,)
    current_page = proto.Field(proto.STRING, number=6,)
    disable_webhook = proto.Field(proto.BOOL, number=7,)
    analyze_query_text_sentiment = proto.Field(proto.BOOL, number=8,)
    webhook_headers = proto.MapField(proto.STRING, proto.STRING, number=10,)
    flow_versions = proto.RepeatedField(proto.STRING, number=14,)
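The ``flow_versions`` constraint — each flow can have at most one version specified in the list — can be checked client-side before sending a request. A sketch under the documented resource-name format (the ``check_flow_versions`` helper is hypothetical, not part of the Dialogflow CX client):

```python
def check_flow_versions(flow_versions):
    """Reject lists that pin the same flow to more than one version.

    Each entry has the form
    ``projects/<p>/locations/<l>/agents/<a>/flows/<flow>/versions/<v>``;
    the flow is everything before the ``/versions/`` suffix.
    """
    seen = {}
    for name in flow_versions:
        flow, sep, version = name.partition("/versions/")
        if not sep:
            raise ValueError(f"not a version resource name: {name}")
        if flow in seen and seen[flow] != version:
            raise ValueError(f"flow {flow} pinned to multiple versions")
        seen[flow] = version


# Two different flows, one version each: valid.
check_flow_versions([
    "projects/p/locations/l/agents/a/flows/x/versions/1",
    "projects/p/locations/l/agents/a/flows/y/versions/2",
])
```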
class BatchPredictRequest(proto.Message):
    r"""Request message for
    [PredictionService.BatchPredict][google.cloud.automl.v1.PredictionService.BatchPredict].

    Attributes:
        name (str):
            Required. Name of the model requested to
            serve the batch prediction.
        input_config (google.cloud.automl_v1.types.BatchPredictInputConfig):
            Required. The input configuration for batch
            prediction.
        output_config (google.cloud.automl_v1.types.BatchPredictOutputConfig):
            Required. The configuration specifying where
            output predictions should be written.
        params (Sequence[google.cloud.automl_v1.types.BatchPredictRequest.ParamsEntry]):
            Additional domain-specific parameters for the predictions,
            any string must be up to 25000 characters long.

            AutoML Natural Language Classification

            ``score_threshold`` : (float) A value from 0.0 to 1.0. When
            the model makes predictions for a text snippet, it will only
            produce results that have at least this confidence score.
            The default is 0.5.

            AutoML Vision Classification

            ``score_threshold`` : (float) A value from 0.0 to 1.0. When
            the model makes predictions for an image, it will only
            produce results that have at least this confidence score.
            The default is 0.5.

            AutoML Vision Object Detection

            ``score_threshold`` : (float) When Model detects objects on
            the image, it will only produce bounding boxes which have at
            least this confidence score. Value in 0 to 1 range, default
            is 0.5.

            ``max_bounding_box_count`` : (int64) The maximum number of
            bounding boxes returned per image. The default is 100, the
            number of bounding boxes returned might be limited by the
            server.

            AutoML Video Intelligence Classification

            ``score_threshold`` : (float) A value from 0.0 to 1.0. When
            the model makes predictions for a video, it will only
            produce results that have at least this confidence score.
            The default is 0.5.

            ``segment_classification`` : (boolean) Set to true to
            request segment-level classification. AutoML Video
            Intelligence returns labels and their confidence scores for
            the entire segment of the video that user specified in the
            request configuration. The default is true.

            ``shot_classification`` : (boolean) Set to true to request
            shot-level classification. AutoML Video Intelligence
            determines the boundaries for each camera shot in the entire
            segment of the video that user specified in the request
            configuration. AutoML Video Intelligence then returns labels
            and their confidence scores for each detected shot, along
            with the start and end time of the shot. The default is
            false.

            WARNING: Model evaluation is not done for this
            classification type, the quality of it depends on training
            data, but there are no metrics provided to describe that
            quality.

            ``1s_interval_classification`` : (boolean) Set to true to
            request classification for a video at one-second intervals.
            AutoML Video Intelligence returns labels and their
            confidence scores for each second of the entire segment of
            the video that user specified in the request configuration.
            The default is false.

            WARNING: Model evaluation is not done for this
            classification type, the quality of it depends on training
            data, but there are no metrics provided to describe that
            quality.

            AutoML Video Intelligence Object Tracking

            ``score_threshold`` : (float) When Model detects objects on
            video frames, it will only produce bounding boxes which have
            at least this confidence score. Value in 0 to 1 range,
            default is 0.5.

            ``max_bounding_box_count`` : (int64) The maximum number of
            bounding boxes returned per image. The default is 100, the
            number of bounding boxes returned might be limited by the
            server.

            ``min_bounding_box_size`` : (float) Only bounding boxes with
            shortest edge at least that long as a relative value of
            video frame size are returned. Value in 0 to 1 range.
            Default is 0.
    """

    name = proto.Field(
        proto.STRING,
        number=1,
    )
    input_config = proto.Field(
        proto.MESSAGE,
        number=3,
        message=io.BatchPredictInputConfig,
    )
    output_config = proto.Field(
        proto.MESSAGE,
        number=4,
        message=io.BatchPredictOutputConfig,
    )
    params = proto.MapField(
        proto.STRING,
        proto.STRING,
        number=5,
    )
class JobConfiguration(proto.Message):
    r"""Job configuration information. See the `Jobs
    </bigquery/docs/reference/v2/jobs>`__ API resource for more
    details on individual fields.

    Attributes:
        query (google.cloud.bigquery_logging_v1.types.JobConfiguration.Query):
            Query job information.
        load (google.cloud.bigquery_logging_v1.types.JobConfiguration.Load):
            Load job information.
        extract (google.cloud.bigquery_logging_v1.types.JobConfiguration.Extract):
            Extract job information.
        table_copy (google.cloud.bigquery_logging_v1.types.JobConfiguration.TableCopy):
            TableCopy job information.
        dry_run (bool):
            If true, don't actually run the job. Just
            check that it would run.
        labels (Sequence[google.cloud.bigquery_logging_v1.types.JobConfiguration.LabelsEntry]):
            Labels provided for the job.
    """

    class Query(proto.Message):
        r"""Describes a query job, which executes a SQL-like query.

        Attributes:
            query (str):
                The SQL query to run.
            destination_table (google.cloud.bigquery_logging_v1.types.TableName):
                The table where results are written.
            create_disposition (str):
                Describes when a job is allowed to create a table:
                ``CREATE_IF_NEEDED``, ``CREATE_NEVER``.
            write_disposition (str):
                Describes how writes affect existing tables:
                ``WRITE_TRUNCATE``, ``WRITE_APPEND``, ``WRITE_EMPTY``.
            default_dataset (google.cloud.bigquery_logging_v1.types.DatasetName):
                If a table name is specified without a
                dataset in a query, this dataset will be added
                to table name.
            table_definitions (Sequence[google.cloud.bigquery_logging_v1.types.TableDefinition]):
                Describes data sources outside BigQuery, if
                needed.
            query_priority (str):
                Describes the priority given to the query:
                ``QUERY_INTERACTIVE`` or ``QUERY_BATCH``.
            destination_table_encryption (google.cloud.bigquery_logging_v1.types.EncryptionInfo):
                Result table encryption information. Set when
                non-default encryption is used.
            statement_type (str):
                Type of the statement (e.g. SELECT, INSERT,
                CREATE_TABLE, CREATE_MODEL..)
        """

        query = proto.Field(
            proto.STRING,
            number=1,
        )
        destination_table = proto.Field(
            proto.MESSAGE,
            number=2,
            message='TableName',
        )
        create_disposition = proto.Field(
            proto.STRING,
            number=3,
        )
        write_disposition = proto.Field(
            proto.STRING,
            number=4,
        )
        default_dataset = proto.Field(
            proto.MESSAGE,
            number=5,
            message='DatasetName',
        )
        table_definitions = proto.RepeatedField(
            proto.MESSAGE,
            number=6,
            message='TableDefinition',
        )
        query_priority = proto.Field(
            proto.STRING,
            number=7,
        )
        destination_table_encryption = proto.Field(
            proto.MESSAGE,
            number=8,
            message='EncryptionInfo',
        )
        statement_type = proto.Field(
            proto.STRING,
            number=9,
        )

    class Load(proto.Message):
        r"""Describes a load job, which loads data from an external
        source via the import pipeline.

        Attributes:
            source_uris (Sequence[str]):
                URIs for the data to be imported. Only Google
                Cloud Storage URIs are supported.
            schema_json (str):
                The table schema in JSON format
                representation of a TableSchema.
            destination_table (google.cloud.bigquery_logging_v1.types.TableName):
                The table where the imported data is written.
            create_disposition (str):
                Describes when a job is allowed to create a table:
                ``CREATE_IF_NEEDED``, ``CREATE_NEVER``.
            write_disposition (str):
                Describes how writes affect existing tables:
                ``WRITE_TRUNCATE``, ``WRITE_APPEND``, ``WRITE_EMPTY``.
            destination_table_encryption (google.cloud.bigquery_logging_v1.types.EncryptionInfo):
                Result table encryption information. Set when
                non-default encryption is used.
        """

        source_uris = proto.RepeatedField(
            proto.STRING,
            number=1,
        )
        schema_json = proto.Field(
            proto.STRING,
            number=6,
        )
        destination_table = proto.Field(
            proto.MESSAGE,
            number=3,
            message='TableName',
        )
        create_disposition = proto.Field(
            proto.STRING,
            number=4,
        )
        write_disposition = proto.Field(
            proto.STRING,
            number=5,
        )
        destination_table_encryption = proto.Field(
            proto.MESSAGE,
            number=7,
            message='EncryptionInfo',
        )

    class Extract(proto.Message):
        r"""Describes an extract job, which exports data to an external
        source via the export pipeline.

        Attributes:
            destination_uris (Sequence[str]):
                Google Cloud Storage URIs where extracted
                data should be written.
            source_table (google.cloud.bigquery_logging_v1.types.TableName):
                The source table.
        """

        destination_uris = proto.RepeatedField(
            proto.STRING,
            number=1,
        )
        source_table = proto.Field(
            proto.MESSAGE,
            number=2,
            message='TableName',
        )

    class TableCopy(proto.Message):
        r"""Describes a copy job, which copies an existing table to
        another table.

        Attributes:
            source_tables (Sequence[google.cloud.bigquery_logging_v1.types.TableName]):
                Source tables.
            destination_table (google.cloud.bigquery_logging_v1.types.TableName):
                Destination table.
            create_disposition (str):
                Describes when a job is allowed to create a table:
                ``CREATE_IF_NEEDED``, ``CREATE_NEVER``.
            write_disposition (str):
                Describes how writes affect existing tables:
                ``WRITE_TRUNCATE``, ``WRITE_APPEND``, ``WRITE_EMPTY``.
            destination_table_encryption (google.cloud.bigquery_logging_v1.types.EncryptionInfo):
                Result table encryption information. Set when
                non-default encryption is used.
        """

        source_tables = proto.RepeatedField(
            proto.MESSAGE,
            number=1,
            message='TableName',
        )
        destination_table = proto.Field(
            proto.MESSAGE,
            number=2,
            message='TableName',
        )
        create_disposition = proto.Field(
            proto.STRING,
            number=3,
        )
        write_disposition = proto.Field(
            proto.STRING,
            number=4,
        )
        destination_table_encryption = proto.Field(
            proto.MESSAGE,
            number=5,
            message='EncryptionInfo',
        )

    query = proto.Field(
        proto.MESSAGE,
        number=5,
        oneof='configuration',
        message=Query,
    )
    load = proto.Field(
        proto.MESSAGE,
        number=6,
        oneof='configuration',
        message=Load,
    )
    extract = proto.Field(
        proto.MESSAGE,
        number=7,
        oneof='configuration',
        message=Extract,
    )
    table_copy = proto.Field(
        proto.MESSAGE,
        number=8,
        oneof='configuration',
        message=TableCopy,
    )
    dry_run = proto.Field(
        proto.BOOL,
        number=9,
    )
    labels = proto.MapField(
        proto.STRING,
        proto.STRING,
        number=3,
    )
class WriteLogEntriesRequest(proto.Message):
    r"""The parameters to WriteLogEntries.

    Attributes:
        log_name (str):
            Optional. A default log resource name that is assigned to
            all log entries in ``entries`` that do not specify a value
            for ``log_name``:

            ::

                "projects/[PROJECT_ID]/logs/[LOG_ID]"
                "organizations/[ORGANIZATION_ID]/logs/[LOG_ID]"
                "billingAccounts/[BILLING_ACCOUNT_ID]/logs/[LOG_ID]"
                "folders/[FOLDER_ID]/logs/[LOG_ID]"

            ``[LOG_ID]`` must be URL-encoded. For example:

            ::

                "projects/my-project-id/logs/syslog"
                "organizations/1234567890/logs/cloudresourcemanager.googleapis.com%2Factivity"

            The permission ``logging.logEntries.create`` is needed on
            each project, organization, billing account, or folder that
            is receiving new log entries, whether the resource is
            specified in ``logName`` or in an individual log entry.
        resource (google.api.monitored_resource_pb2.MonitoredResource):
            Optional. A default monitored resource object that is
            assigned to all log entries in ``entries`` that do not
            specify a value for ``resource``. Example:

            ::

                { "type": "gce_instance",
                  "labels": {
                    "zone": "us-central1-a", "instance_id": "00000000000000000000" }}

            See [LogEntry][google.logging.v2.LogEntry].
        labels (Sequence[google.cloud.logging_v2.types.WriteLogEntriesRequest.LabelsEntry]):
            Optional. Default labels that are added to the ``labels``
            field of all log entries in ``entries``. If a log entry
            already has a label with the same key as a label in this
            parameter, then the log entry's label is not changed. See
            [LogEntry][google.logging.v2.LogEntry].
        entries (Sequence[google.cloud.logging_v2.types.LogEntry]):
            Required. The log entries to send to Logging. The order of
            log entries in this list does not matter. Values supplied in
            this method's ``log_name``, ``resource``, and ``labels``
            fields are copied into those log entries in this list that
            do not include values for their corresponding fields. For
            more information, see the
            [LogEntry][google.logging.v2.LogEntry] type.

            If the ``timestamp`` or ``insert_id`` fields are missing in
            log entries, then this method supplies the current time or a
            unique identifier, respectively. The supplied values are
            chosen so that, among the log entries that did not supply
            their own values, the entries earlier in the list will sort
            before the entries later in the list. See the
            ``entries.list`` method.

            Log entries with timestamps that are more than the `logs
            retention period
            <https://cloud.google.com/logging/quota-policy>`__ in the
            past or more than 24 hours in the future will not be
            available when calling ``entries.list``. However, those log
            entries can still be `exported with LogSinks
            <https://cloud.google.com/logging/docs/api/tasks/exporting-logs>`__.

            To improve throughput and to avoid exceeding the `quota
            limit <https://cloud.google.com/logging/quota-policy>`__ for
            calls to ``entries.write``, you should try to include
            several log entries in this list, rather than calling this
            method for each individual log entry.
        partial_success (bool):
            Optional. Whether valid entries should be written even if
            some other entries fail due to INVALID_ARGUMENT or
            PERMISSION_DENIED errors. If any entry is not written, then
            the response status is the error associated with one of the
            failed entries and the response includes error details keyed
            by the entries' zero-based index in the ``entries.write``
            method.
        dry_run (bool):
            Optional. If true, the request should expect
            normal response, but the entries won't be
            persisted nor exported. Useful for checking
            whether the logging API endpoints are working
            properly before sending valuable data.
    """

    log_name = proto.Field(proto.STRING, number=1,)
    resource = proto.Field(
        proto.MESSAGE, number=2, message=monitored_resource_pb2.MonitoredResource,
    )
    labels = proto.MapField(proto.STRING, proto.STRING, number=3,)
    entries = proto.RepeatedField(proto.MESSAGE, number=4, message=log_entry.LogEntry,)
    partial_success = proto.Field(proto.BOOL, number=5,)
    dry_run = proto.Field(proto.BOOL, number=6,)
class Instance(proto.Message):
    r"""The definition of a notebook instance.

    Attributes:
        name (str):
            Output only. The name of this notebook instance. Format:
            ``projects/{project_id}/locations/{location}/instances/{instance_id}``
        vm_image (google.cloud.notebooks_v1beta1.types.VmImage):
            Use a Compute Engine VM image to start the
            notebook instance.
        container_image (google.cloud.notebooks_v1beta1.types.ContainerImage):
            Use a container image to start the notebook
            instance.
        post_startup_script (str):
            Path to a Bash script that automatically runs after a
            notebook instance fully boots up. The path must be a URL or
            Cloud Storage path (``gs://path-to-file/file-name``).
        proxy_uri (str):
            Output only. The proxy endpoint that is used
            to access the Jupyter notebook.
        instance_owners (Sequence[str]):
            Input only. The owner of this instance after creation.
            Format: ``[email protected]``

            Currently supports one owner only. If not specified, all of
            the service account users of your VM instance's service
            account can use the instance.
        service_account (str):
            The service account on this instance, giving access to other
            Google Cloud services. You can use any service account
            within the same project, but you must have the service
            account user permission to use the instance.

            If not specified, the `Compute Engine default service
            account <https://cloud.google.com/compute/docs/access/service-accounts#default_service_account>`__
            is used.
        machine_type (str):
            Required. The `Compute Engine machine type
            <https://cloud.google.com/compute/docs/machine-types>`__ of
            this instance.
        accelerator_config (google.cloud.notebooks_v1beta1.types.Instance.AcceleratorConfig):
            The hardware accelerator used on this instance. If you use
            accelerators, make sure that your configuration has `enough
            vCPUs and memory to support the ``machine_type`` you have
            selected <https://cloud.google.com/compute/docs/gpus/#gpus-list>`__.
        state (google.cloud.notebooks_v1beta1.types.Instance.State):
            Output only. The state of this instance.
        install_gpu_driver (bool):
            Whether the end user authorizes Google Cloud
            to install GPU driver on this instance. If this
            field is empty or set to false, the GPU driver
            won't be installed. Only applicable to instances
            with GPUs.
        custom_gpu_driver_path (str):
            Specify a custom Cloud Storage path where the
            GPU driver is stored. If not specified, we'll
            automatically choose from official GPU drivers.
        boot_disk_type (google.cloud.notebooks_v1beta1.types.Instance.DiskType):
            Input only. The type of the boot disk attached to this
            instance, defaults to standard persistent disk
            (``PD_STANDARD``).
        boot_disk_size_gb (int):
            Input only. The size of the boot disk in GB
            attached to this instance, up to a maximum of
            64000 GB (64 TB). The minimum recommended value
            is 100 GB. If not specified, this defaults to
            100.
        data_disk_type (google.cloud.notebooks_v1beta1.types.Instance.DiskType):
            Input only. The type of the data disk attached to this
            instance, defaults to standard persistent disk
            (``PD_STANDARD``).
        data_disk_size_gb (int):
            Input only. The size of the data disk in GB
            attached to this instance, up to a maximum of
            64000 GB (64 TB). You can choose the size of the
            data disk based on how big your notebooks and
            data are. If not specified, this defaults to
            100.
        no_remove_data_disk (bool):
            Input only. If true, the data disk will not
            be auto deleted when deleting the instance.
        disk_encryption (google.cloud.notebooks_v1beta1.types.Instance.DiskEncryption):
            Input only. Disk encryption method used on
            the boot and data disks, defaults to GMEK.
        kms_key (str):
            Input only. The KMS key used to encrypt the disks, only
            applicable if disk_encryption is CMEK. Format:
            ``projects/{project_id}/locations/{location}/keyRings/{key_ring_id}/cryptoKeys/{key_id}``

            Learn more about `using your own encryption keys
            <https://cloud.google.com/kms/docs/quickstart>`__.
        no_public_ip (bool):
            If true, no public IP will be assigned to
            this instance.
        no_proxy_access (bool):
            If true, the notebook instance will not
            register with the proxy.
        network (str):
            The name of the VPC that this instance is in. Format:
            ``projects/{project_id}/global/networks/{network_id}``
        subnet (str):
            The name of the subnet that this instance is in. Format:
            ``projects/{project_id}/regions/{region}/subnetworks/{subnetwork_id}``
        labels (Sequence[google.cloud.notebooks_v1beta1.types.Instance.LabelsEntry]):
            Labels to apply to this instance. These can
            be later modified by the setLabels method.
        metadata (Sequence[google.cloud.notebooks_v1beta1.types.Instance.MetadataEntry]):
            Custom metadata to apply to this instance.
        create_time (google.protobuf.timestamp_pb2.Timestamp):
            Output only. Instance creation time.
        update_time (google.protobuf.timestamp_pb2.Timestamp):
            Output only. Instance update time.
    """

    class AcceleratorType(proto.Enum):
        r"""Definition of the types of hardware accelerators that can be
        used on this instance.
        """
        ACCELERATOR_TYPE_UNSPECIFIED = 0
        NVIDIA_TESLA_K80 = 1
        NVIDIA_TESLA_P100 = 2
        NVIDIA_TESLA_V100 = 3
        NVIDIA_TESLA_P4 = 4
        NVIDIA_TESLA_T4 = 5
        NVIDIA_TESLA_T4_VWS = 8
        NVIDIA_TESLA_P100_VWS = 9
        NVIDIA_TESLA_P4_VWS = 10
        TPU_V2 = 6
        TPU_V3 = 7

    class State(proto.Enum):
        r"""The definition of the states of this instance."""
        STATE_UNSPECIFIED = 0
        STARTING = 1
        PROVISIONING = 2
        ACTIVE = 3
        STOPPING = 4
        STOPPED = 5
        DELETED = 6
        UPGRADING = 7
        INITIALIZING = 8
        REGISTERING = 9

    class DiskType(proto.Enum):
        r"""Possible disk types for notebook instances."""
        DISK_TYPE_UNSPECIFIED = 0
        PD_STANDARD = 1
        PD_SSD = 2
        PD_BALANCED = 3

    class DiskEncryption(proto.Enum):
        r"""Definition of the disk encryption options."""
        DISK_ENCRYPTION_UNSPECIFIED = 0
        GMEK = 1
        CMEK = 2

    class AcceleratorConfig(proto.Message):
        r"""Definition of a hardware accelerator. Note that not all
        combinations of ``type`` and ``core_count`` are valid. Check
        `GPUs on Compute Engine </compute/docs/gpus/#gpus-list>`__ to
        find a valid combination. TPUs are not supported.

        Attributes:
            type_ (google.cloud.notebooks_v1beta1.types.Instance.AcceleratorType):
                Type of this accelerator.
            core_count (int):
                Count of cores of this accelerator.
        """

        type_ = proto.Field(
            proto.ENUM,
            number=1,
            enum="Instance.AcceleratorType",
        )
        core_count = proto.Field(
            proto.INT64,
            number=2,
        )

    name = proto.Field(
        proto.STRING,
        number=1,
    )
    vm_image = proto.Field(
        proto.MESSAGE,
        number=2,
        oneof="environment",
        message=environment.VmImage,
    )
    container_image = proto.Field(
        proto.MESSAGE,
        number=3,
        oneof="environment",
        message=environment.ContainerImage,
    )
    post_startup_script = proto.Field(
        proto.STRING,
        number=4,
    )
    proxy_uri = proto.Field(
        proto.STRING,
        number=5,
    )
    instance_owners = proto.RepeatedField(
        proto.STRING,
        number=6,
    )
    service_account = proto.Field(
        proto.STRING,
        number=7,
    )
    machine_type = proto.Field(
        proto.STRING,
        number=8,
    )
    accelerator_config = proto.Field(
        proto.MESSAGE,
        number=9,
        message=AcceleratorConfig,
    )
    state = proto.Field(
        proto.ENUM,
        number=10,
        enum=State,
    )
    install_gpu_driver = proto.Field(
        proto.BOOL,
        number=11,
    )
    custom_gpu_driver_path = proto.Field(
        proto.STRING,
        number=12,
    )
    boot_disk_type = proto.Field(
        proto.ENUM,
        number=13,
        enum=DiskType,
    )
    boot_disk_size_gb = proto.Field(
        proto.INT64,
        number=14,
    )
    data_disk_type = proto.Field(
        proto.ENUM,
        number=25,
        enum=DiskType,
    )
    data_disk_size_gb = proto.Field(
        proto.INT64,
        number=26,
    )
    no_remove_data_disk = proto.Field(
        proto.BOOL,
        number=27,
    )
    disk_encryption = proto.Field(
        proto.ENUM,
        number=15,
        enum=DiskEncryption,
    )
    kms_key = proto.Field(
        proto.STRING,
        number=16,
    )
    no_public_ip = proto.Field(
        proto.BOOL,
        number=17,
    )
    no_proxy_access = proto.Field(
        proto.BOOL,
        number=18,
    )
    network = proto.Field(
        proto.STRING,
        number=19,
    )
    subnet = proto.Field(
        proto.STRING,
        number=20,
    )
    labels = proto.MapField(
        proto.STRING,
        proto.STRING,
        number=21,
    )
    metadata = proto.MapField(
        proto.STRING,
        proto.STRING,
        number=22,
    )
    create_time = proto.Field(
        proto.MESSAGE,
        number=23,
        message=timestamp_pb2.Timestamp,
    )
    update_time = proto.Field(
        proto.MESSAGE,
        number=24,
        message=timestamp_pb2.Timestamp,
    )
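The boot and data disks share the same documented size rules: default 100 GB when unset, maximum 64000 GB (64 TB). A tiny illustrative helper capturing those two rules (the ``effective_disk_size_gb`` function is hypothetical, not part of the Notebooks client):

```python
from typing import Optional


def effective_disk_size_gb(requested: Optional[int]) -> int:
    """Apply the documented disk-size rules: default 100 GB, max 64000 GB."""
    if requested is None:
        return 100  # documented default for boot_disk_size_gb / data_disk_size_gb
    if requested > 64000:
        raise ValueError("disk size may not exceed 64000 GB (64 TB)")
    return requested


print(effective_disk_size_gb(None))  # -> 100
print(effective_disk_size_gb(500))  # -> 500
```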
class Document(proto.Message):
    r"""A knowledge document to be used by a
    [KnowledgeBase][google.cloud.dialogflow.v2beta1.KnowledgeBase].

    For more information, see the `knowledge base guide
    <https://cloud.google.com/dialogflow/docs/how/knowledge-bases>`__.

    Note: The ``projects.agent.knowledgeBases.documents`` resource is
    deprecated; only use ``projects.knowledgeBases.documents``.

    Attributes:
        name (str):
            Optional. The document resource name. The name must be empty
            when creating a document. Format:
            ``projects/<Project ID>/locations/<Location ID>/knowledgeBases/<Knowledge Base ID>/documents/<Document ID>``.
        display_name (str):
            Required. The display name of the document.
            The name must be 1024 bytes or less; otherwise,
            the creation request fails.
        mime_type (str):
            Required. The MIME type of this document.
        knowledge_types (Sequence[google.cloud.dialogflow_v2beta1.types.Document.KnowledgeType]):
            Required. The knowledge type of document
            content.
        content_uri (str):
            The URI where the file content is located.

            For documents stored in Google Cloud Storage, these URIs
            must have the form ``gs://<bucket-name>/<object-name>``.

            NOTE: External URLs must correspond to public webpages,
            i.e., they must be indexed by Google Search. In particular,
            URLs for showing documents in Google Cloud Storage (i.e. the
            URL in your browser) are not supported. Instead use the
            ``gs://`` format URI described above.
        content (str):
            The raw content of the document. This field is only
            permitted for EXTRACTIVE_QA and FAQ knowledge types. Note:
            This field is in the process of being deprecated, please use
            raw_content instead.
        raw_content (bytes):
            The raw content of the document. This field is only
            permitted for EXTRACTIVE_QA and FAQ knowledge types.
        enable_auto_reload (bool):
            Optional. If true, we try to automatically reload the
            document every day (at a time picked by the system). If
            false or unspecified, we don't try to automatically reload
            the document.

            Currently you can only enable automatic reload for documents
            sourced from a public url, see ``source`` field for the
            source types.

            Reload status can be tracked in ``latest_reload_status``. If
            a reload fails, we will keep the document unchanged.

            If a reload fails with internal errors, the system will try
            to reload the document on the next day. If a reload fails
            with non-retriable errors (e.g. PERMISSION_DENIED), the
            system will not try to reload the document anymore. You need
            to manually reload the document successfully by calling
            ``ReloadDocument`` and clear the errors.
        latest_reload_status (google.cloud.dialogflow_v2beta1.types.Document.ReloadStatus):
            Output only. The time and status of the
            latest reload. This reload may have been
            triggered automatically or manually and may not
            have succeeded.
        metadata (Sequence[google.cloud.dialogflow_v2beta1.types.Document.MetadataEntry]):
            Optional. Metadata for the document. The metadata supports
            arbitrary key-value pairs. Suggested use cases include
            storing a document's title, an external URL distinct from
            the document's content_uri, etc. The max size of a ``key``
            or a ``value`` of the metadata is 1024 bytes.
    """

    class KnowledgeType(proto.Enum):
        r"""The knowledge type of document content."""
        KNOWLEDGE_TYPE_UNSPECIFIED = 0
        FAQ = 1
        EXTRACTIVE_QA = 2
        ARTICLE_SUGGESTION = 3
        SMART_REPLY = 4

    class ReloadStatus(proto.Message):
        r"""The status of a reload attempt.

        Attributes:
            time (google.protobuf.timestamp_pb2.Timestamp):
                Output only. The time of a reload attempt.
                This reload may have been triggered
                automatically or manually and may not have
                succeeded.
            status (google.rpc.status_pb2.Status):
                Output only. The status of a reload attempt
                or the initial load.
        """

        time = proto.Field(
            proto.MESSAGE,
            number=1,
            message=timestamp_pb2.Timestamp,
        )
        status = proto.Field(
            proto.MESSAGE,
            number=2,
            message=status_pb2.Status,
        )

    name = proto.Field(
        proto.STRING,
        number=1,
    )
    display_name = proto.Field(
        proto.STRING,
        number=2,
    )
    mime_type = proto.Field(
        proto.STRING,
        number=3,
    )
    knowledge_types = proto.RepeatedField(
        proto.ENUM,
        number=4,
        enum=KnowledgeType,
    )
    content_uri = proto.Field(
        proto.STRING,
        number=5,
        oneof="source",
    )
    content = proto.Field(
        proto.STRING,
        number=6,
        oneof="source",
    )
    raw_content = proto.Field(
        proto.BYTES,
        number=9,
        oneof="source",
    )
    enable_auto_reload = proto.Field(
        proto.BOOL,
        number=11,
    )
    latest_reload_status = proto.Field(
        proto.MESSAGE,
        number=12,
        message=ReloadStatus,
    )
    metadata = proto.MapField(
        proto.STRING,
        proto.STRING,
        number=7,
    )
class SparkRJob(proto.Message): r"""A Dataproc job for running `Apache SparkR <https://spark.apache.org/docs/latest/sparkr.html>`__ applications on YARN. Attributes: main_r_file_uri (str): Required. The HCFS URI of the main R file to use as the driver. Must be a .R file. args (Sequence[str]): Optional. The arguments to pass to the driver. Do not include arguments, such as ``--conf``, that can be set as job properties, since a collision may occur that causes an incorrect job submission. file_uris (Sequence[str]): Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks. archive_uris (Sequence[str]): Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip. properties (Sequence[google.cloud.dataproc_v1beta2.types.SparkRJob.PropertiesEntry]): Optional. A mapping of property names to values, used to configure SparkR. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code. logging_config (google.cloud.dataproc_v1beta2.types.LoggingConfig): Optional. The runtime log config for job execution. """ main_r_file_uri = proto.Field( proto.STRING, number=1, ) args = proto.RepeatedField( proto.STRING, number=2, ) file_uris = proto.RepeatedField( proto.STRING, number=3, ) archive_uris = proto.RepeatedField( proto.STRING, number=4, ) properties = proto.MapField( proto.STRING, proto.STRING, number=5, ) logging_config = proto.Field( proto.MESSAGE, number=6, message="LoggingConfig", )
class LogMetric(proto.Message): r"""Describes a logs-based metric. The value of the metric is the number of log entries that match a logs filter in a given time interval. Logs-based metrics can also be used to extract values from logs and create a distribution of the values. The distribution records the statistics of the extracted values along with an optional histogram of the values as specified by the bucket options. Attributes: name (str): Required. The client-assigned metric identifier. Examples: ``"error_count"``, ``"nginx/requests"``. Metric identifiers are limited to 100 characters and can include only the following characters: ``A-Z``, ``a-z``, ``0-9``, and the special characters ``_-.,+!*',()%/``. The forward-slash character (``/``) denotes a hierarchy of name pieces, and it cannot be the first character of the name. This field is the ``[METRIC_ID]`` part of a metric resource name in the format "projects/[PROJECT_ID]/metrics/[METRIC_ID]". Example: If the resource name of a metric is ``"projects/my-project/metrics/nginx%2Frequests"``, this field's value is ``"nginx/requests"``. description (str): Optional. A description of this metric, which is used in documentation. The maximum length of the description is 8000 characters. filter (str): Required. An `advanced logs filter <https://cloud.google.com/logging/docs/view/advanced_filters>`__ which is used to match log entries. Example: :: "resource.type=gae_app AND severity>=ERROR" The maximum length of the filter is 20000 characters. disabled (bool): Optional. If set to True, then this metric is disabled and it does not generate any points. metric_descriptor (google.api.metric_pb2.MetricDescriptor): Optional. The metric descriptor associated with the logs-based metric. If unspecified, it uses a default metric descriptor with a DELTA metric kind, INT64 value type, with no labels and a unit of "1". Such a metric counts the number of log entries matching the ``filter`` expression. 
The ``name``, ``type``, and ``description`` fields in the ``metric_descriptor`` are output only, and are constructed using the ``name`` and ``description`` fields in the LogMetric. To create a logs-based metric that records a distribution of log values, a DELTA metric kind with a DISTRIBUTION value type must be used along with a ``value_extractor`` expression in the LogMetric. Each label in the metric descriptor must have a matching label name as the key and an extractor expression as the value in the ``label_extractors`` map. The ``metric_kind`` and ``value_type`` fields in the ``metric_descriptor`` cannot be updated once initially configured. New labels can be added in the ``metric_descriptor``, but existing labels cannot be modified except for their description. value_extractor (str): Optional. A ``value_extractor`` is required when using a distribution logs-based metric to extract the values to record from a log entry. Two functions are supported for value extraction: ``EXTRACT(field)`` or ``REGEXP_EXTRACT(field, regex)``. The arguments are: 1. field: The name of the log entry field from which the value is to be extracted. 2. regex: A regular expression using the Google RE2 syntax (https://github.com/google/re2/wiki/Syntax) with a single capture group to extract data from the specified log entry field. The value of the field is converted to a string before applying the regex. It is an error to specify a regex that does not include exactly one capture group. The result of the extraction must be convertible to a double type, as the distribution always records double values. If either the extraction or the conversion to double fails, then those values are not recorded in the distribution. Example: ``REGEXP_EXTRACT(jsonPayload.request, ".*quantity=(\d+).*")`` label_extractors (Sequence[googlecloudsdk.third_party.gapic_clients.logging_v2.types.LogMetric.LabelExtractorsEntry]): Optional. 
A map from a label key string to an extractor expression which is used to extract data from a log entry field and assign as the label value. Each label key specified in the LabelDescriptor must have an associated extractor expression in this map. The syntax of the extractor expression is the same as for the ``value_extractor`` field. The extracted value is converted to the type defined in the label descriptor. If either the extraction or the type conversion fails, the label will have a default value. The default value for a string label is an empty string, for an integer label it is 0, and for a boolean label it is ``false``. Note that there are upper bounds on the maximum number of labels and the number of active time series that are allowed in a project. bucket_options (google.api.distribution_pb2.BucketOptions): Optional. The ``bucket_options`` are required when the logs-based metric is using a DISTRIBUTION value type and it describes the bucket boundaries used to create a histogram of the extracted values. create_time (google.protobuf.timestamp_pb2.Timestamp): Output only. The creation timestamp of the metric. This field may not be present for older metrics. update_time (google.protobuf.timestamp_pb2.Timestamp): Output only. The last update timestamp of the metric. This field may not be present for older metrics. version (googlecloudsdk.third_party.gapic_clients.logging_v2.types.LogMetric.ApiVersion): Deprecated. The API version that created or updated this metric. The v2 format is used by default and cannot be changed. 
""" class ApiVersion(proto.Enum): r"""Logging API version.""" V2 = 0 V1 = 1 name = proto.Field( proto.STRING, number=1, ) description = proto.Field( proto.STRING, number=2, ) filter = proto.Field( proto.STRING, number=3, ) disabled = proto.Field( proto.BOOL, number=12, ) metric_descriptor = proto.Field( proto.MESSAGE, number=5, message=metric_pb2.MetricDescriptor, ) value_extractor = proto.Field( proto.STRING, number=6, ) label_extractors = proto.MapField( proto.STRING, proto.STRING, number=7, ) bucket_options = proto.Field( proto.MESSAGE, number=8, message=distribution_pb2.Distribution.BucketOptions, ) create_time = proto.Field( proto.MESSAGE, number=9, message=timestamp_pb2.Timestamp, ) update_time = proto.Field( proto.MESSAGE, number=10, message=timestamp_pb2.Timestamp, ) version = proto.Field( proto.ENUM, number=4, enum=ApiVersion, )
class PrestoJob(proto.Message): r"""A Dataproc job for running `Presto <https://prestosql.io/>`__ queries. **IMPORTANT**: The `Dataproc Presto Optional Component <https://cloud.google.com/dataproc/docs/concepts/components/presto>`__ must be enabled when the cluster is created to submit a Presto job to the cluster. Attributes: query_file_uri (str): The HCFS URI of the script that contains SQL queries. query_list (google.cloud.dataproc_v1beta2.types.QueryList): A list of queries. continue_on_failure (bool): Optional. Whether to continue executing queries if a query fails. The default value is ``false``. Setting to ``true`` can be useful when executing independent parallel queries. output_format (str): Optional. The format in which query output will be displayed. See the Presto documentation for supported output formats. client_tags (Sequence[str]): Optional. Presto client tags to attach to this query. properties (Sequence[google.cloud.dataproc_v1beta2.types.PrestoJob.PropertiesEntry]): Optional. A mapping of property names to values. Used to set Presto `session properties <https://prestodb.io/docs/current/sql/set-session.html>`__. Equivalent to using the --session flag in the Presto CLI. logging_config (google.cloud.dataproc_v1beta2.types.LoggingConfig): Optional. The runtime log config for job execution. """ query_file_uri = proto.Field( proto.STRING, number=1, oneof="queries", ) query_list = proto.Field( proto.MESSAGE, number=2, oneof="queries", message="QueryList", ) continue_on_failure = proto.Field( proto.BOOL, number=3, ) output_format = proto.Field( proto.STRING, number=4, ) client_tags = proto.RepeatedField( proto.STRING, number=5, ) properties = proto.MapField( proto.STRING, proto.STRING, number=6, ) logging_config = proto.Field( proto.MESSAGE, number=7, message="LoggingConfig", )
class Event(proto.Message): r"""An edge describing the relationship between an Artifact and an Execution in a lineage graph. Attributes: artifact (str): Required. The relative resource name of the Artifact in the Event. execution (str): Output only. The relative resource name of the Execution in the Event. event_time (google.protobuf.timestamp_pb2.Timestamp): Output only. Time the Event occurred. type_ (google.cloud.aiplatform_v1.types.Event.Type): Required. The type of the Event. labels (Sequence[google.cloud.aiplatform_v1.types.Event.LabelsEntry]): The labels with user-defined metadata to annotate Events. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. No more than 64 user labels can be associated with one Event (System labels are excluded). See https://goo.gl/xmQnxf for more information and examples of labels. System reserved label keys are prefixed with "aiplatform.googleapis.com/" and are immutable. """ class Type(proto.Enum): r"""Describes whether an Event's Artifact is the Execution's input or output. """ TYPE_UNSPECIFIED = 0 INPUT = 1 OUTPUT = 2 artifact = proto.Field( proto.STRING, number=1, ) execution = proto.Field( proto.STRING, number=2, ) event_time = proto.Field( proto.MESSAGE, number=3, message=timestamp_pb2.Timestamp, ) type_ = proto.Field( proto.ENUM, number=4, enum=Type, ) labels = proto.MapField( proto.STRING, proto.STRING, number=5, )
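The Event label rules above distinguish user labels (capped at 64, with 64-character keys and values) from immutable system labels prefixed with ``"aiplatform.googleapis.com/"``. A small illustrative check, not part of the Vertex AI client library:

```python
# Sketch of the Event label constraints quoted in the docstring above.
# The helper name and the dict representation are illustrative only.
SYSTEM_PREFIX = "aiplatform.googleapis.com/"


def check_event_labels(labels):
    """Validate user labels and return them, excluding system-reserved keys.

    System labels (prefixed with SYSTEM_PREFIX) are set by the service,
    are immutable, and do not count toward the 64-user-label cap.
    """
    user = {k: v for k, v in labels.items() if not k.startswith(SYSTEM_PREFIX)}
    if len(user) > 64:
        raise ValueError("no more than 64 user labels per Event")
    for key, value in user.items():
        if len(key) > 64 or len(value) > 64:
            raise ValueError("label keys and values are limited to 64 characters")
    return user
```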
class HadoopJob(proto.Message): r"""A Dataproc job for running `Apache Hadoop MapReduce <https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html>`__ jobs on `Apache Hadoop YARN <https://hadoop.apache.org/docs/r2.7.1/hadoop-yarn/hadoop-yarn-site/YARN.html>`__. Attributes: main_jar_file_uri (str): The HCFS URI of the jar file containing the main class. Examples: 'gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar' 'hdfs:/tmp/test-samples/custom-wordcount.jar' 'file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar' main_class (str): The name of the driver's main class. The jar file containing the class must be in the default CLASSPATH or specified in ``jar_file_uris``. args (Sequence[str]): Optional. The arguments to pass to the driver. Do not include arguments, such as ``-libjars`` or ``-Dfoo=bar``, that can be set as job properties, since a collision may occur that causes an incorrect job submission. jar_file_uris (Sequence[str]): Optional. Jar file URIs to add to the CLASSPATHs of the Hadoop driver and tasks. file_uris (Sequence[str]): Optional. HCFS (Hadoop Compatible Filesystem) URIs of files to be copied to the working directory of Hadoop drivers and distributed tasks. Useful for naively parallel tasks. archive_uris (Sequence[str]): Optional. HCFS URIs of archives to be extracted in the working directory of Hadoop drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, or .zip. properties (Sequence[google.cloud.dataproc_v1beta2.types.HadoopJob.PropertiesEntry]): Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code. logging_config (google.cloud.dataproc_v1beta2.types.LoggingConfig): Optional. The runtime log config for job execution. 
""" main_jar_file_uri = proto.Field( proto.STRING, number=1, oneof="driver", ) main_class = proto.Field( proto.STRING, number=2, oneof="driver", ) args = proto.RepeatedField( proto.STRING, number=3, ) jar_file_uris = proto.RepeatedField( proto.STRING, number=4, ) file_uris = proto.RepeatedField( proto.STRING, number=5, ) archive_uris = proto.RepeatedField( proto.STRING, number=6, ) properties = proto.MapField( proto.STRING, proto.STRING, number=7, ) logging_config = proto.Field( proto.MESSAGE, number=8, message="LoggingConfig", )
class Trial(proto.Message): r"""A message representing a Trial. A Trial contains a unique set of Parameters that has been or will be evaluated, along with the objective metrics obtained by running the Trial. Attributes: name (str): Output only. Resource name of the Trial assigned by the service. id (str): Output only. The identifier of the Trial assigned by the service. state (google.cloud.aiplatform_v1beta1.types.Trial.State): Output only. The detailed state of the Trial. parameters (Sequence[google.cloud.aiplatform_v1beta1.types.Trial.Parameter]): Output only. The parameters of the Trial. final_measurement (google.cloud.aiplatform_v1beta1.types.Measurement): Output only. The final measurement containing the objective value. measurements (Sequence[google.cloud.aiplatform_v1beta1.types.Measurement]): Output only. A list of measurements that are strictly lexicographically ordered by their induced tuples (steps, elapsed_duration). These are used for early stopping computations. start_time (google.protobuf.timestamp_pb2.Timestamp): Output only. Time when the Trial was started. end_time (google.protobuf.timestamp_pb2.Timestamp): Output only. Time when the Trial's status changed to ``SUCCEEDED`` or ``INFEASIBLE``. client_id (str): Output only. The identifier of the client that originally requested this Trial. Each client is identified by a unique client_id. When a client asks for a suggestion, Vertex AI Vizier will assign it a Trial. The client should evaluate the Trial, complete it, and report back to Vertex AI Vizier. If a suggestion is requested again by the same client_id before the Trial is completed, the same Trial will be returned. Multiple clients with different client_ids can ask for suggestions simultaneously; each of them will get its own Trial. infeasible_reason (str): Output only. A human-readable string describing why the Trial is infeasible. This is set only if Trial state is ``INFEASIBLE``. custom_job (str): Output only. The CustomJob name linked to the Trial. 
It's set for a HyperparameterTuningJob's Trial. web_access_uris (Sequence[google.cloud.aiplatform_v1beta1.types.Trial.WebAccessUrisEntry]): Output only. URIs for accessing `interactive shells <https://cloud.google.com/vertex-ai/docs/training/monitor-debug-interactive-shell>`__ (one URI for each training node). Only available if this trial is part of a [HyperparameterTuningJob][google.cloud.aiplatform.v1beta1.HyperparameterTuningJob] and the job's [trial_job_spec.enable_web_access][google.cloud.aiplatform.v1beta1.CustomJobSpec.enable_web_access] field is ``true``. The keys are names of each node used for the trial; for example, ``workerpool0-0`` for the primary node, ``workerpool1-0`` for the first node in the second worker pool, and ``workerpool1-1`` for the second node in the second worker pool. The values are the URIs for each node's interactive shell. """ class State(proto.Enum): r"""Describes a Trial state.""" STATE_UNSPECIFIED = 0 REQUESTED = 1 ACTIVE = 2 STOPPING = 3 SUCCEEDED = 4 INFEASIBLE = 5 class Parameter(proto.Message): r"""A message representing a parameter to be tuned. Attributes: parameter_id (str): Output only. The ID of the parameter. The parameter should be defined in [StudySpec's Parameters][google.cloud.aiplatform.v1beta1.StudySpec.parameters]. value (google.protobuf.struct_pb2.Value): Output only. The value of the parameter. ``number_value`` will be set if a parameter defined in StudySpec is in type 'INTEGER', 'DOUBLE' or 'DISCRETE'. ``string_value`` will be set if a parameter defined in StudySpec is in type 'CATEGORICAL'. 
""" parameter_id = proto.Field(proto.STRING, number=1,) value = proto.Field(proto.MESSAGE, number=2, message=struct_pb2.Value,) name = proto.Field(proto.STRING, number=1,) id = proto.Field(proto.STRING, number=2,) state = proto.Field(proto.ENUM, number=3, enum=State,) parameters = proto.RepeatedField(proto.MESSAGE, number=4, message=Parameter,) final_measurement = proto.Field(proto.MESSAGE, number=5, message="Measurement",) measurements = proto.RepeatedField(proto.MESSAGE, number=6, message="Measurement",) start_time = proto.Field(proto.MESSAGE, number=7, message=timestamp_pb2.Timestamp,) end_time = proto.Field(proto.MESSAGE, number=8, message=timestamp_pb2.Timestamp,) client_id = proto.Field(proto.STRING, number=9,) infeasible_reason = proto.Field(proto.STRING, number=10,) custom_job = proto.Field(proto.STRING, number=11,) web_access_uris = proto.MapField(proto.STRING, proto.STRING, number=12,)
class Job(proto.Message): r"""A Dataproc job resource. Attributes: reference (google.cloud.dataproc_v1beta2.types.JobReference): Optional. The fully qualified reference to the job, which can be used to obtain the equivalent REST path of the job resource. If this property is not specified when a job is created, the server generates a job_id. placement (google.cloud.dataproc_v1beta2.types.JobPlacement): Required. Job information, including how, when, and where to run the job. hadoop_job (google.cloud.dataproc_v1beta2.types.HadoopJob): Optional. Job is a Hadoop job. spark_job (google.cloud.dataproc_v1beta2.types.SparkJob): Optional. Job is a Spark job. pyspark_job (google.cloud.dataproc_v1beta2.types.PySparkJob): Optional. Job is a PySpark job. hive_job (google.cloud.dataproc_v1beta2.types.HiveJob): Optional. Job is a Hive job. pig_job (google.cloud.dataproc_v1beta2.types.PigJob): Optional. Job is a Pig job. spark_r_job (google.cloud.dataproc_v1beta2.types.SparkRJob): Optional. Job is a SparkR job. spark_sql_job (google.cloud.dataproc_v1beta2.types.SparkSqlJob): Optional. Job is a SparkSql job. presto_job (google.cloud.dataproc_v1beta2.types.PrestoJob): Optional. Job is a Presto job. status (google.cloud.dataproc_v1beta2.types.JobStatus): Output only. The job status. Additional application-specific status information may be contained in the type_job and yarn_applications fields. status_history (Sequence[google.cloud.dataproc_v1beta2.types.JobStatus]): Output only. The previous job status. yarn_applications (Sequence[google.cloud.dataproc_v1beta2.types.YarnApplication]): Output only. The collection of YARN applications spun up by this job. **Beta** Feature: This report is available for testing purposes only. It may be changed before final release. submitted_by (str): Output only. The email address of the user submitting the job. For jobs submitted on the cluster, the address is <code>username@hostname</code>. driver_output_resource_uri (str): Output only. 
A URI pointing to the location of the stdout of the job's driver program. driver_control_files_uri (str): Output only. If present, the location of miscellaneous control files which may be used as part of job setup and handling. If not present, control files may be placed in the same location as ``driver_output_uri``. labels (Sequence[google.cloud.dataproc_v1beta2.types.Job.LabelsEntry]): Optional. The labels to associate with this job. Label **keys** must contain 1 to 63 characters, and must conform to `RFC 1035 <https://www.ietf.org/rfc/rfc1035.txt>`__. Label **values** may be empty, but, if present, must contain 1 to 63 characters, and must conform to `RFC 1035 <https://www.ietf.org/rfc/rfc1035.txt>`__. No more than 32 labels can be associated with a job. scheduling (google.cloud.dataproc_v1beta2.types.JobScheduling): Optional. Job scheduling configuration. job_uuid (str): Output only. A UUID that uniquely identifies a job within the project over time. This is in contrast to a user-settable reference.job_id that may be reused over time. done (bool): Output only. Indicates whether the job is completed. If the value is ``false``, the job is still in progress. If ``true``, the job is completed, and ``status.state`` field will indicate if it was successful, failed, or cancelled. 
""" reference = proto.Field( proto.MESSAGE, number=1, message="JobReference", ) placement = proto.Field( proto.MESSAGE, number=2, message="JobPlacement", ) hadoop_job = proto.Field( proto.MESSAGE, number=3, oneof="type_job", message="HadoopJob", ) spark_job = proto.Field( proto.MESSAGE, number=4, oneof="type_job", message="SparkJob", ) pyspark_job = proto.Field( proto.MESSAGE, number=5, oneof="type_job", message="PySparkJob", ) hive_job = proto.Field( proto.MESSAGE, number=6, oneof="type_job", message="HiveJob", ) pig_job = proto.Field( proto.MESSAGE, number=7, oneof="type_job", message="PigJob", ) spark_r_job = proto.Field( proto.MESSAGE, number=21, oneof="type_job", message="SparkRJob", ) spark_sql_job = proto.Field( proto.MESSAGE, number=12, oneof="type_job", message="SparkSqlJob", ) presto_job = proto.Field( proto.MESSAGE, number=23, oneof="type_job", message="PrestoJob", ) status = proto.Field( proto.MESSAGE, number=8, message="JobStatus", ) status_history = proto.RepeatedField( proto.MESSAGE, number=13, message="JobStatus", ) yarn_applications = proto.RepeatedField( proto.MESSAGE, number=9, message="YarnApplication", ) submitted_by = proto.Field( proto.STRING, number=10, ) driver_output_resource_uri = proto.Field( proto.STRING, number=17, ) driver_control_files_uri = proto.Field( proto.STRING, number=15, ) labels = proto.MapField( proto.STRING, proto.STRING, number=18, ) scheduling = proto.Field( proto.MESSAGE, number=20, message="JobScheduling", ) job_uuid = proto.Field( proto.STRING, number=22, ) done = proto.Field( proto.BOOL, number=24, )
class IndexEndpoint(proto.Message): r"""Indexes are deployed into it. An IndexEndpoint can have multiple DeployedIndexes. Attributes: name (str): Output only. The resource name of the IndexEndpoint. display_name (str): Required. The display name of the IndexEndpoint. The name can be up to 128 characters long and can consist of any UTF-8 characters. description (str): The description of the IndexEndpoint. deployed_indexes (Sequence[google.cloud.aiplatform_v1.types.DeployedIndex]): Output only. The indexes deployed in this endpoint. etag (str): Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens. labels (Sequence[google.cloud.aiplatform_v1.types.IndexEndpoint.LabelsEntry]): The labels with user-defined metadata to organize your IndexEndpoints. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels. create_time (google.protobuf.timestamp_pb2.Timestamp): Output only. Timestamp when this IndexEndpoint was created. update_time (google.protobuf.timestamp_pb2.Timestamp): Output only. Timestamp when this IndexEndpoint was last updated. This timestamp is not updated when the endpoint's DeployedIndexes are updated, e.g. due to updates of the original Indexes they are the deployments of. network (str): Optional. The full name of the Google Compute Engine `network <https://cloud.google.com/compute/docs/networks-and-firewalls#networks>`__ to which the IndexEndpoint should be peered. Private services access must already be configured for the network. If left unspecified, the Endpoint is not peered with any network. Only one of the fields, [network][google.cloud.aiplatform.v1.IndexEndpoint.network] or [enable_private_service_connect][google.cloud.aiplatform.v1.IndexEndpoint.enable_private_service_connect], can be set. 
`Format <https://cloud.google.com/compute/docs/reference/rest/v1/networks/insert>`__: projects/{project}/global/networks/{network}. Where {project} is a project number, as in '12345', and {network} is network name. enable_private_service_connect (bool): Optional. If true, expose the IndexEndpoint via private service connect. Only one of the fields, [network][google.cloud.aiplatform.v1.IndexEndpoint.network] or [enable_private_service_connect][google.cloud.aiplatform.v1.IndexEndpoint.enable_private_service_connect], can be set. """ name = proto.Field( proto.STRING, number=1, ) display_name = proto.Field( proto.STRING, number=2, ) description = proto.Field( proto.STRING, number=3, ) deployed_indexes = proto.RepeatedField( proto.MESSAGE, number=4, message="DeployedIndex", ) etag = proto.Field( proto.STRING, number=5, ) labels = proto.MapField( proto.STRING, proto.STRING, number=6, ) create_time = proto.Field( proto.MESSAGE, number=7, message=timestamp_pb2.Timestamp, ) update_time = proto.Field( proto.MESSAGE, number=8, message=timestamp_pb2.Timestamp, ) network = proto.Field( proto.STRING, number=9, ) enable_private_service_connect = proto.Field( proto.BOOL, number=10, )
class ApiConfig(proto.Message): r"""An API Configuration is a combination of settings for both the Managed Service and Gateways serving this API Config. Attributes: name (str): Output only. Resource name of the API Config. Format: projects/{project}/locations/global/apis/{api}/configs/{api_config} create_time (google.protobuf.timestamp_pb2.Timestamp): Output only. Created time. update_time (google.protobuf.timestamp_pb2.Timestamp): Output only. Updated time. labels (Sequence[google.cloud.apigateway_v1.types.ApiConfig.LabelsEntry]): Optional. Resource labels to represent user-provided metadata. Refer to cloud documentation on labels for more details. https://cloud.google.com/compute/docs/labeling-resources display_name (str): Optional. Display name. gateway_service_account (str): Immutable. The Google Cloud IAM Service Account that Gateways serving this config should use to authenticate to other services. This may either be the Service Account's email (``{ACCOUNT_ID}@{PROJECT}.iam.gserviceaccount.com``) or its full resource name (``projects/{PROJECT}/accounts/{UNIQUE_ID}``). This is most often used when the service is a GCP resource such as a Cloud Run Service or an IAP-secured service. service_config_id (str): Output only. The ID of the associated Service Config (https://cloud.google.com/service-infrastructure/docs/glossary#config). state (google.cloud.apigateway_v1.types.ApiConfig.State): Output only. State of the API Config. openapi_documents (Sequence[google.cloud.apigateway_v1.types.ApiConfig.OpenApiDocument]): Optional. OpenAPI specification documents. If specified, grpc_services and managed_service_configs must not be included. grpc_services (Sequence[google.cloud.apigateway_v1.types.ApiConfig.GrpcServiceDefinition]): Optional. gRPC service definition files. If specified, openapi_documents must not be included. managed_service_configs (Sequence[google.cloud.apigateway_v1.types.ApiConfig.File]): Optional. Service Configuration files. 
At least one must be included when using gRPC service definitions. See https://cloud.google.com/endpoints/docs/grpc/grpc-service-config#service_configuration_overview for the expected file contents. If multiple files are specified, the files are merged with the following rules: - All singular scalar fields are merged using "last one wins" semantics in the order of the files uploaded. - Repeated fields are concatenated. - Singular embedded messages are merged using these rules for nested fields. """ class State(proto.Enum): r"""All the possible API Config states.""" STATE_UNSPECIFIED = 0 CREATING = 1 ACTIVE = 2 FAILED = 3 DELETING = 4 UPDATING = 5 ACTIVATING = 6 class File(proto.Message): r"""A lightweight description of a file. Attributes: path (str): The file path (full or relative path). This is typically the path of the file when it is uploaded. contents (bytes): The bytes that constitute the file. """ path = proto.Field( proto.STRING, number=1, ) contents = proto.Field( proto.BYTES, number=2, ) class OpenApiDocument(proto.Message): r"""An OpenAPI Specification Document describing an API. Attributes: document (google.cloud.apigateway_v1.types.ApiConfig.File): The OpenAPI Specification document file. """ document = proto.Field( proto.MESSAGE, number=1, message='ApiConfig.File', ) class GrpcServiceDefinition(proto.Message): r"""A gRPC service definition. Attributes: file_descriptor_set (google.cloud.apigateway_v1.types.ApiConfig.File): Input only. File descriptor set, generated by protoc. To generate, use protoc with imports and source info included. For an example test.proto file, the following command would put the value in a new file named out.pb. $ protoc --include_imports --include_source_info test.proto -o out.pb source (Sequence[google.cloud.apigateway_v1.types.ApiConfig.File]): Optional. Uncompiled proto files associated with the descriptor set, used for display purposes (server-side compilation is not supported). 
These should match the inputs to 'protoc' command used to generate file_descriptor_set. """ file_descriptor_set = proto.Field( proto.MESSAGE, number=1, message='ApiConfig.File', ) source = proto.RepeatedField( proto.MESSAGE, number=2, message='ApiConfig.File', ) name = proto.Field( proto.STRING, number=1, ) create_time = proto.Field( proto.MESSAGE, number=2, message=timestamp_pb2.Timestamp, ) update_time = proto.Field( proto.MESSAGE, number=3, message=timestamp_pb2.Timestamp, ) labels = proto.MapField( proto.STRING, proto.STRING, number=4, ) display_name = proto.Field( proto.STRING, number=5, ) gateway_service_account = proto.Field( proto.STRING, number=14, ) service_config_id = proto.Field( proto.STRING, number=12, ) state = proto.Field( proto.ENUM, number=8, enum=State, ) openapi_documents = proto.RepeatedField( proto.MESSAGE, number=9, message=OpenApiDocument, ) grpc_services = proto.RepeatedField( proto.MESSAGE, number=10, message=GrpcServiceDefinition, ) managed_service_configs = proto.RepeatedField( proto.MESSAGE, number=11, message=File, )
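The merge rules stated for multiple ``managed_service_configs`` files (scalars use "last one wins" in upload order, repeated fields are concatenated, embedded messages are merged recursively by the same rules) can be sketched on plain dicts. Dicts stand in for parsed service config files here; this is an illustration of the stated rules, not the server-side implementation.

```python
def merge_service_configs(configs):
    """Merge service configs per the rules in the docstring above.

    - singular scalar fields: last one wins, in upload order
    - repeated fields (lists): concatenated
    - singular embedded messages (nested dicts): merged recursively
    """
    merged = {}
    for config in configs:
        for key, value in config.items():
            if isinstance(value, dict) and isinstance(merged.get(key), dict):
                # Singular embedded message: recurse with the same rules.
                merged[key] = merge_service_configs([merged[key], value])
            elif isinstance(value, list) and isinstance(merged.get(key), list):
                # Repeated field: concatenate.
                merged[key] = merged[key] + value
            else:
                # Singular scalar (or first occurrence): last one wins.
                merged[key] = value
    return merged
```

For example, two uploaded files that both set ``title`` and both add ``apis`` entries produce a merged config with the second file's ``title`` and the concatenation of both ``apis`` lists.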
class AppEngineHttpRequest(proto.Message): r"""App Engine HTTP request. The message defines the HTTP request that is sent to an App Engine app when the task is dispatched. Using [AppEngineHttpRequest][google.cloud.tasks.v2beta3.AppEngineHttpRequest] requires ```appengine.applications.get`` <https://cloud.google.com/appengine/docs/admin-api/access-control>`__ Google IAM permission for the project and the following scope: ``https://www.googleapis.com/auth/cloud-platform`` The task will be delivered to the App Engine app which belongs to the same project as the queue. For more information, see `How Requests are Routed <https://cloud.google.com/appengine/docs/standard/python/how-requests-are-routed>`__ and how routing is affected by `dispatch files <https://cloud.google.com/appengine/docs/python/config/dispatchref>`__. Traffic is encrypted during transport and never leaves Google datacenters. Because this traffic is carried over a communication mechanism internal to Google, you cannot explicitly set the protocol (for example, HTTP or HTTPS). The request to the handler, however, will appear to have used the HTTP protocol. The [AppEngineRouting][google.cloud.tasks.v2beta3.AppEngineRouting] used to construct the URL that the task is delivered to can be set at the queue-level or task-level: - If set, [app_engine_routing_override][google.cloud.tasks.v2beta3.AppEngineHttpQueue.app_engine_routing_override] is used for all tasks in the queue, no matter what the setting is for the [task-level app_engine_routing][google.cloud.tasks.v2beta3.AppEngineHttpRequest.app_engine_routing]. The ``url`` that the task will be sent to is: - ``url =`` [host][google.cloud.tasks.v2beta3.AppEngineRouting.host] ``+`` [relative_uri][google.cloud.tasks.v2beta3.AppEngineHttpRequest.relative_uri] Tasks can be dispatched to secure app handlers, unsecure app handlers, and URIs restricted with ```login: admin`` <https://cloud.google.com/appengine/docs/standard/python/config/appref>`__. 
Because tasks are not run as any user, they cannot be dispatched to URIs restricted with ```login: required`` <https://cloud.google.com/appengine/docs/standard/python/config/appref>`__ Task dispatches also do not follow redirects. The task attempt has succeeded if the app's request handler returns an HTTP response code in the range [``200`` - ``299``]. The task attempt has failed if the app's handler returns a non-2xx response code or Cloud Tasks does not receive a response before the [deadline][google.cloud.tasks.v2beta3.Task.dispatch_deadline]. Failed tasks will be retried according to the [retry configuration][google.cloud.tasks.v2beta3.Queue.retry_config]. ``503`` (Service Unavailable) is considered an App Engine system error instead of an application error and will cause Cloud Tasks' traffic congestion control to temporarily throttle the queue's dispatches. Unlike other types of task targets, a ``429`` (Too Many Requests) response from an app handler does not cause traffic congestion control to throttle the queue. Attributes: http_method (~.target.HttpMethod): The HTTP method to use for the request. The default is POST. The app's request handler for the task's target URL must be able to handle HTTP requests with this http_method, otherwise the task attempt fails with error code 405 (Method Not Allowed). See `Writing a push task request handler <https://cloud.google.com/appengine/docs/java/taskqueue/push/creating-handlers#writing_a_push_task_request_handler>`__ and the App Engine documentation for your runtime on `How Requests are Handled <https://cloud.google.com/appengine/docs/standard/python3/how-requests-are-handled>`__. app_engine_routing (~.target.AppEngineRouting): Task-level setting for App Engine routing. 
If set, [app_engine_routing_override][google.cloud.tasks.v2beta3.AppEngineHttpQueue.app_engine_routing_override] is used for all tasks in the queue, no matter what the setting is for the [task-level app_engine_routing][google.cloud.tasks.v2beta3.AppEngineHttpRequest.app_engine_routing]. relative_uri (str): The relative URI. The relative URI must begin with "/" and must be a valid HTTP relative URI. It can contain a path and query string arguments. If the relative URI is empty, then the root path "/" will be used. No spaces are allowed, and the maximum length allowed is 2083 characters. headers (Sequence[~.target.AppEngineHttpRequest.HeadersEntry]): HTTP request headers. This map contains the header field names and values. Headers can be set when the [task is created][google.cloud.tasks.v2beta3.CloudTasks.CreateTask]. Repeated headers are not supported but a header value can contain commas. Cloud Tasks sets some headers to default values: - ``User-Agent``: By default, this header is ``"AppEngine-Google; (+http://code.google.com/appengine)"``. This header can be modified, but Cloud Tasks will append ``"AppEngine-Google; (+http://code.google.com/appengine)"`` to the modified ``User-Agent``. If the task has a [body][google.cloud.tasks.v2beta3.AppEngineHttpRequest.body], Cloud Tasks sets the following headers: - ``Content-Type``: By default, the ``Content-Type`` header is set to ``"application/octet-stream"``. The default can be overridden by explicitly setting ``Content-Type`` to a particular media type when the [task is created][google.cloud.tasks.v2beta3.CloudTasks.CreateTask]. For example, ``Content-Type`` can be set to ``"application/json"``. - ``Content-Length``: This is computed by Cloud Tasks. This value is output only. It cannot be changed. 
The headers below cannot be set or overridden: - ``Host`` - ``X-Google-\*`` - ``X-AppEngine-\*`` In addition, Cloud Tasks sets some headers when the task is dispatched, such as headers containing information about the task; see `request headers <https://cloud.google.com/tasks/docs/creating-appengine-handlers#reading_request_headers>`__. These headers are set only when the task is dispatched, so they are not visible when the task is returned in a Cloud Tasks response. Although there is no specific limit for the maximum number of headers or the size, there is a limit on the maximum size of the [Task][google.cloud.tasks.v2beta3.Task]. For more information, see the [CreateTask][google.cloud.tasks.v2beta3.CloudTasks.CreateTask] documentation. body (bytes): HTTP request body. A request body is allowed only if the HTTP method is POST or PUT. It is an error to set a body on a task with an incompatible [HttpMethod][google.cloud.tasks.v2beta3.HttpMethod]. """ http_method = proto.Field( proto.ENUM, number=1, enum="HttpMethod", ) app_engine_routing = proto.Field( proto.MESSAGE, number=2, message="AppEngineRouting", ) relative_uri = proto.Field(proto.STRING, number=3) headers = proto.MapField(proto.STRING, proto.STRING, number=4) body = proto.Field(proto.BYTES, number=5)
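The constraints documented above for AppEngineHttpRequest can be sketched as a client-side pre-check. The validator below is a hypothetical helper, not part of the Cloud Tasks API surface; it captures the documented rules for the body, the relative URI, and the reserved headers.

```python
# Hypothetical client-side validator sketching the documented constraints
# on AppEngineHttpRequest (not part of the Cloud Tasks library).
RESERVED_HEADER_PREFIXES = ("X-Google-", "X-AppEngine-")
RESERVED_HEADERS = {"Host"}

def validate_app_engine_http_request(http_method, relative_uri, headers, body):
    errors = []
    # A request body is allowed only if the HTTP method is POST or PUT.
    if body and http_method not in ("POST", "PUT"):
        errors.append("body is only allowed for POST or PUT")
    # The relative URI must begin with "/", contain no spaces, and be at
    # most 2083 characters long; empty means the root path "/".
    uri = relative_uri or "/"
    if not uri.startswith("/"):
        errors.append('relative_uri must begin with "/"')
    if " " in uri:
        errors.append("relative_uri must not contain spaces")
    if len(uri) > 2083:
        errors.append("relative_uri exceeds 2083 characters")
    # Host, X-Google-*, and X-AppEngine-* headers cannot be set.
    for name in headers:
        if name in RESERVED_HEADERS or name.startswith(RESERVED_HEADER_PREFIXES):
            errors.append(f"header {name!r} cannot be set")
    return errors
```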
class Gateway(proto.Message): r"""A Gateway is an API-aware HTTP proxy. It performs API-Method and/or API-Consumer specific actions based on an API Config such as authentication, policy enforcement, and backend selection. Attributes: name (str): Output only. Resource name of the Gateway. Format: projects/{project}/locations/{location}/gateways/{gateway} create_time (google.protobuf.timestamp_pb2.Timestamp): Output only. Created time. update_time (google.protobuf.timestamp_pb2.Timestamp): Output only. Updated time. labels (Sequence[google.cloud.apigateway_v1.types.Gateway.LabelsEntry]): Optional. Resource labels to represent user-provided metadata. Refer to cloud documentation on labels for more details. https://cloud.google.com/compute/docs/labeling-resources display_name (str): Optional. Display name. api_config (str): Required. Resource name of the API Config for this Gateway. Format: projects/{project}/locations/global/apis/{api}/configs/{apiConfig} state (google.cloud.apigateway_v1.types.Gateway.State): Output only. The current state of the Gateway. default_hostname (str): Output only. The default API Gateway host name of the form ``{gateway_id}-{hash}.{region_code}.gateway.dev``. """ class State(proto.Enum): r"""All the possible Gateway states.""" STATE_UNSPECIFIED = 0 CREATING = 1 ACTIVE = 2 FAILED = 3 DELETING = 4 UPDATING = 5 name = proto.Field( proto.STRING, number=1, ) create_time = proto.Field( proto.MESSAGE, number=2, message=timestamp_pb2.Timestamp, ) update_time = proto.Field( proto.MESSAGE, number=3, message=timestamp_pb2.Timestamp, ) labels = proto.MapField( proto.STRING, proto.STRING, number=4, ) display_name = proto.Field( proto.STRING, number=5, ) api_config = proto.Field( proto.STRING, number=6, ) state = proto.Field( proto.ENUM, number=7, enum=State, ) default_hostname = proto.Field( proto.STRING, number=9, )
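The documented ``default_hostname`` format lends itself to a small parsing sketch. The helper and regex below are illustrative (not part of the library) and assume the documented shape ``{gateway_id}-{hash}.{region_code}.gateway.dev``, where the gateway ID itself may contain hyphens.

```python
import re

# Illustrative parser (not part of the library) for the documented
# default_hostname format "{gateway_id}-{hash}.{region_code}.gateway.dev".
HOSTNAME_RE = re.compile(
    r"^(?P<gateway_id>.+)-(?P<hash>[^-.]+)\.(?P<region_code>[^.]+)\.gateway\.dev$"
)

def parse_default_hostname(hostname):
    match = HOSTNAME_RE.match(hostname)
    if not match:
        raise ValueError(f"not a default API Gateway hostname: {hostname}")
    return match.group("gateway_id"), match.group("hash"), match.group("region_code")
```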
class Asset(proto.Message): r"""Security Command Center representation of a Google Cloud resource. The Asset is a Security Command Center resource that captures information about a single Google Cloud resource. All modifications to an Asset are only within the context of Security Command Center and don't affect the referenced Google Cloud resource. Attributes: name (str): The relative resource name of this asset. See: https://cloud.google.com/apis/design/resource_names#relative_resource_name Example: "organizations/{organization_id}/assets/{asset_id}". security_center_properties (google.cloud.securitycenter_v1p1beta1.types.Asset.SecurityCenterProperties): Security Command Center managed properties. These properties are managed by Security Command Center and cannot be modified by the user. resource_properties (Sequence[google.cloud.securitycenter_v1p1beta1.types.Asset.ResourcePropertiesEntry]): Resource managed properties. These properties are managed and defined by the Google Cloud resource and cannot be modified by the user. security_marks (google.cloud.securitycenter_v1p1beta1.types.SecurityMarks): User specified security marks. These marks are entirely managed by the user and come from the SecurityMarks resource that belongs to the asset. create_time (google.protobuf.timestamp_pb2.Timestamp): The time at which the asset was created in Security Command Center. update_time (google.protobuf.timestamp_pb2.Timestamp): The time at which the asset was last updated or added in Cloud SCC. iam_policy (google.cloud.securitycenter_v1p1beta1.types.Asset.IamPolicy): Cloud IAM Policy information associated with the Google Cloud resource described by the Security Command Center asset. This information is managed and defined by the Google Cloud resource and cannot be modified by the user. canonical_name (str): The canonical name of the resource. 
It's either "organizations/{organization_id}/assets/{asset_id}", "folders/{folder_id}/assets/{asset_id}" or "projects/{project_number}/assets/{asset_id}", depending on the closest CRM ancestor of the resource. """ class SecurityCenterProperties(proto.Message): r"""Security Command Center managed properties. These properties are managed by Security Command Center and cannot be modified by the user. Attributes: resource_name (str): The full resource name of the Google Cloud resource this asset represents. This field is immutable after create time. See: https://cloud.google.com/apis/design/resource_names#full_resource_name resource_type (str): The type of the Google Cloud resource. Examples include: APPLICATION, PROJECT, and ORGANIZATION. This is a case insensitive field defined by Security Command Center and/or the producer of the resource and is immutable after create time. resource_parent (str): The full resource name of the immediate parent of the resource. See: https://cloud.google.com/apis/design/resource_names#full_resource_name resource_project (str): The full resource name of the project the resource belongs to. See: https://cloud.google.com/apis/design/resource_names#full_resource_name resource_owners (Sequence[str]): Owners of the Google Cloud resource. resource_display_name (str): The user defined display name for this resource. resource_parent_display_name (str): The user defined display name for the parent of this resource. resource_project_display_name (str): The user defined display name for the project of this resource. folders (Sequence[google.cloud.securitycenter_v1p1beta1.types.Folder]): Contains a Folder message for each folder in the assets ancestry. The first folder is the deepest nested folder, and the last folder is the folder directly under the Organization. 
""" resource_name = proto.Field( proto.STRING, number=1, ) resource_type = proto.Field( proto.STRING, number=2, ) resource_parent = proto.Field( proto.STRING, number=3, ) resource_project = proto.Field( proto.STRING, number=4, ) resource_owners = proto.RepeatedField( proto.STRING, number=5, ) resource_display_name = proto.Field( proto.STRING, number=6, ) resource_parent_display_name = proto.Field( proto.STRING, number=7, ) resource_project_display_name = proto.Field( proto.STRING, number=8, ) folders = proto.RepeatedField( proto.MESSAGE, number=10, message=folder.Folder, ) class IamPolicy(proto.Message): r"""Cloud IAM Policy information associated with the Google Cloud resource described by the Security Command Center asset. This information is managed and defined by the Google Cloud resource and cannot be modified by the user. Attributes: policy_blob (str): The JSON representation of the Policy associated with the asset. See https://cloud.google.com/iam/docs/reference/rest/v1/Policy for format details. """ policy_blob = proto.Field( proto.STRING, number=1, ) name = proto.Field( proto.STRING, number=1, ) security_center_properties = proto.Field( proto.MESSAGE, number=2, message=SecurityCenterProperties, ) resource_properties = proto.MapField( proto.STRING, proto.MESSAGE, number=7, message=struct_pb2.Value, ) security_marks = proto.Field( proto.MESSAGE, number=8, message=gcs_security_marks.SecurityMarks, ) create_time = proto.Field( proto.MESSAGE, number=9, message=timestamp_pb2.Timestamp, ) update_time = proto.Field( proto.MESSAGE, number=10, message=timestamp_pb2.Timestamp, ) iam_policy = proto.Field( proto.MESSAGE, number=11, message=IamPolicy, ) canonical_name = proto.Field( proto.STRING, number=13, )
class Api(proto.Message): r"""An API that can be served by one or more Gateways. Attributes: name (str): Output only. Resource name of the API. Format: projects/{project}/locations/global/apis/{api} create_time (google.protobuf.timestamp_pb2.Timestamp): Output only. Created time. update_time (google.protobuf.timestamp_pb2.Timestamp): Output only. Updated time. labels (Sequence[google.cloud.apigateway_v1.types.Api.LabelsEntry]): Optional. Resource labels to represent user-provided metadata. Refer to cloud documentation on labels for more details. https://cloud.google.com/compute/docs/labeling-resources display_name (str): Optional. Display name. managed_service (str): Optional. Immutable. The name of a Google Managed Service (https://cloud.google.com/service-infrastructure/docs/glossary#managed). If not specified, a new Service will automatically be created in the same project as this API. state (google.cloud.apigateway_v1.types.Api.State): Output only. State of the API. """ class State(proto.Enum): r"""All the possible API states.""" STATE_UNSPECIFIED = 0 CREATING = 1 ACTIVE = 2 FAILED = 3 DELETING = 4 UPDATING = 5 name = proto.Field( proto.STRING, number=1, ) create_time = proto.Field( proto.MESSAGE, number=2, message=timestamp_pb2.Timestamp, ) update_time = proto.Field( proto.MESSAGE, number=3, message=timestamp_pb2.Timestamp, ) labels = proto.MapField( proto.STRING, proto.STRING, number=4, ) display_name = proto.Field( proto.STRING, number=5, ) managed_service = proto.Field( proto.STRING, number=7, ) state = proto.Field( proto.ENUM, number=12, enum=State, )
class ExplanationMetadata(proto.Message): r"""Metadata describing the Model's input and output for explanation. Attributes: inputs (Sequence[google.cloud.aiplatform_v1beta1.types.ExplanationMetadata.InputsEntry]): Required. Map from feature names to feature input metadata. Keys are the name of the features. Values are the specification of the feature. An empty InputMetadata is valid. It describes a text feature which has the name specified as the key in [ExplanationMetadata.inputs][google.cloud.aiplatform.v1beta1.ExplanationMetadata.inputs]. The baseline of the empty feature is chosen by Vertex AI. For Vertex AI-provided Tensorflow images, the key can be any friendly name of the feature. Once specified, [featureAttributions][google.cloud.aiplatform.v1beta1.Attribution.feature_attributions] are keyed by this key (if not grouped with another feature). For custom images, the key must match with the key in [instance][google.cloud.aiplatform.v1beta1.ExplainRequest.instances]. outputs (Sequence[google.cloud.aiplatform_v1beta1.types.ExplanationMetadata.OutputsEntry]): Required. Map from output names to output metadata. For Vertex AI-provided Tensorflow images, keys can be any user defined string that consists of any UTF-8 characters. For custom images, keys are the name of the output field in the prediction to be explained. Currently only one key is allowed. feature_attributions_schema_uri (str): Points to a YAML file stored on Google Cloud Storage describing the format of the [feature attributions][google.cloud.aiplatform.v1beta1.Attribution.feature_attributions]. The schema is defined as an OpenAPI 3.0.2 `Schema Object <https://github.com/OAI/OpenAPI-Specification/blob/main/versions/3.0.2.md#schemaObject>`__. AutoML tabular Models always have this field populated by Vertex AI. Note: The URI given on output may be different, including the URI scheme, than the one given on input. The output URI will point to a location where the user only has a read access. 
""" class InputMetadata(proto.Message): r"""Metadata of the input of a feature. Fields other than [InputMetadata.input_baselines][google.cloud.aiplatform.v1beta1.ExplanationMetadata.InputMetadata.input_baselines] are applicable only for Models that are using Vertex AI-provided images for Tensorflow. Attributes: input_baselines (Sequence[google.protobuf.struct_pb2.Value]): Baseline inputs for this feature. If no baseline is specified, Vertex AI chooses the baseline for this feature. If multiple baselines are specified, Vertex AI returns the average attributions across them in [Attribution.feature_attributions][google.cloud.aiplatform.v1beta1.Attribution.feature_attributions]. For Vertex AI-provided Tensorflow images (both 1.x and 2.x), the shape of each baseline must match the shape of the input tensor. If a scalar is provided, we broadcast to the same shape as the input tensor. For custom images, the element of the baselines must be in the same format as the feature's input in the [instance][google.cloud.aiplatform.v1beta1.ExplainRequest.instances][]. The schema of any single instance may be specified via Endpoint's DeployedModels' [Model's][google.cloud.aiplatform.v1beta1.DeployedModel.model] [PredictSchemata's][google.cloud.aiplatform.v1beta1.Model.predict_schemata] [instance_schema_uri][google.cloud.aiplatform.v1beta1.PredictSchemata.instance_schema_uri]. input_tensor_name (str): Name of the input tensor for this feature. Required and is only applicable to Vertex AI-provided images for Tensorflow. encoding (google.cloud.aiplatform_v1beta1.types.ExplanationMetadata.InputMetadata.Encoding): Defines how the feature is encoded into the input tensor. Defaults to IDENTITY. modality (str): Modality of the feature. Valid values are: numeric, image. Defaults to numeric. feature_value_domain (google.cloud.aiplatform_v1beta1.types.ExplanationMetadata.InputMetadata.FeatureValueDomain): The domain details of the input feature value. 
Like min/max, original mean or standard deviation if normalized. indices_tensor_name (str): Specifies the index of the values of the input tensor. Required when the input tensor is a sparse representation. Refer to Tensorflow documentation for more details: https://www.tensorflow.org/api_docs/python/tf/sparse/SparseTensor. dense_shape_tensor_name (str): Specifies the shape of the values of the input if the input is a sparse representation. Refer to Tensorflow documentation for more details: https://www.tensorflow.org/api_docs/python/tf/sparse/SparseTensor. index_feature_mapping (Sequence[str]): A list of feature names for each index in the input tensor. Required when the input [InputMetadata.encoding][google.cloud.aiplatform.v1beta1.ExplanationMetadata.InputMetadata.encoding] is BAG_OF_FEATURES, BAG_OF_FEATURES_SPARSE, INDICATOR. encoded_tensor_name (str): Encoded tensor is a transformation of the input tensor. Must be provided if choosing [Integrated Gradients attribution][google.cloud.aiplatform.v1beta1.ExplanationParameters.integrated_gradients_attribution] or [XRAI attribution][google.cloud.aiplatform.v1beta1.ExplanationParameters.xrai_attribution] and the input tensor is not differentiable. An encoded tensor is generated if the input tensor is encoded by a lookup table. encoded_baselines (Sequence[google.protobuf.struct_pb2.Value]): A list of baselines for the encoded tensor. The shape of each baseline should match the shape of the encoded tensor. If a scalar is provided, Vertex AI broadcasts to the same shape as the encoded tensor. visualization (google.cloud.aiplatform_v1beta1.types.ExplanationMetadata.InputMetadata.Visualization): Visualization configurations for image explanation. group_name (str): Name of the group that the input belongs to. Features with the same group name will be treated as one feature when computing attributions. Features grouped together can have different shapes in value. 
If provided, there will be one single attribution generated in [Attribution.feature_attributions][google.cloud.aiplatform.v1beta1.Attribution.feature_attributions], keyed by the group name. """ class Encoding(proto.Enum): r"""Defines how a feature is encoded. Defaults to IDENTITY.""" ENCODING_UNSPECIFIED = 0 IDENTITY = 1 BAG_OF_FEATURES = 2 BAG_OF_FEATURES_SPARSE = 3 INDICATOR = 4 COMBINED_EMBEDDING = 5 CONCAT_EMBEDDING = 6 class FeatureValueDomain(proto.Message): r"""Domain details of the input feature value. Provides numeric information about the feature, such as its range (min, max). If the feature has been pre-processed, for example with z-scoring, then it provides information about how to recover the original feature. For example, if the input feature is an image and it has been pre-processed to obtain 0-mean and stddev = 1 values, then original_mean, and original_stddev refer to the mean and stddev of the original feature (e.g. image tensor) from which input feature (with mean = 0 and stddev = 1) was obtained. Attributes: min_value (float): The minimum permissible value for this feature. max_value (float): The maximum permissible value for this feature. original_mean (float): If this input feature has been normalized to a mean value of 0, the original_mean specifies the mean value of the domain prior to normalization. original_stddev (float): If this input feature has been normalized to a standard deviation of 1.0, the original_stddev specifies the standard deviation of the domain prior to normalization. """ min_value = proto.Field( proto.FLOAT, number=1, ) max_value = proto.Field( proto.FLOAT, number=2, ) original_mean = proto.Field( proto.FLOAT, number=3, ) original_stddev = proto.Field( proto.FLOAT, number=4, ) class Visualization(proto.Message): r"""Visualization configurations for image explanation. Attributes: type_ (google.cloud.aiplatform_v1beta1.types.ExplanationMetadata.InputMetadata.Visualization.Type): Type of the image visualization. 
Only applicable to [Integrated Gradients attribution][google.cloud.aiplatform.v1beta1.ExplanationParameters.integrated_gradients_attribution]. OUTLINES shows regions of attribution, while PIXELS shows per-pixel attribution. Defaults to OUTLINES. polarity (google.cloud.aiplatform_v1beta1.types.ExplanationMetadata.InputMetadata.Visualization.Polarity): Whether to only highlight pixels with positive contributions, negative or both. Defaults to POSITIVE. color_map (google.cloud.aiplatform_v1beta1.types.ExplanationMetadata.InputMetadata.Visualization.ColorMap): The color scheme used for the highlighted areas. Defaults to PINK_GREEN for [Integrated Gradients attribution][google.cloud.aiplatform.v1beta1.ExplanationParameters.integrated_gradients_attribution], which shows positive attributions in green and negative in pink. Defaults to VIRIDIS for [XRAI attribution][google.cloud.aiplatform.v1beta1.ExplanationParameters.xrai_attribution], which highlights the most influential regions in yellow and the least influential in blue. clip_percent_upperbound (float): Excludes attributions above the specified percentile from the highlighted areas. Using the clip_percent_upperbound and clip_percent_lowerbound together can be useful for filtering out noise and making it easier to see areas of strong attribution. Defaults to 99.9. clip_percent_lowerbound (float): Excludes attributions below the specified percentile, from the highlighted areas. Defaults to 62. overlay_type (google.cloud.aiplatform_v1beta1.types.ExplanationMetadata.InputMetadata.Visualization.OverlayType): How the original image is displayed in the visualization. Adjusting the overlay can help increase visual clarity if the original image makes it difficult to view the visualization. Defaults to NONE. """ class Type(proto.Enum): r"""Type of the image visualization. Only applicable to [Integrated Gradients attribution][google.cloud.aiplatform.v1beta1.ExplanationParameters.integrated_gradients_attribution]. 
""" TYPE_UNSPECIFIED = 0 PIXELS = 1 OUTLINES = 2 class Polarity(proto.Enum): r"""Whether to only highlight pixels with positive contributions, negative or both. Defaults to POSITIVE. """ POLARITY_UNSPECIFIED = 0 POSITIVE = 1 NEGATIVE = 2 BOTH = 3 class ColorMap(proto.Enum): r"""The color scheme used for highlighting areas.""" COLOR_MAP_UNSPECIFIED = 0 PINK_GREEN = 1 VIRIDIS = 2 RED = 3 GREEN = 4 RED_GREEN = 6 PINK_WHITE_GREEN = 5 class OverlayType(proto.Enum): r"""How the original image is displayed in the visualization.""" OVERLAY_TYPE_UNSPECIFIED = 0 NONE = 1 ORIGINAL = 2 GRAYSCALE = 3 MASK_BLACK = 4 type_ = proto.Field( proto.ENUM, number=1, enum="ExplanationMetadata.InputMetadata.Visualization.Type", ) polarity = proto.Field( proto.ENUM, number=2, enum="ExplanationMetadata.InputMetadata.Visualization.Polarity", ) color_map = proto.Field( proto.ENUM, number=3, enum="ExplanationMetadata.InputMetadata.Visualization.ColorMap", ) clip_percent_upperbound = proto.Field( proto.FLOAT, number=4, ) clip_percent_lowerbound = proto.Field( proto.FLOAT, number=5, ) overlay_type = proto.Field( proto.ENUM, number=6, enum= "ExplanationMetadata.InputMetadata.Visualization.OverlayType", ) input_baselines = proto.RepeatedField( proto.MESSAGE, number=1, message=struct_pb2.Value, ) input_tensor_name = proto.Field( proto.STRING, number=2, ) encoding = proto.Field( proto.ENUM, number=3, enum="ExplanationMetadata.InputMetadata.Encoding", ) modality = proto.Field( proto.STRING, number=4, ) feature_value_domain = proto.Field( proto.MESSAGE, number=5, message="ExplanationMetadata.InputMetadata.FeatureValueDomain", ) indices_tensor_name = proto.Field( proto.STRING, number=6, ) dense_shape_tensor_name = proto.Field( proto.STRING, number=7, ) index_feature_mapping = proto.RepeatedField( proto.STRING, number=8, ) encoded_tensor_name = proto.Field( proto.STRING, number=9, ) encoded_baselines = proto.RepeatedField( proto.MESSAGE, number=10, message=struct_pb2.Value, ) visualization = 
proto.Field( proto.MESSAGE, number=11, message="ExplanationMetadata.InputMetadata.Visualization", ) group_name = proto.Field( proto.STRING, number=12, ) class OutputMetadata(proto.Message): r"""Metadata of the prediction output to be explained. This message has `oneof`_ fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members. .. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields Attributes: index_display_name_mapping (google.protobuf.struct_pb2.Value): Static mapping between the index and display name. Use this if the outputs are a deterministic n-dimensional array, e.g. a list of scores of all the classes in a pre-defined order for a multi-classification Model. It's not feasible if the outputs are non-deterministic, e.g. the Model produces top-k classes or sort the outputs by their values. The shape of the value must be an n-dimensional array of strings. The number of dimensions must match that of the outputs to be explained. The [Attribution.output_display_name][google.cloud.aiplatform.v1beta1.Attribution.output_display_name] is populated by locating in the mapping with [Attribution.output_index][google.cloud.aiplatform.v1beta1.Attribution.output_index]. This field is a member of `oneof`_ ``display_name_mapping``. display_name_mapping_key (str): Specify a field name in the prediction to look for the display name. Use this if the prediction contains the display names for the outputs. The display names in the prediction must have the same shape of the outputs, so that it can be located by [Attribution.output_index][google.cloud.aiplatform.v1beta1.Attribution.output_index] for a specific output. This field is a member of `oneof`_ ``display_name_mapping``. output_tensor_name (str): Name of the output tensor. Required and is only applicable to Vertex AI provided images for Tensorflow. 
""" index_display_name_mapping = proto.Field( proto.MESSAGE, number=1, oneof="display_name_mapping", message=struct_pb2.Value, ) display_name_mapping_key = proto.Field( proto.STRING, number=2, oneof="display_name_mapping", ) output_tensor_name = proto.Field( proto.STRING, number=3, ) inputs = proto.MapField( proto.STRING, proto.MESSAGE, number=1, message=InputMetadata, ) outputs = proto.MapField( proto.STRING, proto.MESSAGE, number=2, message=OutputMetadata, ) feature_attributions_schema_uri = proto.Field( proto.STRING, number=3, )
class TrainingPipeline(proto.Message): r"""The TrainingPipeline orchestrates tasks associated with training a Model. It always executes the training task, and optionally may also export data from AI Platform's Dataset which becomes the training input, ``upload`` the Model to AI Platform, and evaluate the Model. Attributes: name (str): Output only. Resource name of the TrainingPipeline. display_name (str): Required. The user-defined name of this TrainingPipeline. input_data_config (google.cloud.aiplatform_v1beta1.types.InputDataConfig): Specifies AI Platform owned input data that may be used for training the Model. The TrainingPipeline's ``training_task_definition`` should make clear whether this config is used and if there are any special requirements on how it should be filled. If nothing about this config is mentioned in the ``training_task_definition``, then it should be assumed that the TrainingPipeline does not depend on this configuration. training_task_definition (str): Required. A Google Cloud Storage path to the YAML file that defines the training task which is responsible for producing the model artifact, and may also include additional auxiliary work. The definition files that can be used here are found in gs://google-cloud- aiplatform/schema/trainingjob/definition/. Note: The URI given on output will be immutable and probably different, including the URI scheme, than the one given on input. The output URI will point to a location where the user only has a read access. training_task_inputs (google.protobuf.struct_pb2.Value): Required. The training task's parameter(s), as specified in the ``training_task_definition``'s ``inputs``. training_task_metadata (google.protobuf.struct_pb2.Value): Output only. The metadata information as specified in the ``training_task_definition``'s ``metadata``. This metadata is an auxiliary runtime and final information about the training task. 
While the pipeline is running this information is populated only at a best effort basis. Only present if the pipeline's ``training_task_definition`` contains ``metadata`` object. model_to_upload (google.cloud.aiplatform_v1beta1.types.Model): Describes the Model that may be uploaded (via ``ModelService.UploadModel``) by this TrainingPipeline. The TrainingPipeline's ``training_task_definition`` should make clear whether this Model description should be populated, and if there are any special requirements regarding how it should be filled. If nothing is mentioned in the ``training_task_definition``, then it should be assumed that this field should not be filled and the training task either uploads the Model without a need of this information, or that training task does not support uploading a Model as part of the pipeline. When the Pipeline's state becomes ``PIPELINE_STATE_SUCCEEDED`` and the trained Model had been uploaded into AI Platform, then the model_to_upload's resource ``name`` is populated. The Model is always uploaded into the Project and Location in which this pipeline is. state (google.cloud.aiplatform_v1beta1.types.PipelineState): Output only. The detailed state of the pipeline. error (google.rpc.status_pb2.Status): Output only. Only populated when the pipeline's state is ``PIPELINE_STATE_FAILED`` or ``PIPELINE_STATE_CANCELLED``. create_time (google.protobuf.timestamp_pb2.Timestamp): Output only. Time when the TrainingPipeline was created. start_time (google.protobuf.timestamp_pb2.Timestamp): Output only. Time when the TrainingPipeline for the first time entered the ``PIPELINE_STATE_RUNNING`` state. end_time (google.protobuf.timestamp_pb2.Timestamp): Output only. Time when the TrainingPipeline entered any of the following states: ``PIPELINE_STATE_SUCCEEDED``, ``PIPELINE_STATE_FAILED``, ``PIPELINE_STATE_CANCELLED``. update_time (google.protobuf.timestamp_pb2.Timestamp): Output only. Time when the TrainingPipeline was most recently updated. 
labels (Sequence[google.cloud.aiplatform_v1beta1.types.TrainingPipeline.LabelsEntry]): The labels with user-defined metadata to organize TrainingPipelines. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels. encryption_spec (google.cloud.aiplatform_v1beta1.types.EncryptionSpec): Customer-managed encryption key spec for a TrainingPipeline. If set, this TrainingPipeline will be secured by this key. Note: Model trained by this TrainingPipeline is also secured by this key if ``model_to_upload`` is not set separately. """ name = proto.Field(proto.STRING, number=1) display_name = proto.Field(proto.STRING, number=2) input_data_config = proto.Field( proto.MESSAGE, number=3, message="InputDataConfig", ) training_task_definition = proto.Field(proto.STRING, number=4) training_task_inputs = proto.Field( proto.MESSAGE, number=5, message=struct.Value, ) training_task_metadata = proto.Field( proto.MESSAGE, number=6, message=struct.Value, ) model_to_upload = proto.Field( proto.MESSAGE, number=7, message=model.Model, ) state = proto.Field( proto.ENUM, number=9, enum=pipeline_state.PipelineState, ) error = proto.Field( proto.MESSAGE, number=10, message=status.Status, ) create_time = proto.Field( proto.MESSAGE, number=11, message=timestamp.Timestamp, ) start_time = proto.Field( proto.MESSAGE, number=12, message=timestamp.Timestamp, ) end_time = proto.Field( proto.MESSAGE, number=13, message=timestamp.Timestamp, ) update_time = proto.Field( proto.MESSAGE, number=14, message=timestamp.Timestamp, ) labels = proto.MapField(proto.STRING, proto.STRING, number=15) encryption_spec = proto.Field( proto.MESSAGE, number=18, message=gca_encryption_spec.EncryptionSpec, )
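The lifecycle described above (``end_time`` is populated only when the pipeline reaches a terminal state; ``error`` only for failed or cancelled runs) can be summarized in a short sketch. The constants mirror the ``PipelineState`` names referenced in the docstring; the helper functions are illustrative, not library code.

```python
# Hedged sketch of the TrainingPipeline lifecycle (not library code):
# end_time is populated only once the pipeline enters a terminal state,
# and error only for FAILED or CANCELLED.
TERMINAL_STATES = {
    "PIPELINE_STATE_SUCCEEDED",
    "PIPELINE_STATE_FAILED",
    "PIPELINE_STATE_CANCELLED",
}
ERROR_STATES = {"PIPELINE_STATE_FAILED", "PIPELINE_STATE_CANCELLED"}

def expects_end_time(state):
    return state in TERMINAL_STATES

def expects_error(state):
    return state in ERROR_STATES
```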