
Google Cloud Python Client

Python idiomatic client for Google Cloud Platform services.


This client supports the Google Cloud Platform services described in the sections below.

If you need support for other Google APIs, check out the Google APIs Python Client library.

Quick Start

$ pip install --upgrade gcloud

Example Applications

  • getting-started-python - A sample and tutorial that demonstrates how to build a complete web application using Cloud Datastore, Cloud Storage, and Cloud Pub/Sub and deploy it to Google App Engine or Google Compute Engine.
  • gcloud-python-expenses-demo - A sample expenses demo using Cloud Datastore and Cloud Storage.

Authentication

With gcloud-python we try to make authentication as painless as possible. Check out the Authentication section in our documentation to learn more. You may also find the authentication document shared by all the gcloud-* libraries to be helpful.

Google Cloud Datastore

Google Cloud Datastore (Datastore API docs) is a fully managed, schemaless database for storing non-relational data. Cloud Datastore automatically scales with your users and supports ACID transactions, high availability of reads and writes, strong consistency for reads and ancestor queries, and eventual consistency for all other queries.
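Conceptually, an entity's key is a path of (kind, id) pairs, and an ancestor query matches every entity whose key path begins with the ancestor's path. A minimal pure-Python sketch of that matching rule (not the Datastore API; `Parent` and the sample entities are invented for illustration):

```python
# Toy model: a key is a tuple of (kind, id) pairs, e.g.
# (('Parent', 1), ('EntityKind', 10)) is a child of (('Parent', 1),).
def is_ancestor(ancestor, key):
    """True if `key`'s path starts with `ancestor`'s path."""
    return key[:len(ancestor)] == ancestor

entities = {
    (('Parent', 1), ('EntityKind', 10)): {'foo': 'bar'},
    (('Parent', 1), ('EntityKind', 11)): {'foo': 'baz'},
    (('Parent', 2), ('EntityKind', 12)): {'foo': 'qux'},
}

def ancestor_query(ancestor):
    """Return entities whose key path begins with `ancestor`."""
    return {k: v for k, v in entities.items() if is_ancestor(ancestor, k)}

results = ancestor_query((('Parent', 1),))
print(sorted(v['foo'] for v in results.values()))  # ['bar', 'baz']
```

Because an ancestor query is a prefix match on a single key path, Datastore can serve it with strong consistency, which is why the paragraph above distinguishes it from other queries.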

See the gcloud-python API datastore documentation to learn how to interact with the Cloud Datastore using this Client Library.

See the official Google Cloud Datastore documentation for more details on how to activate Cloud Datastore for your project.

from gcloud import datastore
client = datastore.Client()

# Create, populate and persist an entity
entity = datastore.Entity(key=client.key('EntityKind'))
entity.update({
    'foo': u'bar',
    'baz': 1337,
    'qux': False,
})
client.put(entity)  # API request

# Then query for entities
query = client.query(kind='EntityKind')
for result in query.fetch():
    print(result)

Google Cloud Storage

Google Cloud Storage (Storage API docs) allows you to store data on Google infrastructure with very high reliability, performance and availability, and can be used to distribute large data objects to users via direct download.

See the gcloud-python API storage documentation to learn how to connect to Cloud Storage using this Client Library.

You need to create a Google Cloud Storage bucket to use this client library. Follow along with the official Google Cloud Storage documentation to learn how to create a bucket.

from gcloud import storage
client = storage.Client()
bucket = client.get_bucket('bucket-id-here')
# Then do other things...
blob = bucket.get_blob('remote/path/to/file.txt')
print(blob.download_as_string())
blob.upload_from_string('New contents!')
blob2 = bucket.blob('remote/path/storage.txt')
blob2.upload_from_filename(filename='/local/path.txt')

Google Cloud Pub/Sub

Google Cloud Pub/Sub (Pub/Sub API docs) is designed to provide reliable, many-to-many, asynchronous messaging between applications. Publisher applications can send messages to a topic and other applications can subscribe to that topic to receive the messages. By decoupling senders and receivers, Google Cloud Pub/Sub allows developers to communicate between independently written applications.
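The decoupling works because publishers write to a named topic while each subscription receives its own copy of every message published to that topic. A toy in-process sketch of that topic/subscription relationship (not the Pub/Sub API; the subscription names are invented):

```python
class Topic:
    """Fan-out: every subscription attached to the topic sees every message."""
    def __init__(self, name):
        self.name = name
        self.subscriptions = []

    def publish(self, payload, **attrs):
        for sub in self.subscriptions:
            sub.queue.append((payload, attrs))

class Subscription:
    """Each subscription buffers messages independently of the others."""
    def __init__(self, name, topic):
        self.name = name
        self.queue = []
        topic.subscriptions.append(self)

    def pull(self):
        messages, self.queue = self.queue, []
        return messages

topic = Topic('topic_name')
audit = Subscription('audit', topic)
billing = Subscription('billing', topic)

# The publisher knows nothing about who is listening.
topic.publish('this is the message_payload', attr1='value1')

print(audit.pull())    # [('this is the message_payload', {'attr1': 'value1'})]
print(billing.pull())  # same message, delivered independently
```

The real service adds durability, acknowledgement deadlines, and push delivery on top of this basic shape.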

See the gcloud-python API Pub/Sub documentation to learn how to connect to Cloud Pub/Sub using this Client Library.

To get started with this API, you'll need to create a topic:

from gcloud import pubsub

client = pubsub.Client()
topic = client.topic('topic_name')
topic.create()

topic.publish('this is the message_payload',
              attr1='value1', attr2='value2')

Google BigQuery

Querying massive datasets can be time consuming and expensive without the right hardware and infrastructure. Google BigQuery (BigQuery API docs) solves this problem by enabling super-fast, SQL-like queries against append-only tables, using the processing power of Google's infrastructure.
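To make the append-only model concrete: rows are only ever added, and a query scans and filters them. A pure-Python sketch of the SELECT ... WHERE shape used in the synchronous-query example later in this section (not BigQuery itself; the sample rows are invented):

```python
# Append-only table: rows are added, never updated in place.
usa_names = []
usa_names.append({'name': 'Mary', 'state': 'TX'})
usa_names.append({'name': 'John', 'state': 'CA'})
usa_names.append({'name': 'Anna', 'state': 'TX'})

# SELECT name ... WHERE state = "TX" LIMIT 100, as a scan-and-filter:
result = [row['name'] for row in usa_names if row['state'] == 'TX'][:100]
print(result)  # ['Mary', 'Anna']
```

BigQuery performs this kind of scan in parallel across Google's infrastructure, which is what makes it fast on massive tables.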

This package is still being implemented, but it is almost complete!

Load data from CSV

from gcloud import bigquery
from gcloud.bigquery import SchemaField

client = bigquery.Client()

dataset = client.dataset('dataset_name')
dataset.create()  # API request

SCHEMA = [
    SchemaField('full_name', 'STRING', mode='required'),
    SchemaField('age', 'INTEGER', mode='required'),
]
table = dataset.table('table_name', SCHEMA)
table.create()

with open('csv_file', 'rb') as readable:
    table.upload_from_file(
        readable, source_format='CSV', skip_leading_rows=1)

Perform a synchronous query

# Reuses the `client` created in the CSV example above.
QUERY = (
    'SELECT name FROM [bigquery-public-data:usa_names.usa_1910_2013] '
    'WHERE state = "TX"')
TIMEOUT_MS = 1000  # wait up to one second for the query to finish
query = client.run_sync_query('%s LIMIT 100' % QUERY)
query.timeout_ms = TIMEOUT_MS
query.run()  # API request

for row in query.rows:
    print(row)

See the gcloud-python API BigQuery documentation to learn how to connect to BigQuery using this Client Library.

Google Cloud Resource Manager

The Cloud Resource Manager API (Resource Manager API docs) provides methods that you can use to programmatically manage your projects in the Google Cloud Platform.

See the gcloud-python API Resource Manager documentation to learn how to manage projects using this Client Library.

Google Stackdriver Logging

Stackdriver Logging API (Logging API docs) allows you to store, search, analyze, monitor, and alert on log data and events from Google Cloud Platform.

from gcloud import logging
client = logging.Client()
logger = client.logger('log_name')
logger.log_text("A simple entry")  # API call

Example of fetching entries:

entries, token = logger.list_entries()
for entry in entries:
    print(entry.payload)

See the gcloud-python API logging documentation to learn how to connect to Stackdriver Logging using this Client Library.

Contributing

Contributions to this library are always welcome and highly encouraged.

See CONTRIBUTING for more information on how to get started.

License

Apache 2.0 - See LICENSE for more information.
