pybigquery

SQLAlchemy dialect and API client for BigQuery.

Usage

SQLAlchemy

from sqlalchemy import func, select
from sqlalchemy.engine import create_engine
from sqlalchemy.schema import MetaData, Table
engine = create_engine('bigquery://project')
table = Table('dataset.table', MetaData(bind=engine), autoload=True)
print(select([func.count('*')], from_obj=table).scalar())

API Client

from pybigquery.api import ApiClient

api_client = ApiClient()
sqlstr = 'SELECT 17'  # any standard-SQL query string
print(api_client.dry_run_query(query=sqlstr).total_bytes_processed)

Project

The project in bigquery://project is used to instantiate the BigQuery client with that project ID. To infer the project from the environment instead, use bigquery:// without a project.
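For example (my-project is a placeholder project ID):

# Connect with an explicit project
engine = create_engine('bigquery://my-project')

# Let the client infer the project from the environment
engine = create_engine('bigquery://')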

Authentication

Follow the Google Cloud library guide for authentication. Alternatively, you can provide the path to a service account JSON file in create_engine():

engine = create_engine('bigquery://', credentials_path='/path/to/keyfile.json')
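If the key's contents are already in memory, credentials_info (see Connection String Parameters below) accepts the parsed service-account JSON instead of a path; a minimal sketch, assuming a standard key file:

import json

with open('/path/to/keyfile.json') as f:
    credentials_info = json.load(f)
engine = create_engine('bigquery://', credentials_info=credentials_info)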

Location

To specify the location of your datasets, pass location to create_engine():

engine = create_engine('bigquery://project', location="asia-northeast1")

Table names

To query tables from non-default projects or datasets, use the following format for the SQLAlchemy schema name: [project.]dataset, e.g.:

metadata = MetaData(bind=engine)

# If neither the dataset nor the project is the default
sample_table_1 = Table('natality', metadata, schema='bigquery-public-data.samples', autoload=True)
# If just the dataset is not the default
sample_table_2 = Table('natality', metadata, schema='samples', autoload=True)
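These tables can then be queried like any other; a sketch reusing select and func from the SQLAlchemy example above:

# COUNT(*) over the public natality table
print(select([func.count()]).select_from(sample_table_1).scalar())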

Batch size

arraysize sets the batch size used when fetching results; it defaults to 5000. To change it, pass arraysize to create_engine():

engine = create_engine('bigquery://project', arraysize=1000)

Adding a Default Dataset

If you want to have the Client use a default dataset, specify it as the "database" portion of the connection string.

engine = create_engine('bigquery://project/dataset')

When using a default dataset, don't include the dataset name in the table name, e.g.:

table = Table('table_name', MetaData(bind=engine), autoload=True)

Note that specifying a default dataset doesn't restrict execution of queries to that particular dataset when using raw queries, e.g.:

# Set default dataset to dataset_a
engine = create_engine('bigquery://project/dataset_a')

# This will still execute and return rows from dataset_b
engine.execute('SELECT * FROM dataset_b.table').fetchall()

Connection String Parameters

There are many situations where you can't call create_engine directly, such as when using tools like Flask-SQLAlchemy. For these cases, or when you want the Client to have a default_query_job_config, you can pass many arguments in the query portion of the connection string.

The credentials_path, credentials_info, location, and arraysize parameters are used by this library; the rest are used to create a QueryJobConfig.

If you use a query string, prefer three slashes: 'bigquery:///?a=b' is parsed reliably, while 'bigquery://?a=b' may be interpreted as having a "database" of ?a=b, depending on the system parsing the connection string.
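For example, with three slashes the query string is always treated as parameters rather than a dataset:

# No default dataset; arraysize is consumed by this library
engine = create_engine('bigquery:///?arraysize=1000')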

Here are examples of all the supported arguments. Any not listed are either for legacy SQL (which this library doesn't support) or are too complex and not implemented.

engine = create_engine(
    'bigquery://some-project/some-dataset' '?'
    'credentials_path=/some/path/to.json' '&'
    'location=some-location' '&'
    'arraysize=1000' '&'
    'clustering_fields=a,b,c' '&'
    'create_disposition=CREATE_IF_NEEDED' '&'
    'destination=different-project.different-dataset.table' '&'
    'destination_encryption_configuration=some-configuration' '&'
    'dry_run=true' '&'
    'labels=a:b,c:d' '&'
    'maximum_bytes_billed=1000' '&'
    'priority=INTERACTIVE' '&'
    'schema_update_options=ALLOW_FIELD_ADDITION,ALLOW_FIELD_RELAXATION' '&'
    'use_query_cache=true' '&'
    'write_disposition=WRITE_APPEND'
)
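As a concrete case of not calling create_engine yourself, the same URL works as Flask-SQLAlchemy configuration; a minimal sketch, assuming the flask and flask_sqlalchemy packages:

from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
# Any of the connection-string parameters above may appear in the URI
app.config['SQLALCHEMY_DATABASE_URI'] = (
    'bigquery://some-project/some-dataset?maximum_bytes_billed=1000'
)
db = SQLAlchemy(app)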

Creating tables

To add metadata to a table:

table = Table('mytable',
              ...,
              bigquery_description='my table description',
              bigquery_friendly_name='my table friendly name',
              bigquery_partition_by='partitioning_field',
              bigquery_cluster_by=['cluster_field_1', 'cluster_field_2'],
              bigquery_require_partition_filter=False)

To add metadata to a column:

Column('mycolumn', doc='my column description')
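Putting both together, a sketch that creates a partitioned, clustered table (the URL, table, and field names here are assumptions):

from sqlalchemy import Column, DateTime, Integer, MetaData, String, Table
from sqlalchemy.engine import create_engine

engine = create_engine('bigquery://project/dataset')
metadata = MetaData(bind=engine)

clicks = Table('clicks', metadata,
               Column('ts', DateTime, doc='event timestamp'),
               Column('user_id', String, doc='who clicked'),
               Column('clicks', Integer, doc='click count'),
               bigquery_description='click events by user',
               bigquery_partition_by='ts',
               bigquery_cluster_by=['user_id'])

metadata.create_all()  # emits CREATE TABLE with the BigQuery options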

Requirements

Install using pip:

pip install pybigquery

Testing

Load sample tables:

./scripts/load_test_data.sh

This will create a dataset test_pybigquery with tables named sample_one_row and sample.

Set up an environment and run tests:

python3 -m venv .env
source .env/bin/activate
pip install -r dev_requirements.txt
pytest
