bigrquery 1.5.1

bigrquery 1.5.0

Major changes

Significant DBI improvements

Significant dbplyr improvements

Minor improvements and bug fixes

bigrquery 1.4.2

bigrquery 1.4.1

bigrquery 1.4.0

bigrquery 1.3.2

bigrquery 1.3.1

bigrquery 1.3.0

bigrquery 1.2.0

Auth from gargle

bigrquery’s auth functionality now comes from the gargle package, which provides R infrastructure for working with Google APIs in general. The same transition is underway in several other packages, such as googledrive. This makes user interfaces more consistent across packages and makes two new token flows available in bigrquery.
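For example, a minimal sketch of the new interactive flow (the email shown is a placeholder, not a required argument):

```r
library(bigrquery)

# The first call triggers the browser-based OAuth dance; later calls
# reuse the token gargle has cached for this identity.
bq_auth(email = "jane.analyst@example.com")
```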

Where to learn more:

Changes that a user will notice

Temporary files are now deleted after table download. (@meztez, #343)

OAuth2 tokens are now cached at the user level, by default, instead of in .httr-oauth in the current project. The default OAuth app has also changed. This means you will need to re-authorize bigrquery (i.e. get a new token). You may want to delete any vestigial .httr-oauth files lying around your bigrquery projects.
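If you want to tidy up manually, a minimal sketch (assuming a leftover .httr-oauth sits in the project root):

```r
# Remove a vestigial project-level token cache, if present; the new
# user-level cache managed by gargle is unaffected.
if (file.exists(".httr-oauth")) {
  unlink(".httr-oauth")
}

# Re-authorize to obtain a fresh token under the new OAuth app.
bigrquery::bq_auth()
```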

The OAuth2 token key-value store now incorporates the associated Google user when indexing, which makes it easier to switch between Google identities.

bq_user() is a new function that reveals the email of the user associated with the current token.
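A small sketch of how these two pieces fit together (both emails, and the printed output, are hypothetical):

```r
library(bigrquery)

# Tokens are cached per Google identity, so switching is just a matter
# of asking for a different email.
bq_auth(email = "jane.work@example.com")
bq_user()
#> [1] "jane.work@example.com"

bq_auth(email = "jane.personal@example.com")
bq_user()
#> [1] "jane.personal@example.com"
```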

If you previously used set_service_token() with a service account token, it still works, but you’ll get a deprecation warning. Switch over to bq_auth(path = "/path/to/your/service-account.json"). Several other functions are similarly soft-deprecated.
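The migration looks roughly like this (the path is the placeholder from above):

```r
# Soft-deprecated: still works, but emits a deprecation warning.
# set_service_token("/path/to/your/service-account.json")

# Preferred going forward:
bigrquery::bq_auth(path = "/path/to/your/service-account.json")
```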

Dependency changes

R 3.1 is no longer explicitly supported or tested. Our general practice is to support the current release (3.6), devel, and the 4 previous versions of R (3.5, 3.4, 3.3, 3.2).

gargle and rlang are newly Imported.

bigrquery 1.1.1

bigrquery 1.1.0

Improved type support

SQL translation

Minor bug fixes and improvements

bigrquery 1.0.0

Improved downloads

The system for downloading data from BigQuery into R has been rewritten from the ground up to give considerable improvements in performance and flexibility.

I can now download the first million rows of publicdata.samples.natality in about a minute. This data frame is about 170 MB in BigQuery and 140 MB in R; a minute to download this much data seems reasonable to me. The bottleneck for loading BigQuery data is now parsing BigQuery’s JSON format. I don’t see any obvious way to make this faster, as I’m already using the fastest C++ JSON parser, RapidJSON. If this is still too slow for you (i.e. you’re downloading GBs of data), see ?bq_table_download for an alternative approach.
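For reference, a sketch of that download (the n_max argument name is an assumption and may differ between versions; check ?bq_table_download):

```r
library(bigrquery)

# Assumes bq_auth() has already been called.
# Download the first million rows of the public natality sample.
tb <- bq_table("publicdata", "samples", "natality")
natality <- bq_table_download(tb, n_max = 1e6)
```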

New features

dplyr

DBI

Low-level API

The low-level API has been completely overhauled to make it easier to use. The primary motivation was to make bigrquery development more enjoyable for me, but it should also help you when you need to go beyond the features provided by the higher-level DBI and dplyr interfaces. The old API has been soft-deprecated: it will continue to work, but no further development will occur (including bug fixes). It will be formally deprecated in the next version, and removed in the version after that.
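To give a flavour of the new bq_*() functions, here is a hedged sketch (the project and dataset names are placeholders):

```r
library(bigrquery)

# bq_dataset() and bq_table() create lightweight references;
# the bq_table_*() verbs perform the actual API calls.
ds <- bq_dataset("my-project", "my_dataset")
tb <- bq_table(ds, "mtcars")

if (!bq_table_exists(tb)) {
  bq_table_upload(tb, values = mtcars)
}
bq_table_fields(tb)
```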

Bug fixes and minor improvements

dplyr

Low-level

Version 0.4.1

Version 0.4.0

New features

Bug fixes and minor improvements

Version 0.3.0

Version 0.2.0