Class: Google::Cloud::Bigquery::QueryJob
- Defined in:
 - lib/google/cloud/bigquery/query_job.rb
 
Overview
QueryJob
A Job subclass representing a query operation that may be performed on a Table. A QueryJob instance is created when you call Project#query_job or Dataset#query_job.
Defined Under Namespace
Classes: Stage
Attributes
- #data(token: nil, max: nil, start: nil) ⇒ Google::Cloud::Bigquery::Data (also: #query_results)
  Retrieves the query results for the job.
- #encryption ⇒ Google::Cloud::Bigquery::EncryptionConfiguration
  The encryption configuration of the destination table.
- #time_partitioning? ⇒ Boolean
  Checks if the destination table will be time-partitioned.
- #time_partitioning_expiration ⇒ Integer?
  The expiration for the destination table partitions, if any, in seconds.
- #time_partitioning_field ⇒ String?
  The field on which the destination table will be partitioned, if any.
- #time_partitioning_require_filter? ⇒ Boolean
  If true, queries over the destination table must specify a partition filter that can be used for partition elimination.
- #time_partitioning_type ⇒ String?
  The period for which the destination table will be partitioned, if any.
- #wait_until_done! ⇒ Object
  Refreshes the job until the job is DONE.
Instance Method Summary
- #batch? ⇒ Boolean
  Checks if the priority for the query is BATCH.
- #bytes_processed ⇒ Integer?
  The number of bytes processed by the query.
- #cache? ⇒ Boolean
  Checks if the query job looks for an existing result in the query cache.
- #cache_hit? ⇒ Boolean
  Checks if the query results are from the query cache.
- #ddl_operation_performed ⇒ String?
  The DDL operation performed, possibly dependent on the pre-existence of the DDL target.
- #ddl_target_table ⇒ Google::Cloud::Bigquery::Table?
  The DDL target table, in reference state.
- #destination ⇒ Table
  The table in which the query results are stored.
- #flatten? ⇒ Boolean
  Checks if the query job flattens nested and repeated fields in the query results.
- #interactive? ⇒ Boolean
  Checks if the priority for the query is INTERACTIVE.
- #large_results? ⇒ Boolean
  Checks if the query job allows arbitrarily large results at a slight cost to performance.
- #legacy_sql? ⇒ Boolean
  Checks if the query job is using legacy SQL.
- #maximum_billing_tier ⇒ Integer?
  Limits the billing tier for this job.
- #maximum_bytes_billed ⇒ Integer?
  Limits the bytes billed for this job.
- #query_plan ⇒ Array<Google::Cloud::Bigquery::QueryJob::Stage>?
  Describes the execution plan for the query.
- #standard_sql? ⇒ Boolean
  Checks if the query job is using standard SQL.
- #statement_type ⇒ String?
  The type of query statement, if valid.
- #udfs ⇒ Array<String>
  The user-defined function resources used in the query.
Methods inherited from Job
#cancel, #configuration, #created_at, #done?, #ended_at, #error, #errors, #failed?, #job_id, #labels, #location, #pending?, #project_id, #reload!, #rerun!, #running?, #started_at, #state, #statistics, #status, #user_email
Instance Method Details
#batch? ⇒ Boolean
Checks if the priority for the query is BATCH.
# File 'lib/google/cloud/bigquery/query_job.rb', line 58
def batch?
  val = @gapi.configuration.query.priority
  val == "BATCH"
end
  
#bytes_processed ⇒ Integer?
The number of bytes processed by the query.
# File 'lib/google/cloud/bigquery/query_job.rb', line 160
def bytes_processed
  Integer @gapi.statistics.query.total_bytes_processed
rescue StandardError
  nil
end
  
#cache? ⇒ Boolean
Checks if the query job looks for an existing result in the query cache. For more information, see Query Caching.
# File 'lib/google/cloud/bigquery/query_job.rb', line 96
def cache?
  val = @gapi.configuration.query.use_query_cache
  return false if val.nil?
  val
end
  
#cache_hit? ⇒ Boolean
Checks if the query results are from the query cache.
# File 'lib/google/cloud/bigquery/query_job.rb', line 150
def cache_hit?
  return false unless @gapi.statistics.query
  @gapi.statistics.query.cache_hit
end
  
#data(token: nil, max: nil, start: nil) ⇒ Google::Cloud::Bigquery::Data Also known as: query_results
Retrieves the query results for the job.
# File 'lib/google/cloud/bigquery/query_job.rb', line 452
def data token: nil, max: nil, start: nil
  return nil unless done?
  ensure_schema!

  options = { token: token, max: max, start: start }
  data_hash = service.list_tabledata \
    destination_table_dataset_id,
    destination_table_table_id,
    options
  Data.from_gapi_json data_hash, destination_table_gapi, service
end
  
#ddl_operation_performed ⇒ String?
The DDL operation performed, possibly dependent on the pre-existence of the DDL target. (See #ddl_target_table.) Possible values (new values might be added in the future):
- "CREATE": The query created the DDL target.
- "SKIP": No-op. Example cases: the query is CREATE TABLE IF NOT EXISTS while the table already exists, or the query is DROP TABLE IF EXISTS while the table does not exist.
- "REPLACE": The query replaced the DDL target. Example case: the query is CREATE OR REPLACE TABLE, and the table already exists.
- "DROP": The query deleted the DDL target.

# File 'lib/google/cloud/bigquery/query_job.rb', line 235
def ddl_operation_performed
  return nil unless @gapi.statistics.query
  @gapi.statistics.query.ddl_operation_performed
end
  
#ddl_target_table ⇒ Google::Cloud::Bigquery::Table?
The DDL target table, in reference state. (See Table#reference?.)
Present only for CREATE/DROP TABLE/VIEW queries. (See
#statement_type.)
# File 'lib/google/cloud/bigquery/query_job.rb', line 248
def ddl_target_table
  return nil unless @gapi.statistics.query
  ensure_service!
  table = @gapi.statistics.query.ddl_target_table
  return nil unless table
  Google::Cloud::Bigquery::Table.new_reference_from_gapi table, service
end
  
#destination ⇒ Table
The table in which the query results are stored.
# File 'lib/google/cloud/bigquery/query_job.rb', line 261
def destination
  table = @gapi.configuration.query.destination_table
  return nil unless table
  retrieve_table table.project_id, table.dataset_id, table.table_id
end
  
#encryption ⇒ Google::Cloud::Bigquery::EncryptionConfiguration
The encryption configuration of the destination table.

# File 'lib/google/cloud/bigquery/query_job.rb', line 315
def encryption
  EncryptionConfiguration.from_gapi(
    @gapi.configuration.query.destination_encryption_configuration
  )
end
  
#flatten? ⇒ Boolean
Checks if the query job flattens nested and repeated fields in the query results. The default is true. If the value is false, #large_results? should return true.

# File 'lib/google/cloud/bigquery/query_job.rb', line 110
def flatten?
  val = @gapi.configuration.query.flatten_results
  return true if val.nil?
  val
end
  
#interactive? ⇒ Boolean
Checks if the priority for the query is INTERACTIVE.
# File 'lib/google/cloud/bigquery/query_job.rb', line 69
def interactive?
  val = @gapi.configuration.query.priority
  return true if val.nil?
  val == "INTERACTIVE"
end
  
#large_results? ⇒ Boolean
Checks if the query job allows arbitrarily large results at a slight cost to performance.

# File 'lib/google/cloud/bigquery/query_job.rb', line 82
def large_results?
  val = @gapi.configuration.query.allow_large_results
  return false if val.nil?
  val
end
  
#legacy_sql? ⇒ Boolean
Checks if the query job is using legacy SQL.

# File 'lib/google/cloud/bigquery/query_job.rb', line 274
def legacy_sql?
  val = @gapi.configuration.query.use_legacy_sql
  return true if val.nil?
  val
end
  
#maximum_billing_tier ⇒ Integer?
Limits the billing tier for this job. Queries that have resource usage beyond this tier will raise (without incurring a charge). If unspecified, this will be set to your project default. For more information, see High-Compute queries.
# File 'lib/google/cloud/bigquery/query_job.rb', line 126
def maximum_billing_tier
  @gapi.configuration.query.maximum_billing_tier
end
  
#maximum_bytes_billed ⇒ Integer?
Limits the bytes billed for this job. Queries that will have bytes
billed beyond this limit will raise (without incurring a charge). If
nil, this will be set to your project default.
# File 'lib/google/cloud/bigquery/query_job.rb', line 138
def maximum_bytes_billed
  Integer @gapi.configuration.query.maximum_bytes_billed
rescue StandardError
  nil
end
  
#query_plan ⇒ Array<Google::Cloud::Bigquery::QueryJob::Stage>?
Describes the execution plan for the query.
# File 'lib/google/cloud/bigquery/query_job.rb', line 191
def query_plan
  return nil unless @gapi.statistics.query &&
                    @gapi.statistics.query.query_plan
  Array(@gapi.statistics.query.query_plan).map do |stage|
    Stage.from_gapi stage
  end
end
  
#standard_sql? ⇒ Boolean
Checks if the query job is using standard SQL.

# File 'lib/google/cloud/bigquery/query_job.rb', line 285
def standard_sql?
  !legacy_sql?
end
  
#statement_type ⇒ String?
The type of query statement, if valid. Possible values (new values might be added in the future):
- "SELECT": SELECT query.
- "INSERT": INSERT query; see https://cloud.google.com/bigquery/docs/reference/standard-sql/data-manipulation-language
- "UPDATE": UPDATE query; see https://cloud.google.com/bigquery/docs/reference/standard-sql/data-manipulation-language
- "DELETE": DELETE query; see https://cloud.google.com/bigquery/docs/reference/standard-sql/data-manipulation-language
- "CREATE_TABLE": CREATE [OR REPLACE] TABLE without AS SELECT.
- "CREATE_TABLE_AS_SELECT": CREATE [OR REPLACE] TABLE ... AS SELECT.
- "DROP_TABLE": DROP TABLE query.
- "CREATE_VIEW": CREATE [OR REPLACE] VIEW ... AS SELECT ....
- "DROP_VIEW": DROP VIEW query.

# File 'lib/google/cloud/bigquery/query_job.rb', line 215
def statement_type
  return nil unless @gapi.statistics.query
  @gapi.statistics.query.statement_type
end
  
#time_partitioning? ⇒ Boolean
Checks if the destination table will be time-partitioned. See Partitioned Tables.

# File 'lib/google/cloud/bigquery/query_job.rb', line 330
def time_partitioning?
  !@gapi.configuration.query.time_partitioning.nil?
end
  
#time_partitioning_expiration ⇒ Integer?
The expiration for the destination table partitions, if any, in seconds. See Partitioned Tables.
# File 'lib/google/cloud/bigquery/query_job.rb', line 374
def time_partitioning_expiration
  tp = @gapi.configuration.query.time_partitioning
  tp.expiration_ms / 1_000 if tp && !tp.expiration_ms.nil?
end
  
#time_partitioning_field ⇒ String?
The field on which the destination table will be partitioned, if any.
If not set, the destination table will be partitioned by pseudo column
_PARTITIONTIME; if set, the table will be partitioned by this field.
See Partitioned Tables.
# File 'lib/google/cloud/bigquery/query_job.rb', line 359
def time_partitioning_field
  return nil unless time_partitioning?
  @gapi.configuration.query.time_partitioning.field
end
  
#time_partitioning_require_filter? ⇒ Boolean
If true, queries over the destination table must specify a partition filter that can be used for partition elimination. See Partitioned Tables.

# File 'lib/google/cloud/bigquery/query_job.rb', line 390
def time_partitioning_require_filter?
  tp = @gapi.configuration.query.time_partitioning
  return false if tp.nil? || tp.require_partition_filter.nil?
  tp.require_partition_filter
end
  
#time_partitioning_type ⇒ String?
The period for which the destination table will be partitioned, if any. See Partitioned Tables.
# File 'lib/google/cloud/bigquery/query_job.rb', line 343
def time_partitioning_type
  @gapi.configuration.query.time_partitioning.type if time_partitioning?
end
  
#udfs ⇒ Array<String>
The user-defined function resources used in the query. May be either a
code resource to load from a Google Cloud Storage URI
(gs://bucket/path), or an inline resource that contains code for a
user-defined function (UDF). Providing an inline code resource is
equivalent to providing a URI for a file containing the same code. See
User-Defined Functions.
# File 'lib/google/cloud/bigquery/query_job.rb', line 300
def udfs
  udfs_gapi = @gapi.configuration.query.user_defined_function_resources
  return nil unless udfs_gapi
  Array(udfs_gapi).map do |udf|
    udf.inline_code || udf.resource_uri
  end
end
  
#wait_until_done! ⇒ Object
Refreshes the job until the job is DONE.
The delay between refreshes will incrementally increase.
# File 'lib/google/cloud/bigquery/query_job.rb', line 411
def wait_until_done!
  return if done?
  ensure_service!
  loop do
    query_results_gapi = service.job_query_results \
      job_id, location: location, max: 0
    if query_results_gapi.job_complete
      @destination_schema_gapi = query_results_gapi.schema
      break
    end
  end
  reload!
end