Accessing Your BigQuery Reports and Data

This document explains how evaluation partners can securely access, download, and analyze their reports stored in Google Cloud Storage (GCS) and BigQuery.

Overview

Every day, data about your eval users’ trades is automatically aggregated into reports. Once the Evaluation Support team provides you with your Service Account Key, you can either download your reports as CSV files from Google Cloud Storage or analyze the datasets directly in BigQuery.

Types of Reports

The following table lists the reports you can access.

While all of the reports are available for you to review in BigQuery, only some of the reports are delivered as a CSV file to your Google Cloud Storage (GCS) bucket. If you want other reports delivered as a CSV file to your GCS bucket, contact Evaluation Support.

Report Name | Description | Available Schemas | Access Method
--- | --- | --- | ---
account-status | Provides account status information for your traders, including associated users, permitted users, permission status, and liquidation status. | demo, live | BigQuery
all-accounts | Provides a list of all of the vendor’s accounts and their details, including ID, name, active status, and creation timestamp. | demo, live | BigQuery
cash-balance | Provides end-of-day cash balances for vendor accounts created before 5:00 pm ET on the trade date. May also include rollover balances with a change of 0 from the prior day for accounts with no current-day activity. | demo, live | BigQuery
cash-history | Provides all balance logs for your accounts that have a balance change on a trading day. | demo, live | CSV via Google Cloud Storage
daily-fills | Provides all fills on a trading day for the vendor’s accounts, including fill timestamp, buy/sell, price, and contract. Similar to fraud-fills, but uses a different schema. | demo, live | CSV via Google Cloud Storage
fraud-fills | Provides all fills on a trading day for the vendor’s accounts, including fill timestamp, buy/sell, price, and contract. | demo, live | BigQuery
position-history | Provides all position logs for your accounts that have trading activity on a trading day. Includes buy/sell fill information such as contract, price, and timestamp. | demo, live | BigQuery
super-perf | A version of the ‘Performance Report’ from dashboards, generated across every account in your organization instead of a single account. Includes the same details as a Performance Report. | demo, live | CSV via Google Cloud Storage
test-counter | Returns the number of active accounts belonging to your org with account names that include the given keyword. | demo | BigQuery
users-with-same-ip | Provides a list of users from the past 3 months with the same IP address. | demo, live | BigQuery

How to Access Your Reports

Prerequisites

Before you can access your BigQuery reports, you’ll need these items.

Report Details

Contact Evaluation Support and provide the following information:

  • Which reports (if any) you want delivered as a CSV file to your GCS bucket
  • What time (including time zone) you want your reports to run

Service Account Key

A Service Account Key is a JSON file that grants your organization access to the GCS bucket and read-only access to the BigQuery dataset. It acts as authentication credentials for your organization.

Contact Evaluation Support for your Service Account Key.

Warning: Treat your JSON key as a password. Do not commit it to source control, share it via email, or share it via a messaging app. Instead, use a secure file transfer or shared secret tooling.

Tip: Store the JSON key in a secrets manager or a secure keystore.
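
The client libraries and the gcloud CLI pick up the key automatically when the GOOGLE_APPLICATION_CREDENTIALS environment variable points at it. If you prefer to load the key explicitly in code, the following is a minimal Python sketch; the path is a placeholder, and it assumes the google-cloud-bigquery package (and its google-auth dependency) is installed:

from google.oauth2 import service_account
from google.cloud import bigquery

# Placeholder path -- point this at the key file stored in your secure location
KEY_PATH = "/path/to/vendor-sa.json"

# Load the key explicitly instead of relying on GOOGLE_APPLICATION_CREDENTIALS
credentials = service_account.Credentials.from_service_account_file(KEY_PATH)

# Staging project shown; use airy-passkey-867 for production
client = bigquery.Client(project="plenary-cascade-781", credentials=credentials)
print(client.project)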

GCS Bucket Information

Contact Evaluation Support to confirm the Google Cloud Storage bucket name where daily reports are stored. The format is:

  • Staging: eval-partner-<evalPartnerName>-devel-<uniqueId>
  • Production: eval-partner-<evalPartnerName>-prod-<uniqueId>
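
Once you have the bucket name, you can confirm that it is reachable and see which report files are present. The following is a minimal Python sketch, assuming the google-cloud-storage package is installed and your Service Account Key is available via GOOGLE_APPLICATION_CREDENTIALS; the bucket name is the placeholder format above, and the project IDs are listed under Project ID below:

from google.cloud import storage

# Production project shown; use plenary-cascade-781 for staging
client = storage.Client(project="airy-passkey-867")

# Replace with the bucket name confirmed by Evaluation Support
bucket_name = "eval-partner-<evalPartnerName>-prod-<uniqueId>"

# List every report object currently in the bucket
for blob in client.list_blobs(bucket_name):
    print(blob.name)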

Dataset Information

Contact Evaluation Support to confirm your BigQuery dataset name where the data is hosted. The format is:

ep_<evalPartnerName>

Project ID

Use the following Project IDs:

  • Staging: plenary-cascade-781
  • Production: airy-passkey-867
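
Together, the project ID, dataset name, and bucket name make up every identifier used in the examples later in this document. As an illustration only (Python f-strings; all angle-bracket values are placeholders you replace with your own):

# Production shown; for staging use plenary-cascade-781 and the -devel- bucket
project_id = "airy-passkey-867"
dataset = f"{project_id}.ep_<evalPartnerName>"              # BigQuery dataset
table = f"{dataset}.<table_name>"                           # fully qualified table used in queries
bucket = "eval-partner-<evalPartnerName>-prod-<uniqueId>"   # GCS bucket that receives CSV reports
print(table)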

Access Your Reports Using Command Line Interface (CLI)

Using the Command Line Interface (CLI) method does not require programming knowledge and is ideal for analysts or operators who want to manually list, download, or analyze data.

To access your reports using the CLI method, follow these steps:

  1. Install the Google Cloud SDK from https://cloud.google.com/sdk/docs/install.

  2. Open Terminal (Mac) or Command Prompt (Windows).

  3. Authenticate with your Service Account Key:

    gcloud auth activate-service-account --key-file=~/keys/vendor-sa.json
  4. Confirm the active account:

    gcloud auth list
  5. Set your active project:

    # For staging
    gcloud config set project plenary-cascade-781
    # For production
    gcloud config set project airy-passkey-867

Examples

List all available reports in your bucket (Production)

    gsutil ls gs://eval-partner-<evalPartnerName>-prod-<uniqueId>/

Download a specific report (Production)

    gsutil cp gs://eval-partner-<evalPartnerName>-prod-<uniqueId>/<reportName>_<vendorName>_<yyyy_MM_dd>.csv .

List all tables in your BigQuery dataset (Production)

    bq ls --project_id=airy-passkey-867 ep_<evalPartnerName>

View a specific table’s schema (Production)

    bq show --project_id=airy-passkey-867 ep_<evalPartnerName>.<table_name>

Note: Replace ep_<evalPartnerName> with your actual dataset name, <table_name> with your actual table name, and use the appropriate project ID for your environment.

Tip: Wrap paths in quotes if they include spaces or special characters.

Access Your Reports Programmatically using SDKs or Client Libraries

Programmatic access is ideal for developers who want to integrate BigQuery reports into their applications or automate downloads.

These libraries allow you to connect directly to GCS or BigQuery from supported languages such as Node.js or Python.

To access your reports using the programmatic access method, follow these steps:

  1. Create a new file (for example, new_file.js for the Node.js examples or new_file.py for the Python examples).
  2. Install the required packages:
    1. Node.js for GCS: npm install @google-cloud/storage
    2. Node.js for BigQuery: npm install @google-cloud/bigquery
    3. Python for GCS: pip install google-cloud-storage
    4. Python for BigQuery: pip install google-cloud-bigquery
  3. Copy the code from Example Commands into your file.
  4. Edit the code as needed (for example, service account key, GCS bucket name, and dataset name).
  5. Save the file.
  6. Open Terminal (Mac) or Command Prompt (Windows) and run the file, for example node new_file.js (or python new_file.py for the Python examples).

Example Commands

Node.js

Download a CSV from GCS (Production)

// CommonJS is used here to match the other examples; run with: node new_file.js
const { Storage } = require('@google-cloud/storage');

// Uses GOOGLE_APPLICATION_CREDENTIALS if set
const storage = new Storage({ projectId: 'airy-passkey-867' });

async function downloadReport() {
  // Replace the placeholders with your bucket and report file names
  const bucketName = 'eval-partner-<evalPartnerName>-prod-<uniqueId>';
  const objectName = 'users-with-same-ip_<vendorName>_2025_08_18.csv';
  const destination = `./${objectName}`;

  await storage.bucket(bucketName).file(objectName).download({ destination });
  console.log(`Downloaded: ${destination}`);
}

downloadReport().catch(console.error);

List all tables in your BigQuery dataset (Production)

const { BigQuery } = require("@google-cloud/bigquery");

const bigquery = new BigQuery({ projectId: 'airy-passkey-867' });

// Replace <evalPartnerName> with your actual partner name.
// The project is already set on the client, so only the dataset name is needed here.
const datasetId = 'ep_<evalPartnerName>';

// Print every table in the dataset along with its schema fields
async function listTables(datasetId) {
  console.log(`Tables in ${datasetId}:`);
  const dataset = bigquery.dataset(datasetId);
  const [tables] = await dataset.getTables();
  for (const table of tables) {
    const [tableData] = await dataset.table(table.id).get();
    console.log("TABLE: ", table.id);
    console.log("FIELDS: ", tableData.metadata.schema.fields);
    console.log("");
  }
}

// Usage: node tables.js
listTables(datasetId).catch(console.error);

Run BigQuery queries and export to CSV (Staging)

const { BigQuery } = require("@google-cloud/bigquery");
const fs = require("fs");

const bigquery = new BigQuery({ projectId: 'plenary-cascade-781' });

// Run a query and either print the rows as a table or write them to a CSV file.
async function runQuery(query, asCsvFile) {
  const [job] = await bigquery.createQueryJob(query);
  const [rows] = await job.getQueryResults();
  if (asCsvFile) {
    writeCSV(rows, asCsvFile);
  } else {
    console.table(rows);
  }
}

// Write query results to a CSV file, using the first row's keys as the header.
function writeCSV(data, csvFileName) {
  if (data.length === 0) {
    console.log("No data to write.");
    return;
  }
  const fieldNames = Object.keys(data[0]);
  const headerLine = fieldNames.join(",");
  const csvLines = [headerLine].concat(
    data.map((row) =>
      fieldNames.map((fieldName) => escapeCSVValue(row[fieldName])).join(",")
    )
  );
  const csvData = csvLines.join("\n");
  fs.writeFile(csvFileName, csvData, (err) => {
    if (err) {
      console.error("Error writing CSV file:", err);
    } else {
      console.log(`Data has been written to ${csvFileName}`);
    }
  });
}

// Quote and escape individual CSV values as needed.
function escapeCSVValue(value) {
  if (value === null || value === undefined) {
    return "";
  }
  if (typeof value === "string") {
    if (value.includes('"')) {
      return '"' + value.replace(/"/g, '""') + '"';
    }
    if (value.includes(",") || /^\s|\s$/.test(value)) {
      return '"' + value + '"';
    }
  }
  if (typeof value === "object") {
    // BigQuery wrapper types (for example, timestamps) expose the raw value on .value
    if (value.value !== undefined) {
      return escapeCSVValue(value.value);
    }
    return JSON.stringify(value);
  }
  return value;
}

// Usage: node query.js "SELECT * FROM \`plenary-cascade-781.ep_<evalPartnerName>.<table_name>\` LIMIT 10" [--csv filename]
// Replace <evalPartnerName> and <table_name> with your actual dataset and table names.
if (process.argv.length !== 3 && process.argv.length !== 5) {
  console.error("node query.js <query> [--csv filename]\n\n--csv: output as CSV instead of table");
  process.exit(1);
} else {
  runQuery(
    process.argv[2],
    process.argv.length > 3 && process.argv[3] === "--csv"
      ? process.argv[4]
      : null
  ).catch(console.error);
}

Python

List all tables in your BigQuery dataset (Staging)

from google.cloud import bigquery

client = bigquery.Client(project="plenary-cascade-781")

# Replace <evalPartnerName> with your actual partner name
dataset_id = "plenary-cascade-781.ep_<evalPartnerName>"

# List and print every table in the dataset
tables = client.list_tables(dataset_id)
print(f"Tables in {dataset_id}:")
for table in tables:
    print(table.table_id)
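
Run a query and print the results (Staging)

The listing above only enumerates tables. To actually run a query from Python, something like the following sketch should work, assuming the same setup as above; the dataset and table names are placeholders to replace with your own:

from google.cloud import bigquery

client = bigquery.Client(project="plenary-cascade-781")

# Placeholder table reference -- substitute your dataset and table names
query = "SELECT * FROM `plenary-cascade-781.ep_<evalPartnerName>.<table_name>` LIMIT 10"

# Run the query, wait for it to finish, and print each row as a dictionary
for row in client.query(query).result():
    print(dict(row.items()))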

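Download a CSV from GCS (Production)

If you would rather pull the CSV reports with Python instead of gsutil, here is a minimal sketch using the google-cloud-storage package; the bucket and file names are placeholders (the file name follows the <reportName>_<vendorName>_<yyyy_MM_dd>.csv pattern):

from google.cloud import storage

client = storage.Client(project="airy-passkey-867")

# Replace the placeholders with your bucket and report file names
bucket_name = "eval-partner-<evalPartnerName>-prod-<uniqueId>"
object_name = "users-with-same-ip_<vendorName>_2025_08_18.csv"

# Download the report to the current directory
client.bucket(bucket_name).blob(object_name).download_to_filename(object_name)
print(f"Downloaded: {object_name}")
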
Troubleshooting

This table shows common issues and their solutions.

Symptom | Possible Cause | Remedy
--- | --- | ---
403: AccessDeniedException | The Service Account doesn’t have access to the specified project or bucket. | 1. Verify that you’re using the correct project ID (Staging: plenary-cascade-781; Production: airy-passkey-867). 2. Confirm your bucket name and dataset name match your environment.
403: SignatureDoesNotMatch | Using an unsupported authentication method or an expired token. | Always authenticate via gcloud auth activate-service-account --key-file=<path> or through a properly configured SDK using your key file.
Dataset or Table Not Found | Using the wrong project or dataset name. | Double-check that your dataset name matches the correct environment (Staging: ep_<evalPartnerName> under plenary-cascade-781; Production: ep_<evalPartnerName> under airy-passkey-867).
File Not Found (404) | The report for that date hasn’t been generated, or the filename format is incorrect. | 1. Run gsutil ls gs://eval-partner-<evalPartnerName>-prod-<uniqueId>/ to list all files. 2. Check for typos in <reportName> or <vendorName>.
gsutil asks to log in interactively | The Service Account credentials are not active in the current shell session. | 1. Re-run gcloud auth activate-service-account --key-file=<path>. 2. Confirm your GOOGLE_APPLICATION_CREDENTIALS variable is set correctly.
Mixing staging and production credentials | Using one Service Account or project ID while referencing resources from another environment. | Make sure the bucket, dataset, and project ID all belong to the same environment. Example pairings: airy-passkey-867 with eval-partner-<evalPartnerName>-prod-*, or plenary-cascade-781 with eval-partner-<evalPartnerName>-devel-*.
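
If you suspect you are mixing environments, a quick sanity check is to confirm that the bucket and the dataset are both reachable with the same key and project ID. A minimal Python sketch (placeholders as above):

from google.cloud import bigquery, storage

# Production values shown; staging uses plenary-cascade-781 and the -devel- bucket
PROJECT_ID = "airy-passkey-867"
BUCKET = "eval-partner-<evalPartnerName>-prod-<uniqueId>"
DATASET = "ep_<evalPartnerName>"

# Both calls should succeed with the same key; a 403 or 404 on either one
# usually means the bucket, dataset, and project come from different environments.
storage.Client(project=PROJECT_ID).bucket(BUCKET).reload()
bigquery.Client(project=PROJECT_ID).get_dataset(DATASET)
print("Bucket and dataset are both reachable in", PROJECT_ID)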