
File Path Options

For GCS and Amazon S3, you can specify either a single file or a folder containing multiple files as input.

Single file

To send a single file, specify the full path of a JSON, CSV, or Parquet file inside the bucket.


path='<folder1>/file.csv'
path='<folder1>/file.parquet'
path='file.csv'
  • Files cannot be compressed.

  • Files must have a csv, json, or parquet extension.

  • CSV files must include a header line.

Note:

  • JSON files must be in Newline Delimited JSON format, with a .json extension.

  • Column headers in Parquet files must not contain spaces.
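The single-file rules above can be sketched as a small validation check. This is illustrative only: validate_single_file_path is a hypothetical helper, not part of the Telmai API, and the compression check covers only a few common suffixes.

```python
import os

ALLOWED_EXTENSIONS = {".csv", ".json", ".parquet"}
COMPRESSED_SUFFIXES = (".gz", ".zip", ".bz2")  # assumption: common compressed forms

def validate_single_file_path(path: str) -> None:
    """Raise ValueError if a single-file path breaks the documented rules."""
    # Compressed files are not supported.
    if path.endswith(COMPRESSED_SUFFIXES):
        raise ValueError(f"Compressed files are not supported: {path}")
    # The file must have a csv, json, or parquet extension.
    _, ext = os.path.splitext(path)
    if ext.lower() not in ALLOWED_EXTENSIONS:
        raise ValueError(f"Unsupported extension {ext!r}; expected csv, json, or parquet")

validate_single_file_path("<folder1>/file.csv")  # passes silently
```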

Folder

To use all files in a folder as input, specify the path of the folder inside the bucket, and ensure that:

  • The path does not have a trailing slash.

  • All files in the folder have the same extension: csv, json, or parquet.

  • All CSV files have the same header line.

If folder2 contains all input files, then


path='<folder1>/<folder2>'
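The folder rules can likewise be sketched as a check. This is a hypothetical helper, not part of the Telmai API; in practice the file names would come from listing the bucket prefix.

```python
import os

ALLOWED_EXTENSIONS = {".csv", ".json", ".parquet"}

def validate_folder_input(path: str, filenames: list[str]) -> None:
    """Raise ValueError if a folder path or its files break the documented rules."""
    # The folder path must not have a trailing slash.
    if path.endswith("/"):
        raise ValueError(f"Folder path must not have a trailing slash: {path}")
    # All files must share one supported extension.
    extensions = {os.path.splitext(name)[1].lower() for name in filenames}
    if len(extensions) != 1 or not extensions <= ALLOWED_EXTENSIONS:
        raise ValueError(f"All files must share one supported extension, got {extensions}")

validate_folder_input("<folder1>/<folder2>", ["a.csv", "b.csv"])  # passes silently
```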

Wildcard Support

You can use a * in your file path to match multiple files. To enable wildcard matching, select the "Resolve wildcards (*) to folder names" checkbox.

For example, <folder1>/log*.csv will match paths like <folder1>/log_1.csv or <folder1>/logs.csv.

👉 Important: Currently, only one * is supported in the path.
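The wildcard behavior can be approximated with Python's fnmatch module. This is a sketch of the matching semantics, not Telmai's implementation; match_wildcard is a hypothetical helper that also enforces the single-* limit.

```python
from fnmatch import fnmatch

def match_wildcard(pattern: str, paths: list[str]) -> list[str]:
    """Return the paths matching a pattern containing exactly one *."""
    # Currently only one * is supported in the path.
    if pattern.count("*") != 1:
        raise ValueError("Only one * is supported in the path")
    return [p for p in paths if fnmatch(p, pattern)]

match_wildcard(
    "<folder1>/log*.csv",
    ["<folder1>/log_1.csv", "<folder1>/logs.csv", "<folder1>/data.csv"],
)
# → ['<folder1>/log_1.csv', '<folder1>/logs.csv']
```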
