AWS Redshift

Amazon Redshift is a fully managed, petabyte-scale data warehouse service in the cloud. To connect to Redshift provisioned clusters, the following details are required:

  • Endpoint Prefix:

    • Navigate to Amazon Redshift Clusters -> <Cluster Name> -> General Information -> Endpoint.

    • Retrieve the value, which will be in the format [endpoint_prefix].redshift.amazonaws.com:[port]/[database].

    • Use the value of [endpoint_prefix] here.

  • Database:

    • Navigate to Amazon Redshift Clusters -> <Cluster Name> -> General Information -> Endpoint.

    • Use the value of [database] from the endpoint.

  • Schema: The schema where the tables are located.

  • User: The username to use for the connection (must be created in Redshift).

  • Password: The password to use for the connection (must be created in Redshift).

Note: The username and password must be created inside Redshift itself; IAM user credentials are not supported here.
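As a quick illustration, the endpoint string described above can be split into the individual fields the connection form asks for. This helper and the example hostname are illustrative only, not part of any Telmai API:

```python
def parse_redshift_endpoint(endpoint: str) -> dict:
    """Split an endpoint of the form
    [endpoint_prefix].redshift.amazonaws.com:[port]/[database]
    into the values used when configuring the connector."""
    host, _, rest = endpoint.partition(":")
    port, _, database = rest.partition("/")
    # The endpoint prefix is everything before the first dot of the host.
    endpoint_prefix = host.split(".", 1)[0]
    return {
        "endpoint_prefix": endpoint_prefix,
        "port": int(port),
        "database": database,
    }

parts = parse_redshift_endpoint("examplecluster.redshift.amazonaws.com:5439/dev")
print(parts["endpoint_prefix"])  # examplecluster
print(parts["database"])         # dev
```

Here, `examplecluster` would go in the Endpoint Prefix field and `dev` in the Database field.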
