Exporting Payment Data with Hyperswitch: A Comprehensive Guide

In today's data-driven business landscape, having seamless access to payment data is crucial for both operational efficiency and strategic decision-making. For organizations leveraging Hyperswitch powered by Juspay, the ability to efficiently export payment data to a robust analytics platform like Amazon Redshift is a game-changer. This guide delves into the process, benefits, and technical intricacies of exporting payment data from Hyperswitch, providing valuable insights for developers and business leaders alike.

Understanding Hyperswitch and Its Capabilities

Hyperswitch, a comprehensive payment orchestration platform powered by Juspay, offers a suite of features designed to optimize payment processing and data management. It enables businesses to integrate multiple payment gateways, streamline transaction workflows, and access detailed payment analytics. The platform's capability to export payment data to Redshift stands out, providing a powerful tool for data-driven insights.

Why Export Payment Data?

Exporting payment data is not just a technical necessity; it's a strategic advantage. Here's why:

Enhanced Analytics: By exporting payment data to a high-performance analytics platform like Redshift, businesses can leverage advanced query capabilities to gain deeper insights into transaction patterns, customer behavior, and revenue streams.

Improved Reporting: Access to comprehensive payment data facilitates accurate and timely financial reporting, ensuring compliance with regulatory requirements and aiding in strategic planning.

Data-Driven Decision Making: With detailed payment data at their fingertips, business leaders can make informed decisions about marketing strategies, customer retention programs, and operational improvements.

The Architecture of Exporting Payment Data to Redshift

The process of exporting payment data from Hyperswitch to Redshift involves several key components and steps. Understanding this architecture is essential for a seamless integration.

Prerequisites

Before initiating the export process, ensure you have the following prerequisites in place:

AWS Account: An active AWS account with Redshift enabled.
IAM Role: A new IAM role for Redshift with S3 read permissions. This role's ARN must be provided to Hyperswitch.
S3 Bucket: An S3 bucket where Hyperswitch will store the payment data files.
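
As a rough sketch, the S3 read permissions attached to the Redshift IAM role might look like the policy below. The bucket name is a hypothetical placeholder and the exact action list is an assumption; confirm the required permissions with Hyperswitch and the AWS IAM documentation.

```python
import json

# Illustrative IAM policy granting Redshift read access to the export bucket.
# The bucket name and the action list are assumptions; verify against your setup.
BUCKET = "my-hyperswitch-exports"  # hypothetical bucket name

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                f"arn:aws:s3:::{BUCKET}",       # ListBucket applies to the bucket
                f"arn:aws:s3:::{BUCKET}/*",     # GetObject applies to its objects
            ],
        }
    ],
}

print(json.dumps(policy, indent=2))
```

Attach a policy like this to the role whose ARN you share with Hyperswitch.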

Integration Steps

IAM Role Creation: Create an IAM role with S3 read permissions and share the ARN with Hyperswitch.
S3 Bucket Configuration: Hyperswitch shares the S3 bucket path to which the exported data will be synced.
Table Schema Creation: Set up the necessary table schema on Redshift to accommodate the incoming data.
Data Ingestion: Utilize scripts or automated tools like Redshift's auto-ingestion to handle the data transfer and processing.

File Format and Path Specifications

Hyperswitch exports payment data as plain CSV files with headers, stored in a structured path in the S3 bucket. The typical file path format is:

s3://<BUCKET_NAME>/<MERCHANT_ID>/<VERSION>/<FILE_NAME>.csv
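
A small helper can construct the expected object path for a given merchant and version. The values used below are hypothetical placeholders for illustration, not fixed Hyperswitch identifiers:

```python
def export_file_path(bucket: str, merchant_id: str, version: str, file_name: str) -> str:
    """Build the S3 path for an exported CSV, following the documented layout."""
    return f"s3://{bucket}/{merchant_id}/{version}/{file_name}.csv"

# Hypothetical values for illustration only.
path = export_file_path("my-exports", "merchant_123", "v1", "payments_2024_01_01")
print(path)  # → s3://my-exports/merchant_123/v1/payments_2024_01_01.csv
```
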

Data Update Frequency and Retention

Update Schedule: Data is updated every 6 hours.
Retention Period: Data is retained in the S3 bucket for 7 days.
Type of Data: Payment data as per the defined schema.
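
Because files expire after seven days, an ingestion job should track what it has already loaded and never assume objects older than the retention window are still available. A minimal sketch of that cutoff logic, using hypothetical export timestamps:

```python
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=7)          # files older than this are gone from S3
UPDATE_INTERVAL = timedelta(hours=6)   # new export files appear on this cadence

def files_within_retention(file_times, now):
    """Return only the export timestamps still available in the bucket."""
    cutoff = now - RETENTION
    return [t for t in file_times if t >= cutoff]

# Hypothetical export timestamps, one per 6-hour window, covering 10 days.
now = datetime(2024, 1, 10, tzinfo=timezone.utc)
exports = [now - i * UPDATE_INTERVAL for i in range(40)]
available = files_within_retention(exports, now)
print(len(available))  # → 29 (7 days of 6-hour exports, inclusive cutoff)
```
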

Technical Implementation

Let's dive into the technical details of setting up and executing the export process.

Creating the Table Schema in Redshift

The first step in preparing for data ingestion is creating the table schema in Redshift. The following SQL command sets up the necessary table:

sql

CREATE TABLE payments (
    payment_id VARCHAR(64),
    attempt_id VARCHAR(64),
    status TEXT,
    amount INTEGER,
    currency VARCHAR(10),
    amount_to_capture INTEGER,
    customer_id VARCHAR(64),
    created_at TIMESTAMP,
    order_details VARCHAR(255),
    connector VARCHAR(255),
    error_message VARCHAR(255),
    connector_transaction_id VARCHAR(255),
    capture_method VARCHAR(255),
    authentication_type VARCHAR(255),
    mandate_id VARCHAR(64),
    payment_method VARCHAR(255),
    payment_method_type TEXT,
    metadata TEXT,
    setup_future_usage TEXT,
    statement_descriptor_name TEXT,
    description TEXT,
    off_session TEXT,
    business_country TEXT,
    business_label TEXT,
    business_sub_label TEXT,
    allowed_payment_method_types TEXT
);

Ingesting Data from S3 to Redshift

To efficiently ingest data from S3 to Redshift, a COPY job is used. This job can be automated for continuous data updates. Here is an example of the SQL command for data ingestion:

sql

CREATE TEMP TABLE payments_stage (LIKE payments);

COPY payments_stage FROM 's3://<BUCKET_NAME>/<MERCHANT_ID>/<VERSION>/payments'
CREDENTIALS 'aws_iam_role=<ARN_ROLE>'
IGNOREHEADER 1
TIMEFORMAT 'YYYY-MM-DD HH:MI:SS'
CSV;

MERGE INTO payments USING payments_stage ON
    payments.payment_id = payments_stage.payment_id
WHEN MATCHED THEN UPDATE SET
    payment_id = payments_stage.payment_id,
    attempt_id = payments_stage.attempt_id,
    status = payments_stage.status,
    amount = payments_stage.amount,
    currency = payments_stage.currency,
    amount_to_capture = payments_stage.amount_to_capture,
    customer_id = payments_stage.customer_id,
    created_at = payments_stage.created_at,
    order_details = payments_stage.order_details,
    connector = payments_stage.connector,
    error_message = payments_stage.error_message,
    connector_transaction_id = payments_stage.connector_transaction_id,
    capture_method = payments_stage.capture_method,
    authentication_type = payments_stage.authentication_type,
    mandate_id = payments_stage.mandate_id,
    payment_method = payments_stage.payment_method,
    payment_method_type = payments_stage.payment_method_type,
    metadata = payments_stage.metadata,
    setup_future_usage = payments_stage.setup_future_usage,
    statement_descriptor_name = payments_stage.statement_descriptor_name,
    description = payments_stage.description,
    off_session = payments_stage.off_session,
    business_country = payments_stage.business_country,
    business_label = payments_stage.business_label,
    business_sub_label = payments_stage.business_sub_label,
    allowed_payment_method_types = payments_stage.allowed_payment_method_types
WHEN NOT MATCHED THEN INSERT VALUES (
    payments_stage.payment_id,
    payments_stage.attempt_id,
    payments_stage.status,
    payments_stage.amount,
    payments_stage.currency,
    payments_stage.amount_to_capture,
    payments_stage.customer_id,
    payments_stage.created_at,
    payments_stage.order_details,
    payments_stage.connector,
    payments_stage.error_message,
    payments_stage.connector_transaction_id,
    payments_stage.capture_method,
    payments_stage.authentication_type,
    payments_stage.mandate_id,
    payments_stage.payment_method,
    payments_stage.payment_method_type,
    payments_stage.metadata,
    payments_stage.setup_future_usage,
    payments_stage.statement_descriptor_name,
    payments_stage.description,
    payments_stage.off_session,
    payments_stage.business_country,
    payments_stage.business_label,
    payments_stage.business_sub_label,
    payments_stage.allowed_payment_method_types
);

DROP TABLE payments_stage;

This script creates a temporary staging table, copies the CSV data from the S3 bucket, merges it into the main table (updating existing rows and inserting new ones, keyed on payment_id), and finally drops the staging table, ensuring no duplicate entries.
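
To run this staged load on a schedule, the statements can be assembled and executed from a small driver script. The sketch below only builds the SQL; the execution step (via psycopg2, the Redshift Data API, or similar) is left as a commented placeholder, and the bucket, merchant, version, and role ARN values are hypothetical assumptions:

```python
# Build the staged COPY + MERGE + DROP sequence for the Hyperswitch export.
# Bucket, merchant, version, and role ARN are hypothetical placeholders.

def build_ingest_statements(bucket: str, merchant_id: str, version: str, role_arn: str):
    prefix = f"s3://{bucket}/{merchant_id}/{version}/payments"
    return [
        "CREATE TEMP TABLE payments_stage (LIKE payments);",
        (
            f"COPY payments_stage FROM '{prefix}' "
            f"CREDENTIALS 'aws_iam_role={role_arn}' "
            "IGNOREHEADER 1 TIMEFORMAT 'YYYY-MM-DD HH:MI:SS' CSV;"
        ),
        # The full MERGE mirrors the statement shown above; abbreviated here.
        "MERGE INTO payments USING payments_stage "
        "ON payments.payment_id = payments_stage.payment_id ...;",
        "DROP TABLE payments_stage;",
    ]

statements = build_ingest_statements(
    "my-exports", "merchant_123", "v1",
    "arn:aws:iam::123456789012:role/redshift-s3-read",
)
for sql in statements:
    # cursor.execute(sql)  # run with your Redshift client of choice
    print(sql.split()[0])  # prints CREATE, COPY, MERGE, DROP in order
```

A cron entry or an EventBridge schedule aligned with the 6-hour export cadence is a natural fit for invoking such a script.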

Business and Developer Benefits

For Business Leaders

Enhanced Business Intelligence: Access to detailed payment data enables better understanding of market trends and customer preferences.

Strategic Decision Making: Insights derived from the data help in making informed decisions about product launches, marketing campaigns, and customer retention strategies.

Regulatory Compliance: Accurate and timely data ensures compliance with financial regulations and reporting requirements.

For Developers

Simplified Data Management: The structured process of exporting payment data simplifies data handling and reduces the workload on development teams.

Scalable Solutions: Utilizing AWS services like Redshift and S3 ensures that the data infrastructure is scalable, accommodating growth in data volume.

Automation and Efficiency: Automated data ingestion processes minimize manual intervention, reducing the risk of errors and enhancing operational efficiency.

Conclusion

Exporting payment data from Hyperswitch to Redshift is a powerful capability that bridges the gap between raw transaction data and actionable business insights. By leveraging this feature, organizations can enhance their analytics, improve reporting accuracy, and make data-driven decisions that propel business growth. For both developers and business leaders, understanding and implementing this process is crucial to unlocking the full potential of their payment data.

For more detailed technical guidance, you can explore the exporting payment data documentation on the Hyperswitch website. Additionally, learn more about how Hyperswitch can transform your payment processing by visiting the Hyperswitch powered by Juspay homepage.
