Seamless Integration Between Excel and Databases: A High-Efficiency Data Management Solution
Leverage Trinity’s component-based design to manage Excel data and perform seamless data transposition with confidence.
The Challenge
Traditional methods of consolidating daily Excel reports into a database demand excessive manual effort: opening each file, reformatting the data, and pasting it into a master file before import. Trinity’s component-based architecture removes these repetitive tasks through intelligent configuration, allowing you to automate processing with minimal human intervention. Once configured, the system processes multiple reports simultaneously while maintaining accuracy and consistency across your data.
The Solution
The solution enables you to transpose Excel-based statistical data and import it directly into a database. The key transformation converts horizontal headers such as “Date,” “Shift,” and “Team” into standardized vertical records, giving you normalized data for reliable archival and analysis.
Workflow Overview and Steps
- Read
Use the Reader component to ingest the original Excel file.
- Assign Keys
Use the Transformer component to add a unique sequence number (seq) and a constant key (key) to each row, facilitating tracking and merging.
- Split
Use the Router component to split the data based on the sequence condition—separating header information from data rows.
- Join
Use the Local Join component to merge split rows based on the key, aligning each data row with its corresponding metadata (e.g., “Date”, “Shift”, “Team”).
- Write
Use the Writer component to output the transformed, normalized data to the target database or file system.
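The five stages above can be sketched in plain Python. This is purely illustrative: Trinity’s components are configured graphically rather than coded, and the sample rows below are hypothetical.

```python
# Read: rows as loaded from the Excel sheet (row 2 holds the merged headers)
rows = [
    ["Daily Report"],                    # row 1: title
    ["2024-05-01", "Night", "Team A"],   # row 2: Date / Shift / Team headers
    ["col1", "col2"],                    # row 3: column labels
    ["x1", "y1"],                        # rows 4+: data
    ["x2", "y2"],
]

# Assign Keys: add a constant key and a 1-based sequence number per row
keyed = [{"key": "key", "seq": i, "cells": r} for i, r in enumerate(rows, start=1)]

# Split: header row (seq == 2) vs. data rows (seq > 3)
header = next(r for r in keyed if r["seq"] == 2)
data = [r for r in keyed if r["seq"] > 3]

# Join: attach the header values (Date, Shift, Team) to every data row
date, shift, team = header["cells"]
joined = [
    {"Date": date, "Shift": shift, "Team": team,
     "col1": r["cells"][0], "col2": r["cells"][1]}
    for r in data
]

# Write: print here; Trinity's Writer would target a database instead
for record in joined:
    print(record)
```

Each data row ends up carrying the transposed header values as ordinary columns, which is the normalized shape the database expects.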
Implementation Steps
Step 1
Refer to Figure 1: The second row of the Excel file contains merged headers (“Date”, “Shift”, “Team”). These need to be transposed and mapped into each data row to produce normalized output as shown in Figure 2.

Step 2
Design the job workflow based on the architecture in the diagram.

Step 3
Configure the Reader to read all rows from the Excel file.

Step 4
In the Transformer, add two fields:
- key: assign a constant value such as ‘key’
- seq: use the built-in SEQUENCE(1) function to generate a unique index for each row
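In plain Python, this Transformer step amounts to tagging each row with a constant and a running counter. The sketch below assumes SEQUENCE(1) yields 1, 2, 3, … per row; the sample rows are hypothetical.

```python
# Hypothetical equivalent of the Transformer step: add 'key' and 'seq'
rows = [["a"], ["b"], ["c"]]
transformed = [{"key": "key", "seq": i, "cells": r}
               for i, r in enumerate(rows, start=1)]
print(transformed[0])  # {'key': 'key', 'seq': 1, 'cells': ['a']}
```

The constant key matters later: it is what the Local Join matches on, so every data row can be paired with the single header row.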

Step 5
In the Router, configure the upper output path with the condition seq > 3 to extract actual data rows (ignoring header rows).


Step 6
Configure the lower output path with seq == 2 to extract the header row (expected to contain values for “Date”, “Shift”, and “Team”).
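Steps 5 and 6 together amount to filtering on the seq field. A minimal sketch with six hypothetical rows:

```python
# Rows tagged with a 1-based sequence number (as produced in Step 4)
keyed = [{"seq": i, "cells": f"row{i}"} for i in range(1, 7)]

data_rows   = [r for r in keyed if r["seq"] > 3]   # upper path: actual data
header_rows = [r for r in keyed if r["seq"] == 2]  # lower path: merged header
```

Row 1 (title) and row 3 (column labels) match neither condition and are simply dropped.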


Step 7
In the Local Join, join both streams using the key field. This maps each data row with the corresponding header values, enriching the data with contextual metadata.
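Because both streams carry the same constant key, the join is effectively a broadcast: every data row picks up the single header row’s values. A hedged sketch with hypothetical field values:

```python
# Lower path: the one header row, already parsed into named fields
header_stream = [{"key": "key", "Date": "2024-05-01",
                  "Shift": "Night", "Team": "Team A"}]
# Upper path: the data rows
data_stream = [{"key": "key", "value": v} for v in ("x1", "x2")]

# Join on 'key': merge header fields into each matching data row
joined = [
    {**h, **d}
    for d in data_stream
    for h in header_stream
    if d["key"] == h["key"]
]
```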

Step 8
With this design, Excel’s merged cells (e.g., headers in row 2) are transformed into normalized columns in each row of output, which are then written to the database. This structure supports future data analytics with clean, structured input.

This Trinity-based solution allows organizations to automate the transformation and integration of Excel data into databases, eliminating tedious manual steps while laying a solid foundation for future analytics and reporting initiatives. Merged cells, once a common pain point, are now seamlessly normalized—making your data not only clean but analysis-ready.
