Using data prepared by the AA loader process, the Event Enrich process applies functions to the data, joins lookup datasets, and writes the result into an event-level dataset. This dataset is the foundation for building the session, product, and visitor datasets, and can also be used for analysis and for user-defined analytics datasets.
Process Configuration
The Event Enrich process includes four screens providing the ability to join multiple datasets, map to the schema, apply desired filters, and understand where the data is being written. Below are details of each screen and descriptions of each of the fields.
Click on the Event Enrich Node to access the editor.
Join
This section provides the information Syntasa needs when more than one dataset will be joined.
Joins
To create a join, click the green plus button.
- Join Type - currently, either a left or inner join.
- Dataset selector - choose the dataset that will be joined with the first dataset.
- Alias - type a table alias if a different name is desired or required.
- Left Value - choose the field from the first dataset that links it to the joined dataset (e.g., customer ID when joining a CRM dataset).
- Operator - select how the left value should be compared with the right value; for a join this is typically the equals (=) sign.
- Right Value - select the field from the joined dataset that is compared with the left value.
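The join the fields above configure can be sketched as a small, self-contained example. The table and column names here (an events table joined to a CRM lookup on a customer ID) are illustrative assumptions, not Syntasa's actual schema:

```python
import sqlite3

# Hypothetical event and CRM datasets; all names are assumptions.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE events (hit_id INTEGER, customer_id TEXT, page TEXT);
CREATE TABLE crm (customer_id TEXT, segment TEXT);
INSERT INTO events VALUES (1, 'C1', '/home'), (2, 'C2', '/cart'), (3, 'C3', '/home');
INSERT INTO crm VALUES ('C1', 'gold'), ('C2', 'silver');
""")

# Left join with alias 'c': Left Value = e.customer_id, Operator = '=',
# Right Value = c.customer_id.
rows = conn.execute("""
SELECT e.hit_id, e.page, c.segment
FROM events e
LEFT JOIN crm c ON e.customer_id = c.customer_id
ORDER BY e.hit_id
""").fetchall()

# The left join keeps every event; C3 has no CRM match, so its segment is None.
print(rows)  # → [(1, '/home', 'gold'), (2, '/cart', 'silver'), (3, '/home', None)]
```

An inner join on the same data would instead drop the unmatched C3 row, which is why the choice of join type matters when the lookup dataset does not cover every event.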
Mapping
This section is where the raw data schema is declared, user-defined or Adobe API-defined labels are applied, and then mapped into the Syntasa schema.
Syntasa has a growing set of custom functions that can be applied along with any Hive functions to perform data transformations.
It is recommended to consult Syntasa professional services before applying anything other than the default functions.
- Name - the fixed Syntasa table column labels.
- Label - customizable, user-friendly names.
- Function - the pre-defined or custom function that maps raw file fields into the Syntasa column.
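Conceptually, each mapping row pairs a Syntasa column Name with a Function applied to the raw record. The sketch below illustrates that idea in plain Python; the raw field names, column names, and transformations are assumptions, not the actual Syntasa schema or function library:

```python
# A hypothetical raw Adobe hit record (field names are assumptions).
raw_hit = {"pagename": "  Home Page ", "post_evar1": "C1", "event_list": "1,20"}

# Mapping: Syntasa column Name -> Function applied to the raw record.
mapping = {
    "page_name": lambda r: r["pagename"].strip().lower(),  # a trim/lowercase transform
    "customer_id": lambda r: r["post_evar1"],              # a direct field mapping
    "events": lambda r: r["event_list"].split(","),        # split a delimited field
}

# Applying every function yields one enriched event-level record.
enriched = {name: fn(raw_hit) for name, fn in mapping.items()}
print(enriched)  # → {'page_name': 'home page', 'customer_id': 'C1', 'events': ['1', '20']}
```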
Actions
For Event Enrich there are three options available: Autofill, Import, and Export. Autofill is used for Adobe Input apps that have the Adobe API process configured. Import is used when no API process is configured but the client has JSON data available to provide the custom mappings. Export downloads the existing mapping schema as a .csv file, which can be edited outside the application and then re-imported to update the dataset's schema.
To perform Autofill:
- Click the Actions button.
- Click Autofill.
- Select the day of the week on which reporting starts.
- Click Apply.
- Wait 60 seconds to ensure the process of pulling in mappings and labels is complete.
- Use the scroll, order, and search options to locate the cust_fields and cust_metrics fields to ensure all the report suite custom eVars, s.Props, and Events have been mapped.
To perform Import:
- Click the Actions button.
- Click Import.
- Click the green paperclip icon and browse to the desired file.
- Once a file is selected, click Open.
- Click Apply.
- Wait 60 seconds to ensure the process of pulling in mappings and labels is complete.
- Use the scroll, order, and search options to locate the cust_fields and cust_metrics fields to ensure all the report suite custom eVars, s.Props, and Events have been mapped.
To perform Export:
- Click the Actions button.
- Click Export.
- A file named Syntasa_mapping_export.csv will be generated and downloaded.
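An exported mapping file can be edited programmatically before being re-imported. The sketch below shows one way to relabel a row with Python's csv module; the column headers (name, label, function) and row values are assumptions about the export's layout, not its documented format:

```python
import csv
import io

# Hypothetical contents of Syntasa_mapping_export.csv (headers are assumptions).
exported = (
    "name,label,function\n"
    "cust_fields_1,eVar1,post_evar1\n"
    "cust_fields_2,eVar2,post_evar2\n"
)

# Read the exported mapping, apply a friendlier label, and write it back out.
rows = list(csv.DictReader(io.StringIO(exported)))
for row in rows:
    if row["name"] == "cust_fields_1":
        row["label"] = "Customer ID"  # relabel before re-importing the schema

out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=["name", "label", "function"])
writer.writeheader()
writer.writerows(rows)
print(out.getvalue())
```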
Filters
Filters provide the user with the ability to filter the dataset (i.e., apply a WHERE clause).
To create a filter:
- Click the green plus button.
- The filter editor screen will appear.
- Ensure the proper (AND/OR) logic is applied.
- Select the appropriate Left Value from the drop-down list or click --Function Editor-- to create and apply a custom function.
- Select the appropriate Operator from the drop-down list.
- Select the desired Right Value for the filter from the drop-down list or click --Function Editor-- to create and apply a custom function.
- Multiple filters can be created and applied.
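The Left Value / Operator / Right Value triples the screen builds combine into an ordinary WHERE clause. The sketch below shows two filters joined with AND; the table and column names are hypothetical:

```python
import sqlite3

# Hypothetical event data (table and column names are assumptions).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (page TEXT, country TEXT, revenue REAL)")
conn.executemany("INSERT INTO events VALUES (?, ?, ?)", [
    ("/home", "US", 0.0),
    ("/checkout", "US", 25.0),
    ("/checkout", "DE", 40.0),
])

# Filter 1: Left Value = page, Operator = '=', Right Value = '/checkout'
# Filter 2: Left Value = revenue, Operator = '>', Right Value = 20
# The two filters are combined with AND logic.
rows = conn.execute(
    "SELECT page, country FROM events WHERE page = '/checkout' AND revenue > 20 "
    "ORDER BY revenue"
).fetchall()
print(rows)  # → [('/checkout', 'US'), ('/checkout', 'DE')]
```

Switching the logic to OR would instead keep any row matching either condition, so it is worth double-checking the AND/OR setting when stacking multiple filters.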
Outputs
The Outputs tab provides the ability to name the output tables and their display names on the graph canvas, and to select where the data is written: load to BigQuery (BQ) if in the Google Cloud Platform (GCP), load to Redshift or RDS if in Amazon Web Services (AWS), or write to HDFS if using on-premises Hadoop.
Expected Output
The expected output of the Event Enrich process is the following tables within the environment where the data is processed (e.g., AWS, GCP, on-premises Hadoop):
- tb_event - event level table using Syntasa-defined column names.
- vw_event - view built off tb_event providing user-friendly labels.
These tables can be queried directly using an enterprise-provided query engine.
Additionally, the tables can serve as the foundation for building other datasets, such as Syntasa product, session, visitor tables, and custom-built datasets.
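The relationship between the two outputs can be sketched as a table plus a relabeling view: vw_event simply exposes tb_event's fixed column names under friendlier labels. The column names below are assumptions for illustration only:

```python
import sqlite3

# tb_event holds the Syntasa-defined column names (names here are assumptions).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tb_event (post_evar1 TEXT, pagename TEXT)")
conn.execute("INSERT INTO tb_event VALUES ('C1', 'home')")

# vw_event is built off tb_event and only renames columns to friendly labels.
conn.execute("""
CREATE VIEW vw_event AS
SELECT post_evar1 AS customer_id, pagename AS page_name FROM tb_event
""")

rows = conn.execute("SELECT customer_id, page_name FROM vw_event").fetchall()
print(rows)  # → [('C1', 'home')]
```

Because the view adds no data of its own, querying vw_event always reflects the current contents of tb_event.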