Using data prepared by the Unified Event Enrich process, the Unified Visitor Enrich process applies functions to the data, joins lookups, and writes the results into a visitor-level dataset, which can be thought of as event data aggregated at the visitor level.
Many of the settings are similar to those of the Unified Event Enrich configuration.
Process Configuration
The Unified Visitor Enrich process includes four screens that provide the ability to join multiple datasets, map to the schema, apply filters, and control where the data is written. Below are details of each screen and descriptions of each field.
Click on the Visitor Enrich node to access the editor.
Join
This section provides the information Syntasa needs when more than one dataset will be joined.
To create a join, click the green plus button.
- Primary Source - The first dataset connected on the graph will appear by default
- Alias - Type a table alias if a different name is desired or required
Joins
- Join Type - Available options are Left, Right, Inner, and Full Outer
- Source - The source dataset to join; this field is mandatory
- Alias - Type a table alias if a different name is desired or required
- Left Value - Choose the field from the first dataset that will link to the joined dataset
- Operator - Select how the left value should be compared with the right value; for joins this is typically the = sign
- Right Value - Select the field from the joined dataset that is compared with the left value (see the sketch after this list)
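The sketch below shows conceptually how these fields translate into a join. It is an illustration only, assuming a Spark-backed runtime; the table names, aliases, and join keys are invented for the example.

```python
# Hypothetical sketch only: table names, aliases, and join keys are made up,
# and a Spark-backed runtime is assumed.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("join_sketch").getOrCreate()

primary = spark.table("tb_event").alias("e")       # Primary Source, Alias "e"
lookup = spark.table("tb_geo_lookup").alias("g")   # joined Source, Alias "g"

# Join Type = Left, Left Value = e.geo_id, Operator = "=", Right Value = g.geo_id
joined = primary.join(lookup, primary["geo_id"] == lookup["geo_id"], how="left")
```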
Mapping
The Mapping table is where the data is defined and labeled according to the Syntasa schema.
- Name - Fixed Syntasa table column names; some names are editable
- Label - A user-friendly label for the column; some labels are customizable
- Function - Where the enrichment logic is written (illustrated in the sketch after this list); this can be one of the following:
  - Custom logic such as a regular expression or a CASE statement
  - Combining multiple columns into one column
- Note - A field where the user can add custom notes
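As a conceptual illustration of the kinds of functions listed above, the sketch below assumes the Function field accepts Spark SQL-style expressions; the table and column names are invented and may not match your deployment.

```python
# Hypothetical sketch only: assumes a Spark-backed runtime where the Function
# field accepts Spark SQL expressions; column names are invented.
from pyspark.sql import SparkSession
from pyspark.sql.functions import expr

spark = SparkSession.builder.appName("mapping_sketch").getOrCreate()
visitors = spark.table("tb_visitor_staging")  # assumed intermediate table

visitors = (
    visitors
    # CASE-statement style enrichment
    .withColumn(
        "visitor_segment",
        expr("CASE WHEN total_events >= 10 THEN 'engaged' ELSE 'casual' END"),
    )
    # regex-style cleanup of an identifier
    .withColumn("clean_id", expr("regexp_replace(visitor_id, '[^0-9a-zA-Z]', '')"))
    # combining two columns into one
    .withColumn("visitor_key", expr("concat(clean_id, '_', visitor_segment)"))
)
```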
Actions
For Unified Visitor Enrich there are two options available: Import and Export. Import is selected when the user wants to provide a custom mapping schema created as a .csv file (e.g., in Excel). Export is used to export the existing mapping schema in .csv format so it can be edited or manipulated; the updated file can then be loaded back into the dataset via Import (a sketch of this edit-and-re-import workflow follows the Export steps below).
To perform Import:
- Click the Import button
- Click on the green paperclip icon to browse to the desired file to import
- Once the file is selected, click Open
- Click Apply
- Wait 60 seconds to ensure the process of pulling in mappings and labels is complete
- Use the scroll, order, and search options to locate the cust_fields and cust_metrics fields to ensure all the report suite custom fields have been mapped
To perform Export:
- Click the Export button
- A file named syntasa_mapping_export.csv will be created and downloaded for the user
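As a rough illustration of the edit-and-re-import workflow, the sketch below reads the exported file, adjusts some labels, and writes a new file to use with Import. The column names (name, label) mirror the mapping table above but are assumptions and may differ from the actual export layout.

```python
# Hypothetical sketch only: column names mirror the mapping table above but
# may not match the real export layout.
import csv

with open("syntasa_mapping_export.csv", newline="") as f:
    reader = csv.DictReader(f)
    rows = list(reader)
    fieldnames = reader.fieldnames

for row in rows:
    # Example tweak: give every custom field a clearer label
    if row.get("name", "").startswith("cust_"):
        row["label"] = row["name"].replace("cust_", "Custom ").title()

with open("syntasa_mapping_import.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(rows)
```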
Filters
Filters restrict which records from the dataset are written to the output.
To create a filter, click the green (+) button, which brings up the filter editor screen. Multiple filters can be used; however, the correct AND/OR logic must be chosen to combine them (see the sketch below).
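As a conceptual illustration of why the AND/OR choice matters, the sketch below shows two filter conditions combined each way, assuming a Spark-backed runtime; the column names are invented.

```python
# Hypothetical sketch only: illustrates how AND vs. OR changes which rows pass;
# column names are invented and a Spark-backed runtime is assumed.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("filter_sketch").getOrCreate()
visitors = spark.table("tb_visitor_staging")

# AND: a row is kept only if both conditions are true
filtered_and = visitors.filter((col("country") == "US") & (col("total_events") >= 5))

# OR: a row is kept if either condition is true, which keeps far more rows
filtered_or = visitors.filter((col("country") == "US") | (col("total_events") >= 5))
```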
Output
The Output tab provides the ability to name the output table and its display name on the graph canvas, along with selecting whether to load to BigQuery (BQ) when running in Google Cloud Platform (GCP).
- Table Name - Defines the name of the database table where the output data will be written. Ensure that the table name is unique among all other tables within the defined Event Store; otherwise, data previously written by another process will be overwritten
- Display Name - The label of the process output icon displayed on the app graph canvas
- Configurations
  - Partition Scheme - Defines how the output table is segmented when stored. Options are Daily, Hourly, and None; Daily is typically chosen
  - File Format - Defines the format of the output files. Options are Avro, ORC, Parquet, and Textfile
  - Load To BQ - Relevant only to Google Cloud Platform deployments; BQ stands for BigQuery, and this option creates a BigQuery table. On AWS, the option appears as Load To Redshift / Athena; on-premise installations normally write to HDFS and do not display a Load To option
  - Location - The storage bucket or HDFS location where the output data will be stored (see the sketch after this list)
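As a conceptual illustration of how these settings typically combine, the sketch below shows a daily-partitioned Parquet write in a Spark-style runtime; the paths, table names, and partition column are invented.

```python
# Hypothetical sketch only: paths and table names are invented; shows roughly
# what Daily partitioning with Parquet output looks like in a Spark-backed runtime.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("output_sketch").getOrCreate()
visitors = spark.table("tb_visitor_staging")

(
    visitors.write.mode("overwrite")
    .partitionBy("event_date")                              # Partition Scheme: Daily
    .format("parquet")                                      # File Format: Parquet
    .save("gs://example-bucket/syntasa/tb_visitor_daily")   # Location (GCS bucket)
)
```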
Expected Output
The expected output of the Unified Visitor Enrich process is the following tables within the environment where the data is processed (e.g., AWS, GCP, on-premise Hadoop); a quick verification sketch follows the list:
- tb_visitor_daily - Visitor-level table using Syntasa-defined column names
- vw_visitor_daily - View built off tb_visitor_daily providing user-friendly labels
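As a quick way to spot-check the output, the sketch below queries both the table and the view from a Spark SQL session; this is illustrative only, and the actual query tool will vary by environment (BigQuery, Redshift/Athena, or Hive).

```python
# Hypothetical sketch only: a quick spot check of the output table and its view
# from a Spark SQL session; access via BigQuery, Redshift/Athena, or Hive differs.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("output_check").getOrCreate()

spark.sql("SELECT * FROM tb_visitor_daily LIMIT 10").show()
spark.sql("SELECT * FROM vw_visitor_daily LIMIT 10").show()
```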