Using data prepared by the Event Enrich process, the Adobe Product Enrich process applies functions to the data, joins lookups, and writes the results into a product-level dataset, which can be thought of as a sub-event level.
For example, if you buy 10 items online, the purchase event is stored as a single record in the Event table, with all 10 products concatenated into one string. The Product table breaks that string apart and creates 10 rows, one for each product, enabling analysis at the product-code level.
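The event-to-product explosion can be sketched as follows. This is an illustrative sketch only, not Syntasa's implementation; the field names and the comma delimiter are assumptions for the example:

```python
# Hypothetical sketch: explode one purchase event whose products are
# stored as a delimited string into one row per product, as the
# Product table does. Field names are illustrative, not the real schema.

def explode_products(event, delimiter=","):
    """Return one row per product code found in the event's product string."""
    rows = []
    for code in event["products"].split(delimiter):
        # Copy the event-level fields, then attach the individual product code
        row = {k: v for k, v in event.items() if k != "products"}
        row["product_code"] = code.strip()
        rows.append(row)
    return rows

event = {"event_id": "e-100", "visitor_id": "v-42",
         "products": "sku-1,sku-2,sku-3"}
print(explode_products(event))  # 3 rows, one per SKU
```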
Much of the configuration is the same as the Event Enrich configuration.
The Product Enrich process includes four screens providing the ability to join multiple datasets, map to the schema, apply desired filters, and understand where the data is being written. Below are details of each screen and descriptions of each of the fields.
This section provides the information Syntasa needs if more than one set of data will be joined.
To create a join, click the green plus button.
- Join Type - left or inner join
- Dataset selector - choose the dataset that will be joined with the first dataset
- Alias - type a table alias if a different name is desired or required
- Left Value - choose the field from the first dataset that links to the joined dataset (e.g. customer ID if joining a CRM dataset)
- Operator - select how the left value should be compared with the right value; for joins this will typically be the equals (=) sign
- Right Value - select the joining dataset value that is being compared with the left value
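The left-versus-inner distinction configured above can be sketched in plain Python. The dataset and field names here are hypothetical examples, not Syntasa internals:

```python
# Minimal sketch of left vs. inner join semantics using plain dicts.
# "events" and "crm" stand in for the first and joined datasets.

def join(left_rows, right_rows, left_key, right_key, how="left"):
    """Join two lists of dicts on left_key = right_key."""
    index = {}
    for r in right_rows:
        index.setdefault(r[right_key], []).append(r)
    out = []
    for l in left_rows:
        matches = index.get(l[left_key], [])
        for r in matches:
            out.append({**l, **r})
        if not matches and how == "left":
            out.append(dict(l))  # left join keeps unmatched rows; inner drops them
    return out

events = [{"customer_id": "c1", "event": "purchase"},
          {"customer_id": "c2", "event": "view"}]
crm = [{"customer_id": "c1", "segment": "gold"}]

print(join(events, crm, "customer_id", "customer_id", how="left"))   # 2 rows
print(join(events, crm, "customer_id", "customer_id", how="inner"))  # 1 row
```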
This section is where the product data is defined and labeled into the Syntasa schema.
Syntasa has a growing set of custom functions that can be applied, along with any Hive functions, to perform data transformations. It is recommended to consult Syntasa professional services with any questions before applying anything other than the default functions.
- Field - fixed Syntasa table column labels
- Label - customizable user-friendly names
- Function - the function that maps raw file fields into the Syntasa columns
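The Field / Label / Function triple can be pictured as follows. This is a hedged sketch under assumed names; the actual fields, labels, and functions depend on your schema:

```python
# Hypothetical sketch of Field / Label / Function mapping: each Syntasa
# column (field) gets a user-friendly label and a function applied to
# the raw record. All names below are illustrative only.

raw = {"prod_sku": "  SKU-9  ", "qty": "3"}

mapping = [
    # (field, label, function applied to the raw record)
    ("product_code", "Product Code", lambda r: r["prod_sku"].strip()),
    ("quantity",     "Quantity",     lambda r: int(r["qty"])),
]

row = {field: fn(raw) for field, label, fn in mapping}        # mapped values
labels = {field: label for field, label, fn in mapping}       # friendly names
print(row)     # {'product_code': 'SKU-9', 'quantity': 3}
print(labels)  # {'product_code': 'Product Code', 'quantity': 'Quantity'}
```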
For the Product Enrich process, two options are available at this time: Import and Export. Import is selected if the client has JSON data available to provide the custom mappings. Export writes the existing mapping schema to a .csv file that can be used to assist in editing or manipulating the schema. The updated file can then be used to input a revised schema into the dataset.
To perform Import:
- Click Actions button
- Click Import
- Click on the green paperclip icon to browse to the desired file to import
- Once file is selected, click Open
- Click Apply
- Wait 60 seconds to ensure the process of pulling in mappings and labels is complete
- Use the scroll, order, and search options to locate the cust_fields and cust_metrics fields and confirm that all of the report suite's custom fields have been mapped
To perform Export:
- Click Actions button
- Click Export
- syntasa_mapping_export.csv will be created and downloaded for the user
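One use of the export is to edit mappings outside the UI. The sketch below uses Python's standard csv module; the column headers shown are assumptions for illustration, so inspect your actual syntasa_mapping_export.csv for the real layout:

```python
# Hypothetical sketch: read an exported mapping CSV, change a label,
# and write the edited version back out. The header names are assumed.
import csv
import io

# Stand-in for the contents of syntasa_mapping_export.csv
exported = "field,label,function\ncust_field_1,Campaign,raw.campaign\n"

rows = list(csv.DictReader(io.StringIO(exported)))
rows[0]["label"] = "Marketing Campaign"  # edit a user-friendly label

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["field", "label", "function"])
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```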
The Filters section provides the ability to filter the dataset (apply a Where clause) if required.
To create a filter:
- Click the Apply Where Clause button to enable filter editing
- The filter editor screen will appear
- Select the appropriate Left Value from the drop-down list, or click --Function Editor-- to create and apply a custom function
- Select the appropriate Operator from the drop-down list
- Select the desired Right Value from the drop-down list, or click --Function Editor-- to create and apply a custom function
- Multiple filters can be applied
- Ensure the proper (AND/OR) logic is applied when adding additional filters
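The effect of combining several filters with AND versus OR logic can be sketched as below. The field names and conditions are hypothetical, not part of any Syntasa schema:

```python
# Sketch of Where-clause filtering with AND/OR combination logic.
# Each filter is a predicate on a row; field names are illustrative.

def where(rows, predicates, logic="AND"):
    """Keep rows satisfying all predicates (AND) or any predicate (OR)."""
    combine = all if logic == "AND" else any
    return [r for r in rows if combine(p(r) for p in predicates)]

rows = [{"country": "US", "revenue": 120},
        {"country": "US", "revenue": 10},
        {"country": "DE", "revenue": 300}]

filters = [lambda r: r["country"] == "US",   # left value = country, operator ==
           lambda r: r["revenue"] > 100]     # left value = revenue, operator >

print(where(rows, filters, logic="AND"))  # only the US row with revenue 120
print(where(rows, filters, logic="OR"))   # both US rows plus the DE row
```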
The Outputs tab provides the ability to name the tables and their displayed names on the graph canvas, and to select whether to load to BigQuery (BQ) on the Google Cloud Platform (GCP), load to Redshift or RDS on Amazon Web Services (AWS), or simply write to HDFS when using on-premise Hadoop.
The expected output of the Product Enrich process is the following tables within the environment where the data is processed (e.g. AWS, GCP, on-premise Hadoop):
- tb_product - product level table using Syntasa defined column names
- vw_product - view built off tb_product providing user-friendly labels
These tables can be queried directly using an enterprise-provided query engine.
Additionally, the tables can serve as the foundation for building other datasets, such as Syntasa custom-built datasets.
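The kind of product-level analysis this enables can be sketched with SQL. The example below uses SQLite purely as a stand-in for the enterprise query engine, and the tb_product columns shown are assumptions for illustration:

```python
# Illustrative only: SQLite stands in for the enterprise query engine,
# and the tb_product columns here are hypothetical, not the real schema.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE tb_product (event_id TEXT, product_code TEXT, revenue REAL)")
con.executemany(
    "INSERT INTO tb_product VALUES (?, ?, ?)",
    [("e1", "sku-1", 10.0), ("e1", "sku-2", 5.0), ("e2", "sku-1", 10.0)])

# Revenue per product code: the product-level rows make this a simple GROUP BY
query = ("SELECT product_code, SUM(revenue) FROM tb_product "
         "GROUP BY product_code ORDER BY product_code")
for code, total in con.execute(query):
    print(code, total)
```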