New Features
- Visualizations taking center stage - A new Dashboards menu has been introduced to put reports and dashboards at users' fingertips. Reports and dashboards can now also be created from within the Syntasa application.
Linking and publishing dashboards has long been a staple feature, but it was previously available only from within a Syntasa app. The new Dashboards menu keeps those abilities intact and brings the same publishing functionality to the system level. Reports and dashboards that need to be available regularly and quickly can be added to the new system-level Dashboards menu for easy access and further highlighted by favoriting them.
In addition, the Dashboards menu has an Analytics section for creating new reports and dashboards. Visualizations created here can be published through the dashboard functionality either within an app or at the system level.
- Process additions and updates - Syntasa empowers you to build data pipelines and models quickly with apps utilizing the drag-and-drop workflow canvas. The canvas is powerful because of its many off-the-shelf processes, pre-packaged with the intelligence and algorithms built in so that you can get to results faster. We're constantly adding new processes to expand the ingredients at your disposal:
- Dash - Focusing on visualizations, the new Dash process enables data scientists to build dashboards, instead of just dataset tables, from the data crunched through their pipeline. It complements the Dashboards functionality; both put the spotlight on turning data into dashboards.
- Container Code Processor - Giving data scientists additional tools and flexibility, the new Container Code Processor process allows a container runtime to run code in any of the following languages: Python, Scala, Java, R, and shell script.
- Spark Processor - The Spark Processor process has been updated with the ability to include additional libraries and Python packages in the classpath of Spark code processes (a sketch of the standard Spark settings involved follows below).
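To give a sense of what this controls, here is a minimal sketch using standard Spark configuration properties (spark.jars.packages for extra JAR dependencies and spark.submit.pyFiles for extra Python code); the package names and paths are placeholders, and the Spark Processor exposes equivalent options through its own settings rather than requiring code like this.

```python
from pyspark.sql import SparkSession

# Illustrative only: standard Spark properties for pulling extra
# dependencies onto the classpath. Package names and paths are placeholders.
spark = (
    SparkSession.builder
    .appName("spark-processor-extra-libs")
    # Maven coordinates resolved and added to the driver/executor classpath
    .config("spark.jars.packages", "org.apache.spark:spark-avro_2.12:3.3.0")
    # Additional Python files/archives shipped to the executors
    .config("spark.submit.pyFiles", "s3://my-bucket/libs/my_helpers.zip")
    .getOrCreate()
)

# With the extra library on the classpath, the formats it provides become usable.
df = spark.read.format("avro").load("s3://my-bucket/data/events/")
df.show(5)
```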
- Product Assistant - With our improved product assistant, you’ll get help automatically configuring processes and get feedback earlier and more often as you’re using Syntasa. This real-time feedback helps avoid common mistakes and saves time by alerting you if you’re creating a process that might produce unexpected results. Improved and added validations include the following:
- Adobe Analytics (AA) Loader auto-configure - The AA Loader process includes the ability to auto-configure the input settings, an option to validate the input settings, and a feature to auto-fill the schema.
- From File auto-configure - The From File process includes the ability to auto-configure the input settings, an option to validate the input settings, and features to test the regex setting and to preview the file that is being read with the given configuration values.
- Preview Audience results - When utilizing the Audience module, users may have several apps writing results to the same datasets. The preview used to check in on these datasets can show results from any of the apps writing to them. We've added the ability to filter the dataset preview so you can get a quick look at the data that is important to you.
- Test connections - When creating a new connection, you can now test it right away, even before saving, to ensure it's in working order before trying to use it in an app.
- Validate code - When working with the Spark Processor and BQ Process processes, you can validate your code before trying to run it in a job. Quickly weeding out simple errors in your code saves time and gets you to your app's results faster (a rough sketch of the idea follows this list).
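As a rough sketch of the kind of check this saves you from doing by hand (not Syntasa's implementation), BigQuery SQL can be validated without executing it by issuing a dry-run query with the google-cloud-bigquery client; the project and table names below are placeholders.

```python
from google.cloud import bigquery

# Illustrative dry run: BigQuery parses and plans the query without running
# it, so syntax and reference errors surface immediately.
client = bigquery.Client()
job_config = bigquery.QueryJobConfig(dry_run=True, use_query_cache=False)

query = """
    SELECT visitor_id, COUNT(*) AS hits
    FROM `my_project.events.hits`
    GROUP BY visitor_id
"""

try:
    job = client.query(query, job_config=job_config)
    print(f"Query is valid; it would process {job.total_bytes_processed} bytes.")
except Exception as err:
    print(f"Query failed validation: {err}")
```

The in-app validation aims to give you this kind of feedback without leaving Syntasa.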
- Leveraging latest cloud compute features - The Syntasa application runs on top of your cloud provider of choice and utilizes the technology and features these clouds provide to process the data. With the underlying technology constantly changing and improving, the Syntasa application keeps pace to take advantage of the latest and greatest.
In this version of the Syntasa application, new types of runtime templates, which users set up to create the cluster instances that execute jobs, have been added to the available runtimes so that features such as auto-scaling and streaming can be utilized.
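As a rough, provider-specific illustration of what a feature like auto-scaling looks like at the cloud layer (not how Syntasa configures it; runtime templates expose these options through their own settings), the sketch below enables managed scaling on an AWS EMR cluster with boto3. The cluster ID and capacity limits are placeholders.

```python
import boto3

# Illustrative only: enabling managed auto-scaling on an existing EMR
# cluster. The cluster ID and capacity limits are placeholders.
emr = boto3.client("emr", region_name="us-east-1")

emr.put_managed_scaling_policy(
    ClusterId="j-EXAMPLECLUSTER",
    ManagedScalingPolicy={
        "ComputeLimits": {
            "UnitType": "Instances",
            "MinimumCapacityUnits": 2,   # floor the cluster never shrinks below
            "MaximumCapacityUnits": 20,  # ceiling it can grow to under load
        }
    },
)
```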
Improvements
- Managing your event stores - The event stores created to house apps and datasets used to be a black box: only the event store's configuration was visible, and it wasn't easy to see which apps and datasets were utilizing an event store after its creation.
The Event Stores screen has been overhauled so that you can now do exactly that: easily view all the datasets in an event store and see exactly which apps are using it.
- Smoothing out the user experience - They may be small individually, but little improvements add up to make a big difference in the day-to-day enjoyment of the software. We've made a number of such improvements in this version, including:
- Download grids - You can now download any of the grids displaying data throughout the application. Whether it be job activity information, a list of connections, or a dataset preview, all the grids have a download button, making it easy to export the information.
- Copy code processes - If there's an existing code process whose code you'd like to quickly copy or download, you can now do so without needing to unlock the app. Just open the details of the code process and you can copy or download the code right away.
- Faster workflow screens - Keeping you focused on your train of thought, rather than waiting on pages to load, is key to being productive. To keep the train moving smoothly, we've sped up the loading of an app's workflow canvas, especially for large, complicated apps; sped up the saving of the workflow canvas; and improved the scroll speed of the grids throughout the application.
- Adobe Event Enrich auto-configure - Configuring the Adobe Event Enrich process has been streamlined. Mapping the fields from the raw input into the enriched dataset can be time-consuming without the auto-fill functionality. We've added additional validations, options controlling the auto-fill behavior, and the ability to edit the fields so the mapping is exactly the way you want.
- Runtime setup validations - To eliminate common mistakes and wasted time, we've added validations to the creation of Spark runtimes to ensure you're set up correctly from the get-go, instead of finding out only when trying to use the runtime in a job.