Multi-Pipeline Orchestration - One of our most highly requested features: advanced pipeline orchestration coordinates execution across multiple pipelines based on time or on the completion or failure of specific previous steps. A job can now be triggered by the completion of other jobs or processes, by a schedule, or by a combination of both.
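Completion-based triggering can be thought of as executing jobs in topological order over a dependency graph. Below is a minimal, purely illustrative sketch using Python's standard `graphlib`; the job names and the dictionary shape are hypothetical and not Syntasa's actual configuration format.

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline dependencies: each job maps to the set of jobs
# that must complete before it can start (names are illustrative).
deps = {
    "load_events": set(),
    "enrich": {"load_events"},
    "report": {"enrich"},
    "export": {"enrich"},
}

# static_order() yields a valid execution order: every job appears
# only after all of its prerequisites.
order = list(TopologicalSorter(deps).static_order())
```

Here `load_events` always runs first, `enrich` follows it, and `report` and `export` may run in either order (or in parallel) once `enrich` completes.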
Interactive Mode - Our new interactive mode combines the real-time feedback of an interactive notebook with the reliability of a canvas. Set up your data and cluster and you’re good to go. Get results faster and prevent mistakes from compounding down the line, all while building in a production-ready environment. The interactive mode adds the ability to start and leave a cluster up while building or editing an app, eliminating time wasted waiting for a cluster to spin up before each and every job run.
Updated Look and Feel - We’ve done some redecorating to streamline and improve your user experience. We redesigned our navigation to help you keep track of where you are in the app at any given time. Menus are now grouped together more logically to make it easier for you to find exactly what you need. You can also favorite apps so that they’re always front and center in your interface, and even add custom icons to really make them your own.
Favorite Apps - Users can now star apps to mark them as favorites. Favorited apps appear in a new app group, Favorites, for quick access, so you no longer need to constantly filter through your and everybody else's apps.
Custom Icons - Users can now upload custom icons for apps and processes, giving you a quick visual reminder of what each app or process represents.
App Activity - You no longer have to leave the workflow canvas, where you build and test your app, to check a job's status. The new App Activity side panel lets you check the status and progress of a process you have just kicked off, directly from the workflow canvas.
SQL support for Spark Processor - The Spark Processor process type now supports SQL as a code language, in addition to the existing support for Python, Scala, and R.
Adobe Audit process email notification - You can now add email recipients who will receive mail when the process completes. Note: the environment needs SMTP set up for this functionality to operate.
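To illustrate why SMTP setup is required, here is a minimal sketch of what a completion notification amounts to under the hood, using Python's standard `email` and `smtplib` modules. The function name, sender address, and message wording are all hypothetical; the platform controls the actual notification format.

```python
import smtplib
from email.message import EmailMessage

def build_completion_email(process_name, status, recipients,
                           sender="noreply@example.com"):
    """Build a hypothetical process-completion notification message."""
    msg = EmailMessage()
    msg["Subject"] = f"Process '{process_name}' finished: {status}"
    msg["From"] = sender
    msg["To"] = ", ".join(recipients)
    msg.set_content(
        f"The process '{process_name}' completed with status: {status}."
    )
    return msg

# Actually sending the message needs a reachable SMTP server,
# which is why the environment must have SMTP configured:
#
# with smtplib.SMTP("smtp.example.com", 587) as s:
#     s.starttls()
#     s.send_message(build_completion_email(
#         "Adobe Audit", "SUCCESS", ["analyst@example.com"]))
```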
Jupyter Notebook from Spark Runtime - Spark runtimes now have a "Jupyter Notebook" option. When it is enabled, the job activity log displays a link to the Jupyter Notebook.
Improvements
Product Assistant - With our improved product assistant, you’ll get feedback earlier and more often as you’re using Syntasa. This real-time feedback helps avoid common mistakes and saves time by alerting you if you’re creating a process that might produce unexpected results. Improved and added validations:
Users are now warned and prevented from deleting a runtime that is currently used by one or more jobs.
Improved validations on field names for process types that include mappings.
Runtime templates are no longer allowed to have a configuration row with no value.
Runtime Dependencies - Ever wonder how many apps and jobs are using that big cluster you spun up to test something out? The new Dependencies screen in Runtimes management shows all apps, and their associated jobs, that are configured to use the runtime you are interested in.
Skip header in From File process - The From File process type now has a "Contains Header" option that, when enabled, skips the first row of the file.
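The effect of the option is the familiar header-skip pattern when reading a delimited file. A minimal sketch in plain Python (the function and flag name mirror the feature but are illustrative, not Syntasa's implementation):

```python
import csv
import io

def read_rows(f, contains_header=False):
    """Read delimited rows; when contains_header is set, skip row one."""
    reader = csv.reader(f)
    if contains_header:
        next(reader, None)  # discard the header row
    return list(reader)

data = io.StringIO("name,clicks\nhome,10\ncheckout,3\n")
rows = read_rows(data, contains_header=True)
# rows == [["home", "10"], ["checkout", "3"]]
```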
Process import/export format change - The import/export functionality available within the Mapping section of a process's parameters now uses the xlsx format instead of csv.
Job schedule in local timezone - Jobs can now be scheduled in a local timezone. This eliminates shifts in job run times when daylight saving time changes occur.
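Why this matters: a schedule pinned to a local timezone keeps the same wall-clock time across a DST transition, while the underlying UTC offset changes. A small sketch with Python's standard `zoneinfo` (the 02:30 daily job and the America/New_York zone are just an example; the US spring-forward in 2024 was March 10):

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

ny = ZoneInfo("America/New_York")

# A daily 02:30 job scheduled in local time, the day before and the
# day after the 2024 spring-forward transition.
before = datetime(2024, 3, 9, 2, 30, tzinfo=ny)   # EST, UTC-5
after = datetime(2024, 3, 11, 2, 30, tzinfo=ny)   # EDT, UTC-4

# Wall-clock time stays 02:30 local; the UTC offset absorbs the shift.
# A schedule fixed in UTC would instead drift an hour in local time.
```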
Lookback adds Lookback Lag option - The Lookback process type now has a Lookback Lag parameter: a number of days added to the Lookback Window Length to simulate a delay in data. This is similar to the existing Lookahead Lag.
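One way to picture the lag: it pushes the end of the lookback window back from the run date before the window length is applied, so the window reaches length + lag days into the past. The sketch below is purely illustrative of that reading; the platform's exact window semantics may differ.

```python
from datetime import date, timedelta

def lookback_window(run_date, window_length_days, lag_days=0):
    """Illustrative only: end the window lag_days before the run date,
    then extend it back window_length_days."""
    end = run_date - timedelta(days=lag_days)
    start = end - timedelta(days=window_length_days)
    return start, end

# A 7-day window with a 2-day lag, run on 2024-06-30:
start, end = lookback_window(date(2024, 6, 30),
                             window_length_days=7, lag_days=2)
# end   == date(2024, 6, 28)  (run date minus the 2-day lag)
# start == date(2024, 6, 21)  (7 days before the lagged end)
```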