What Spark defaults are we overriding (if any) during cluster instantiation?
I've heard that when we instantiate a Spark cluster, we may be applying settings that override the stock Spark defaults. Are these overrides documented anywhere? I'm most interested in EMR at this point, but I'll also need to know whether they differ from environment to environment.
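For context, one way I could imagine checking this myself: EMR typically writes its overrides into `/etc/spark/conf/spark-defaults.conf` on the master node (an assumption worth verifying for a given release), and you can also dump the effective settings from a running session with `sc.getConf().getAll()`. Here's a minimal sketch that parses the `spark-defaults.conf` format (key and value separated by whitespace) into a dict so it could be diffed against stock defaults; the sample values are hypothetical:

```python
def parse_spark_defaults(text):
    """Parse spark-defaults.conf content into a dict.

    Each non-comment line is a property key, whitespace, then a value.
    """
    conf = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        parts = line.split(None, 1)  # split on first run of whitespace
        key = parts[0]
        value = parts[1].strip() if len(parts) > 1 else ""
        conf[key] = value
    return conf


# Hypothetical overrides, as they might appear on an EMR master node in
# /etc/spark/conf/spark-defaults.conf (values are illustrative only).
sample = """
# Example cluster overrides
spark.driver.memory      2g
spark.executor.memory    4g
spark.dynamicAllocation.enabled true
"""

print(parse_spark_defaults(sample))
```

Comparing the resulting dict against the defaults table in the Spark configuration docs would show exactly which properties the cluster setup is overriding.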