Apache Spark — Job monitoring

For the most part, the primary reason we use Spark is to achieve optimal runtime performance on big data workloads. Given that motivation, how do we actually verify that performance meets our expectations, or see what is going on at the level of individual tasks? We can do that …