Get the attempt ID for this run, if the cluster manager supports multiple attempts.
Get the attempt ID for this run, if the cluster manager supports multiple attempts. Applications run in client mode will not have attempt IDs.
The application attempt id, if available.
Overriding the Spark app ID function to provide a Snappy-specific app ID.
Overriding the Spark app ID function to provide a Snappy-specific app ID.
An application ID
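For illustration only, a minimal driver-side sketch (the local master and app name are placeholders) that reads the resulting application ID and the attempt ID through the public SparkContext accessors; as noted above, the attempt ID is absent in client or local mode.

```scala
import org.apache.spark.{SparkConf, SparkContext}

object AppIdDemo {
  def main(args: Array[String]): Unit = {
    // Placeholder configuration; any supported cluster manager can be substituted here.
    val conf = new SparkConf().setMaster("local[2]").setAppName("app-id-demo")
    val sc = new SparkContext(conf)

    // The ID produced by the scheduler backend's applicationId() function.
    println(s"application id: ${sc.applicationId}")

    // None in client/local mode; defined when the cluster manager supports attempts.
    println(s"attempt id: ${sc.applicationAttemptId.getOrElse("<none>")}")

    sc.stop()
  }
}
```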
Kill the given list of executors through the cluster manager.
Kill the given list of executors through the cluster manager.
whether the kill request is acknowledged.
Request executors from the cluster manager by specifying the total number desired, including existing pending and running executors.
Request executors from the cluster manager by specifying the total number desired, including existing pending and running executors.
The semantics here guarantee that we do not over-allocate executors for this application, since a later request overrides the value of any prior request. The alternative interface of requesting a delta of executors risks double counting new executors when there are insufficient resources to satisfy the first request. We make the assumption here that the cluster manager will eventually fulfill all requests when resources free up.
a future whose evaluation indicates whether the request is acknowledged.
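The "total, not delta" semantics can be sketched from user code through the @DeveloperApi `requestTotalExecutors` on SparkContext, which forwards to this backend path. Assuming `sc` is an active SparkContext on a cluster manager that supports executor requests (e.g. YARN); the executor counts are arbitrary:

```scala
// Ask for a total of 10 executors (counting pending and running ones).
sc.requestTotalExecutors(10, 0, Map.empty[String, Int])

// A later request replaces the earlier target: the cluster manager now
// aims for 4 executors in total, not 10 + 4.
sc.requestTotalExecutors(4, 0, Map.empty[String, Int])
```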
Get the URLs for the driver logs.
Get the URLs for the driver logs. These URLs are used to display the links in the UI Executors tab for the driver.
Map containing the log names and their respective URLs
Get the list of currently active executors
Get the list of currently active executors
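The method above is internal to the backend; as a driver-side approximation, a SparkListener can track active executor IDs from executor add/remove events. A sketch, assuming the listener is registered via the @DeveloperApi `sc.addSparkListener`:

```scala
import scala.collection.mutable
import org.apache.spark.scheduler.{SparkListener, SparkListenerExecutorAdded, SparkListenerExecutorRemoved}

// Maintains a driver-side view of the currently active executor IDs.
class ActiveExecutorTracker extends SparkListener {
  private val active = mutable.Set.empty[String]

  override def onExecutorAdded(event: SparkListenerExecutorAdded): Unit =
    active.synchronized { active += event.executorId }

  override def onExecutorRemoved(event: SparkListenerExecutorRemoved): Unit =
    active.synchronized { active -= event.executorId }

  def executorIds: Set[String] = active.synchronized(active.toSet)
}

// Usage: val tracker = new ActiveExecutorTracker; sc.addSparkListener(tracker)
```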
Request that the cluster manager kill the specified executor.
Request that the cluster manager kill the specified executor.
whether the request is acknowledged by the cluster manager.
Request that the cluster manager kill the specified executors.
Request that the cluster manager kill the specified executors.
When asking the executor to be replaced, the executor loss is considered a failure, and killed tasks that are running on the executor will count towards the failure limits. If no replacement is being requested, then the tasks will not count towards the limit.
identifiers of executors to kill
whether to replace the killed executors with new ones
whether to force kill busy executors
whether the kill request is acknowledged. If the list of executors to kill is empty, this returns false.
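The replace/force parameters are used internally (for example by dynamic allocation); from application code, the nearest public entry points are the @DeveloperApi kill methods on SparkContext, which do not ask for replacements. Assuming `sc` is an active SparkContext and the executor IDs are placeholders:

```scala
// Kill a single executor; the result indicates whether the cluster
// manager acknowledged the request.
val killedOne: Boolean = sc.killExecutor("3")

// Kill several executors with one request; an empty list yields false.
val killedMany: Boolean = sc.killExecutors(Seq("4", "7"))
```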
Request that the cluster manager kill the specified executors.
Request that the cluster manager kill the specified executors.
whether the kill request is acknowledged. If the list of executors to kill is empty, this returns false.
Called by subclasses when notified of a lost worker.
Called by subclasses when notified of a lost worker. It just fires the message and returns at once.
Request an additional number of executors from the cluster manager.
Request an additional number of executors from the cluster manager.
whether the request is acknowledged.
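A sketch of the delta-based request through the corresponding @DeveloperApi on SparkContext, assuming `sc` is an active SparkContext and dynamic allocation is not managing the same target; as discussed above, the total-based request avoids the double-counting risk of repeated deltas:

```scala
// Ask the cluster manager for 2 executors in addition to whatever is
// already pending or running; returns whether the request was acknowledged.
val acked: Boolean = sc.requestExecutors(2)
```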
Update the cluster manager on our scheduling needs.
Update the cluster manager on our scheduling needs. Three bits of information are included to help it make decisions.
The total number of executors we'd like to have. The cluster manager shouldn't kill any running executor to reach this number, but, if all existing executors were to die, this is the number of executors we'd want to be allocated.
The number of tasks in all active stages that have locality preferences. This includes running, pending, and completed tasks.
A map of hosts to the number of tasks from all active stages that would like to run on that host. This includes running, pending, and completed tasks.
whether the request is acknowledged by the cluster manager.
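A sketch of supplying all three pieces of information through the @DeveloperApi on SparkContext; the hostnames and task counts below are invented for illustration:

```scala
// 12 tasks across the active stages have locality preferences, and they
// would prefer to run on these two hosts.
val hostToLocalTaskCount = Map("node-1.example.com" -> 7, "node-2.example.com" -> 5)

// Target 8 executors in total, passing the locality hints along.
val acked: Boolean = sc.requestTotalExecutors(8, 12, hostToLocalTaskCount)
```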
Reset the state of CoarseGrainedSchedulerBackend to the initial state.
Reset the state of CoarseGrainedSchedulerBackend to the initial state. Currently it is only called in yarn-client mode, when the AM re-registers after a failure.