Can we assign more memory to a Spark application than the cluster itself has?
Say we have a standalone Spark cluster: the master has 1 GB of memory and the slave (worker) has 1 GB of memory.
When submitting an application to the cluster, we can specify how much memory the driver program and the worker (executor) processes should get. Is it possible to specify a higher value, say 10 GB for the driver and 10 GB for the workers?
In other words, what happens if the program we submit requires more memory than the cluster itself has? (Let's assume the physical machines do have enough memory.)
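For reference, these per-application amounts are the spark.driver.memory and spark.executor.memory settings (the same values spark-submit's --driver-memory and --executor-memory flags control). A minimal Scala sketch of requesting them in code; the 10g values, app name, and master URL are illustrative and mirror the hypothetical numbers in the question:

import org.apache.spark.sql.SparkSession

object MemoryRequestExample {
  def main(args: Array[String]): Unit = {
    // Ask the standalone master for more memory per executor than the 1 GB
    // workers in the question actually have. Note: spark.driver.memory only
    // takes effect if set before the driver JVM starts, so in practice it is
    // usually passed via spark-submit --driver-memory rather than in code.
    val spark = SparkSession.builder()
      .appName("memory-request-example")      // illustrative name
      .master("spark://master-host:7077")     // hypothetical standalone master URL
      .config("spark.executor.memory", "10g")
      .config("spark.driver.memory", "10g")
      .getOrCreate()

    spark.range(1000000L).selectExpr("sum(id) as total").show()
    spark.stop()
  }
}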
Spark has a feature called "dynamic allocation". It can be turned on with:
spark.dynamicAllocation.enabled = true
More details here: http://www.slideshare.net/databricks/dynamic-allocation-in-spark
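As a rough sketch (not taken from the linked slides), here is how that setting is typically supplied from Scala, for example pasted into spark-shell. The app name and executor bounds are illustrative; the example also assumes the external shuffle service is running on each worker, which dynamic allocation relies on for safely releasing executors:

import org.apache.spark.sql.SparkSession

// Enable dynamic allocation so the number of executors grows and shrinks
// with the workload instead of being fixed at submit time.
val spark = SparkSession.builder()
  .appName("dynamic-allocation-example")                // illustrative name
  .config("spark.dynamicAllocation.enabled", "true")
  .config("spark.dynamicAllocation.minExecutors", "1")  // illustrative bounds
  .config("spark.dynamicAllocation.maxExecutors", "4")
  .config("spark.shuffle.service.enabled", "true")      // needed so released executors' shuffle data survives
  .getOrCreate()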