Can we assign more memory to the Spark app than the cluster itself has?


Say we have a Spark standalone cluster: the master has 1 GB of memory and the slave has 1 GB of memory.

When submitting an application to the cluster, we can specify how much memory the driver program and the worker (executor) processes get. Is it possible to specify a higher value, say 10 GB for the driver and 10 GB for the workers?

In other words, what happens if the submitted program requires more memory than the cluster itself has? (Let's assume the physical machines have enough memory.)
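For reference, this is roughly how those memory sizes would be requested at submit time. The master URL, application class, and jar name below are placeholders, not values from the question:

spark-submit \
  --master spark://master-host:7077 \
  --driver-memory 10g \
  --executor-memory 10g \
  --class com.example.MyApp \
  myapp.jar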

Spark has a feature called "dynamic allocation". It can be turned on using:

spark.dynamicAllocation.enabled = true

More details here: http://www.slideshare.net/databricks/dynamic-allocation-in-spark
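A minimal sketch of how this could be passed at submit time on a standalone cluster. Note that dynamic allocation also expects the external shuffle service to be enabled; the min/max executor counts, master URL, class, and jar name are illustrative assumptions, not part of the original answer:

spark-submit \
  --master spark://master-host:7077 \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.shuffle.service.enabled=true \
  --conf spark.dynamicAllocation.minExecutors=1 \
  --conf spark.dynamicAllocation.maxExecutors=4 \
  --class com.example.MyApp \
  myapp.jar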

