The Spark cluster you use runs with Dynamic Allocation enabled by default (https://spark.apache.org/docs/latest/configuration.html#dynamic-allocation), combined with “Min Resources Queues”: each user is guaranteed at least 3 executors, and the rest is opportunistic.
This means the spark.executor.instances parameter is only used for the initial burst on start; after that, Spark scales the number of executors up and down according to its own estimate of your needs (based on the dynamics of your tasks). This ensures you use resources efficiently and scale down when you are idle.
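For illustration, here is a minimal Scala sketch of the configuration involved (the app name, executor count, and idle timeout are assumptions for the example, not your cluster's actual settings):

    import org.apache.spark.sql.SparkSession

    // With dynamic allocation on, spark.executor.instances only sets the initial burst;
    // the executor count then floats between minExecutors and maxExecutors.
    val spark = SparkSession.builder()
      .appName("dynamic-allocation-demo")                           // hypothetical app name
      .config("spark.dynamicAllocation.enabled", "true")            // already the cluster default
      .config("spark.executor.instances", "8")                      // initial burst only
      .config("spark.dynamicAllocation.minExecutors", "3")          // matches the queue guarantee
      .config("spark.dynamicAllocation.executorIdleTimeout", "60s") // release executors when idle
      .getOrCreate()

    // Sanity check: confirm dynamic allocation is actually on.
    println(spark.conf.get("spark.dynamicAllocation.enabled"))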
Set the default resources available to the executor
We usually recommend that you don't change these defaults. For development, use dynamic allocation with a low minimum. Once your workload is ready and you want to scale up, you can raise spark.dynamicAllocation.minExecutors, e.g. to 10 executors (if you know you will need 40 cores, then at 4 cores per executor that is 10 executors).
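As a sketch of that scale-up step (the 4-cores-per-executor size and the maxExecutors cap are assumptions, chosen to make the 40 cores -> 10 executors arithmetic explicit):

    // Scale-up sketch: 40 cores at 4 cores per executor -> a floor of 10 executors.
    val prodSpark = SparkSession.builder()
      .appName("my-production-job")                          // hypothetical app name
      .config("spark.executor.cores", "4")                   // assumed executor size
      .config("spark.dynamicAllocation.minExecutors", "10")  // 40 cores / 4 cores per executor
      .config("spark.dynamicAllocation.maxExecutors", "20")  // optional cap; an assumption
      .getOrCreate()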
Please note, however, that “Min Resources Queues” will penalise you (kill your executors) if other users need resources and you are not running with the default dynamic allocation.