driver core: Attach devices on CPU local to device node

Call the asynchronous probe routines on a CPU local to the device node. By
doing this we should be able to improve our initialization time
significantly, since we avoid having to access the device from a remote
node, which may introduce higher latency.

For example, in the case of initializing memory for an NVDIMM this can have
a significant impact, as initializing 3TB on a remote node can take up to 39
seconds while initializing it on a local node takes only 23 seconds. It is
situations like this where we will see the biggest improvement.
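
For reference, async_schedule_dev() is provided by the preceding patches in
this series and is essentially a node-aware front end to the async machinery:
it queues the work against the NUMA node the device is attached to rather
than whatever CPU happens to schedule it. A minimal sketch of the intended
behaviour follows (the authoritative definition lives in include/linux/async.h;
treat this as an illustration, not the exact implementation):

#include <linux/async.h>
#include <linux/device.h>

/*
 * Sketch only: queue @func for @dev on a CPU belonging to the device's
 * NUMA node.  dev_to_node() returns NUMA_NO_NODE for devices without
 * node affinity, in which case scheduling behaves like async_schedule().
 */
static inline async_cookie_t
async_schedule_dev(async_func_t func, struct device *dev)
{
	return async_schedule_node(func, dev, dev_to_node(dev));
}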

Reviewed-by: Dan Williams <dan.j.williams@intel.com>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Alexander Duyck <alexander.h.duyck@linux.intel.com>
Reviewed-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit c37e20eaf4
parent 6be9238e5c
Author:    Alexander Duyck <alexander.h.duyck@linux.intel.com>
Date:      2019-01-22 10:39:37 -08:00
Committer: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

--- a/drivers/base/dd.c
+++ b/drivers/base/dd.c
@@ -829,7 +829,7 @@ static int __device_attach(struct device *dev, bool allow_async)
 			 */
 			dev_dbg(dev, "scheduling asynchronous probe\n");
 			get_device(dev);
-			async_schedule(__device_attach_async_helper, dev);
+			async_schedule_dev(__device_attach_async_helper, dev);
 		} else {
 			pm_request_idle(dev);
 		}
@@ -989,7 +989,7 @@ static int __driver_attach(struct device *dev, void *data)
 		if (!dev->driver) {
 			get_device(dev);
 			dev->p->async_driver = drv;
-			async_schedule(__driver_attach_async_helper, dev);
+			async_schedule_dev(__driver_attach_async_helper, dev);
 		}
 		device_unlock(dev);
 		return 0;
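
As a usage sketch (the example_* names below are illustrative and not part of
this patch), a subsystem that schedules its own long-running asynchronous
initialization could follow the same pattern so the heavy work runs on a CPU
close to the device's memory:

#include <linux/async.h>
#include <linux/device.h>

/* Hypothetical async body; not taken from this patch. */
static void example_init_async(void *data, async_cookie_t cookie)
{
	struct device *dev = data;

	/* Heavy setup now executes on a CPU local to dev's NUMA node. */
	dev_dbg(dev, "async init on node %d\n", dev_to_node(dev));
	put_device(dev);
}

static void example_kick_init(struct device *dev)
{
	get_device(dev);			/* dropped in example_init_async() */
	async_schedule_dev(example_init_async, dev);
}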