forked from luck/tmp_suning_uos_patched

Merge branch 'pm-assorted'

* pm-assorted:
  PM / QoS: Add pm_qos and dev_pm_qos to events-power.txt
  PM / QoS: Add dev_pm_qos_request tracepoints
  PM / QoS: Add pm_qos_request tracepoints
  PM / QoS: Add pm_qos_update_target/flags tracepoints
  PM / QoS: Update Documentation/power/pm_qos_interface.txt
  PM / Sleep: Print last wakeup source on failed wakeup_count write
  PM / QoS: correct the valid range of pm_qos_class
  PM / wakeup: Adjust messaging for wake events during suspend
  PM / Runtime: Update .runtime_idle() callback documentation
  PM / Runtime: Rework the "runtime idle" helper routine
  PM / Hibernate: print physical addresses consistently with other parts of kernel

commit e52cff8bdd
@@ -7,7 +7,7 @@ one of the parameters.
 Two different PM QoS frameworks are available:
 1. PM QoS classes for cpu_dma_latency, network_latency, network_throughput.
 2. the per-device PM QoS framework provides the API to manage the per-device latency
-constraints.
+constraints and PM QoS flags.
 
 Each parameters have defined units:
 * latency: usec
@@ -86,13 +86,17 @@ To remove the user mode request for a target value simply close the device
 node.
 
 
-2. PM QoS per-device latency framework
+2. PM QoS per-device latency and flags framework
 
-For each device a list of performance requests is maintained along with
-an aggregated target value. The aggregated target value is updated with
-changes to the request list or elements of the list. Typically the
-aggregated target value is simply the max or min of the request values held
-in the parameter list elements.
+For each device, there are two lists of PM QoS requests. One is maintained
+along with the aggregated target of latency value and the other is for PM QoS
+flags. Values are updated in response to changes of the request list.
+
+Target latency value is simply the minimum of the request values held in the
+parameter list elements. The PM QoS flags aggregate value is a gather (bitwise
+OR) of all list elements' values. Two device PM QoS flags are defined currently:
+PM_QOS_FLAG_NO_POWER_OFF and PM_QOS_FLAG_REMOTE_WAKEUP.
 
 Note: the aggregated target value is implemented as an atomic variable so that
 reading the aggregated value does not require any locking mechanism.
@@ -119,6 +123,38 @@ the request.
 s32 dev_pm_qos_read_value(device):
 Returns the aggregated value for a given device's constraints list.
 
+enum pm_qos_flags_status dev_pm_qos_flags(device, mask)
+Check PM QoS flags of the given device against the given mask of flags.
+The meaning of the return values is as follows:
+	PM_QOS_FLAGS_ALL: All flags from the mask are set
+	PM_QOS_FLAGS_SOME: Some flags from the mask are set
+	PM_QOS_FLAGS_NONE: No flags from the mask are set
+	PM_QOS_FLAGS_UNDEFINED: The device's PM QoS structure has not been
+		initialized or the list of requests is empty.
+
+int dev_pm_qos_add_ancestor_request(dev, handle, value)
+Add a PM QoS request for the first direct ancestor of the given device whose
+power.ignore_children flag is unset.
+
+int dev_pm_qos_expose_latency_limit(device, value)
+Add a request to the device's PM QoS list of latency constraints and create
+a sysfs attribute pm_qos_resume_latency_us under the device's power directory
+allowing user space to manipulate that request.
+
+void dev_pm_qos_hide_latency_limit(device)
+Drop the request added by dev_pm_qos_expose_latency_limit() from the device's
+PM QoS list of latency constraints and remove sysfs attribute pm_qos_resume_latency_us
+from the device's power directory.
+
+int dev_pm_qos_expose_flags(device, value)
+Add a request to the device's PM QoS list of flags and create sysfs attributes
+pm_qos_no_power_off and pm_qos_remote_wakeup under the device's power directory
+allowing user space to change these flags' value.
+
+void dev_pm_qos_hide_flags(device)
+Drop the request added by dev_pm_qos_expose_flags() from the device's PM QoS list
+of flags and remove sysfs attributes pm_qos_no_power_off and pm_qos_remote_wakeup
+under the device's power directory.
+
 Notification mechanisms:
 The per-device PM QoS framework has 2 different and distinct notification trees:
 
@@ -144,8 +144,12 @@ The action performed by the idle callback is totally dependent on the subsystem
 (or driver) in question, but the expected and recommended action is to check
 if the device can be suspended (i.e. if all of the conditions necessary for
 suspending the device are satisfied) and to queue up a suspend request for the
-device in that case. The value returned by this callback is ignored by the PM
-core.
+device in that case. If there is no idle callback, or if the callback returns
+0, then the PM core will attempt to carry out a runtime suspend of the device;
+in essence, it will call pm_runtime_suspend() directly. To prevent this (for
+example, if the callback routine has started a delayed suspend), the routine
+should return a non-zero value. Negative error return codes are ignored by the
+PM core.
 
 The helper functions provided by the PM core, described in Section 4, guarantee
 that the following constraints are met with respect to runtime PM callbacks for
@@ -301,9 +305,10 @@ drivers/base/power/runtime.c and include/linux/pm_runtime.h:
 	removing the device from device hierarchy
 
 int pm_runtime_idle(struct device *dev);
-	- execute the subsystem-level idle callback for the device; returns 0 on
-	  success or error code on failure, where -EINPROGRESS means that
-	  ->runtime_idle() is already being executed
+	- execute the subsystem-level idle callback for the device; returns an
+	  error code on failure, where -EINPROGRESS means that ->runtime_idle() is
+	  already being executed; if there is no callback or the callback returns 0
+	  then run pm_runtime_suspend(dev) and return its result
 
 int pm_runtime_suspend(struct device *dev);
 	- execute the subsystem-level suspend callback for the device; returns 0 on
@@ -660,11 +665,6 @@ Subsystems may wish to conserve code space by using the set of generic power
 management callbacks provided by the PM core, defined in
 driver/base/power/generic_ops.c:
 
-int pm_generic_runtime_idle(struct device *dev);
-	- invoke the ->runtime_idle() callback provided by the driver of this
-	  device, if defined, and call pm_runtime_suspend() for this device if the
-	  return value is 0 or the callback is not defined
-
 int pm_generic_runtime_suspend(struct device *dev);
 	- invoke the ->runtime_suspend() callback provided by the driver of this
 	  device and return its result, or return -EINVAL if not defined
@@ -63,3 +63,34 @@ power_domain_target "%s state=%lu cpu_id=%lu"
 The first parameter gives the power domain name (e.g. "mpu_pwrdm").
 The second parameter is the power domain target state.
+
+4. PM QoS events
+================
+The PM QoS events are used for QoS add/update/remove request and for
+target/flags update.
+
+pm_qos_add_request                 "pm_qos_class=%s value=%d"
+pm_qos_update_request              "pm_qos_class=%s value=%d"
+pm_qos_remove_request              "pm_qos_class=%s value=%d"
+pm_qos_update_request_timeout      "pm_qos_class=%s value=%d, timeout_us=%ld"
+
+The first parameter gives the QoS class name (e.g. "CPU_DMA_LATENCY").
+The second parameter is value to be added/updated/removed.
+The third parameter is timeout value in usec.
+
+pm_qos_update_target               "action=%s prev_value=%d curr_value=%d"
+pm_qos_update_flags                "action=%s prev_value=0x%x curr_value=0x%x"
+
+The first parameter gives the QoS action name (e.g. "ADD_REQ").
+The second parameter is the previous QoS value.
+The third parameter is the current QoS value to update.
+
+And, there are also events used for device PM QoS add/update/remove request.
+
+dev_pm_qos_add_request             "device=%s type=%s new_value=%d"
+dev_pm_qos_update_request          "device=%s type=%s new_value=%d"
+dev_pm_qos_remove_request          "device=%s type=%s new_value=%d"
+
+The first parameter gives the device name which tries to add/update/remove
+QoS requests.
+The second parameter gives the request type (e.g. "DEV_PM_QOS_LATENCY").
+The third parameter is value to be added/updated/removed.
@@ -591,11 +591,6 @@ static int _od_runtime_suspend(struct device *dev)
 	return ret;
 }
 
-static int _od_runtime_idle(struct device *dev)
-{
-	return pm_generic_runtime_idle(dev);
-}
-
 static int _od_runtime_resume(struct device *dev)
 {
 	struct platform_device *pdev = to_platform_device(dev);
@@ -653,7 +648,7 @@ static int _od_resume_noirq(struct device *dev)
 struct dev_pm_domain omap_device_pm_domain = {
 	.ops = {
 		SET_RUNTIME_PM_OPS(_od_runtime_suspend, _od_runtime_resume,
-				   _od_runtime_idle)
+				   NULL)
 		USE_PLATFORM_PM_SLEEP_OPS
 		.suspend_noirq = _od_suspend_noirq,
 		.resume_noirq = _od_resume_noirq,
@@ -933,7 +933,6 @@ static struct dev_pm_domain acpi_general_pm_domain = {
 #ifdef CONFIG_PM_RUNTIME
 	.runtime_suspend = acpi_subsys_runtime_suspend,
 	.runtime_resume = acpi_subsys_runtime_resume,
-	.runtime_idle = pm_generic_runtime_idle,
 #endif
 #ifdef CONFIG_PM_SLEEP
 	.prepare = acpi_subsys_prepare,
@@ -284,7 +284,7 @@ static const struct dev_pm_ops amba_pm = {
 	SET_RUNTIME_PM_OPS(
 		amba_pm_runtime_suspend,
 		amba_pm_runtime_resume,
-		pm_generic_runtime_idle
+		NULL
 	)
 };
 
@@ -5436,7 +5436,7 @@ static int ata_port_runtime_idle(struct device *dev)
 			return -EBUSY;
 	}
 
-	return pm_runtime_suspend(dev);
+	return 0;
 }
 
 static int ata_port_runtime_suspend(struct device *dev)
@@ -888,7 +888,6 @@ int platform_pm_restore(struct device *dev)
 static const struct dev_pm_ops platform_dev_pm_ops = {
 	.runtime_suspend = pm_generic_runtime_suspend,
 	.runtime_resume = pm_generic_runtime_resume,
-	.runtime_idle = pm_generic_runtime_idle,
 	USE_PLATFORM_PM_SLEEP_OPS
 };
 
@@ -2143,7 +2143,6 @@ void pm_genpd_init(struct generic_pm_domain *genpd,
 	genpd->max_off_time_changed = true;
 	genpd->domain.ops.runtime_suspend = pm_genpd_runtime_suspend;
 	genpd->domain.ops.runtime_resume = pm_genpd_runtime_resume;
-	genpd->domain.ops.runtime_idle = pm_generic_runtime_idle;
 	genpd->domain.ops.prepare = pm_genpd_prepare;
 	genpd->domain.ops.suspend = pm_genpd_suspend;
 	genpd->domain.ops.suspend_late = pm_genpd_suspend_late;
@@ -11,29 +11,6 @@
 #include <linux/export.h>
 
 #ifdef CONFIG_PM_RUNTIME
-/**
- * pm_generic_runtime_idle - Generic runtime idle callback for subsystems.
- * @dev: Device to handle.
- *
- * If PM operations are defined for the @dev's driver and they include
- * ->runtime_idle(), execute it and return its error code, if nonzero.
- * Otherwise, execute pm_runtime_suspend() for the device and return 0.
- */
-int pm_generic_runtime_idle(struct device *dev)
-{
-	const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
-
-	if (pm && pm->runtime_idle) {
-		int ret = pm->runtime_idle(dev);
-		if (ret)
-			return ret;
-	}
-
-	pm_runtime_suspend(dev);
-	return 0;
-}
-EXPORT_SYMBOL_GPL(pm_generic_runtime_idle);
-
 /**
  * pm_generic_runtime_suspend - Generic runtime suspend callback for subsystems.
  * @dev: Device to suspend.
@@ -42,6 +42,7 @@
 #include <linux/export.h>
 #include <linux/pm_runtime.h>
 #include <linux/err.h>
+#include <trace/events/power.h>
 
 #include "power.h"
 
@@ -305,6 +306,7 @@ int dev_pm_qos_add_request(struct device *dev, struct dev_pm_qos_request *req,
 	else if (!dev->power.qos)
 		ret = dev_pm_qos_constraints_allocate(dev);
 
+	trace_dev_pm_qos_add_request(dev_name(dev), type, value);
 	if (!ret) {
 		req->dev = dev;
 		req->type = type;
@@ -349,6 +351,8 @@ static int __dev_pm_qos_update_request(struct dev_pm_qos_request *req,
 		return -EINVAL;
 	}
 
+	trace_dev_pm_qos_update_request(dev_name(req->dev), req->type,
+					new_value);
 	if (curr_value != new_value)
 		ret = apply_constraint(req, PM_QOS_UPDATE_REQ, new_value);
 
@@ -398,6 +402,8 @@ static int __dev_pm_qos_remove_request(struct dev_pm_qos_request *req)
 	if (IS_ERR_OR_NULL(req->dev->power.qos))
 		return -ENODEV;
 
+	trace_dev_pm_qos_remove_request(dev_name(req->dev), req->type,
+					PM_QOS_DEFAULT_VALUE);
 	ret = apply_constraint(req, PM_QOS_REMOVE_REQ, PM_QOS_DEFAULT_VALUE);
 	memset(req, 0, sizeof(*req));
 	return ret;
@@ -293,11 +293,8 @@ static int rpm_idle(struct device *dev, int rpmflags)
 	/* Pending requests need to be canceled. */
 	dev->power.request = RPM_REQ_NONE;
 
-	if (dev->power.no_callbacks) {
-		/* Assume ->runtime_idle() callback would have suspended. */
-		retval = rpm_suspend(dev, rpmflags);
+	if (dev->power.no_callbacks)
 		goto out;
-	}
 
 	/* Carry out an asynchronous or a synchronous idle notification. */
 	if (rpmflags & RPM_ASYNC) {
@@ -306,7 +303,8 @@ static int rpm_idle(struct device *dev, int rpmflags)
 			dev->power.request_pending = true;
 			queue_work(pm_wq, &dev->power.work);
 		}
-		goto out;
+		trace_rpm_return_int(dev, _THIS_IP_, 0);
+		return 0;
 	}
 
 	dev->power.idle_notification = true;
@@ -326,14 +324,14 @@ static int rpm_idle(struct device *dev, int rpmflags)
 			callback = dev->driver->pm->runtime_idle;
 
 	if (callback)
-		__rpm_callback(callback, dev);
+		retval = __rpm_callback(callback, dev);
 
 	dev->power.idle_notification = false;
 	wake_up_all(&dev->power.wait_queue);
 
  out:
 	trace_rpm_return_int(dev, _THIS_IP_, retval);
-	return retval;
+	return retval ? retval : rpm_suspend(dev, rpmflags);
 }
 
 /**
@@ -659,7 +659,7 @@ void pm_wakeup_event(struct device *dev, unsigned int msec)
 }
 EXPORT_SYMBOL_GPL(pm_wakeup_event);
 
-static void print_active_wakeup_sources(void)
+void pm_print_active_wakeup_sources(void)
 {
 	struct wakeup_source *ws;
 	int active = 0;
@@ -683,6 +683,7 @@ static void print_active_wakeup_sources(void)
 			last_activity_ws->name);
 	rcu_read_unlock();
 }
+EXPORT_SYMBOL_GPL(pm_print_active_wakeup_sources);
 
 /**
  * pm_wakeup_pending - Check if power transition in progress should be aborted.
@@ -707,8 +708,10 @@ bool pm_wakeup_pending(void)
 	}
 	spin_unlock_irqrestore(&events_lock, flags);
 
-	if (ret)
-		print_active_wakeup_sources();
+	if (ret) {
+		pr_info("PM: Wakeup pending, aborting suspend\n");
+		pm_print_active_wakeup_sources();
+	}
 
 	return ret;
 }
@@ -1405,7 +1405,7 @@ static int dma_runtime_idle(struct device *dev)
 			return -EAGAIN;
 	}
 
-	return pm_schedule_suspend(dev, 0);
+	return 0;
 }
 
 /******************************************************************************
@@ -305,11 +305,7 @@ static const struct irq_domain_ops lnw_gpio_irq_ops = {
 
 static int lnw_gpio_runtime_idle(struct device *dev)
 {
-	int err = pm_schedule_suspend(dev, 500);
-
-	if (!err)
-		return 0;
-
+	pm_schedule_suspend(dev, 500);
 	return -EBUSY;
 }
 
@@ -435,7 +435,7 @@ static const struct dev_pm_ops i2c_device_pm_ops = {
 	SET_RUNTIME_PM_OPS(
 		pm_generic_runtime_suspend,
 		pm_generic_runtime_resume,
-		pm_generic_runtime_idle
+		NULL
 	)
 };
 
@@ -886,12 +886,6 @@ static int ab8500_gpadc_runtime_resume(struct device *dev)
 	return ret;
 }
 
-static int ab8500_gpadc_runtime_idle(struct device *dev)
-{
-	pm_runtime_suspend(dev);
-	return 0;
-}
-
 static int ab8500_gpadc_suspend(struct device *dev)
 {
 	struct ab8500_gpadc *gpadc = dev_get_drvdata(dev);
@@ -1039,7 +1033,7 @@ static int ab8500_gpadc_remove(struct platform_device *pdev)
 static const struct dev_pm_ops ab8500_gpadc_pm_ops = {
 	SET_RUNTIME_PM_OPS(ab8500_gpadc_runtime_suspend,
 			   ab8500_gpadc_runtime_resume,
-			   ab8500_gpadc_runtime_idle)
+			   NULL)
 	SET_SYSTEM_SLEEP_PM_OPS(ab8500_gpadc_suspend,
 				ab8500_gpadc_resume)
 
@@ -164,7 +164,7 @@ static int mmc_runtime_resume(struct device *dev)
 
 static int mmc_runtime_idle(struct device *dev)
 {
-	return pm_runtime_suspend(dev);
+	return 0;
 }
 
 #endif /* !CONFIG_PM_RUNTIME */
@@ -211,7 +211,7 @@ static const struct dev_pm_ops sdio_bus_pm_ops = {
 	SET_RUNTIME_PM_OPS(
 		pm_generic_runtime_suspend,
 		pm_generic_runtime_resume,
-		pm_generic_runtime_idle
+		NULL
 	)
 };
 
@@ -1050,26 +1050,22 @@ static int pci_pm_runtime_idle(struct device *dev)
 {
 	struct pci_dev *pci_dev = to_pci_dev(dev);
 	const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
+	int ret = 0;
 
 	/*
 	 * If pci_dev->driver is not set (unbound), the device should
 	 * always remain in D0 regardless of the runtime PM status
 	 */
 	if (!pci_dev->driver)
-		goto out;
+		return 0;
 
 	if (!pm)
 		return -ENOSYS;
 
-	if (pm->runtime_idle) {
-		int ret = pm->runtime_idle(dev);
-		if (ret)
-			return ret;
-	}
+	if (pm->runtime_idle)
+		ret = pm->runtime_idle(dev);
 
- out:
-	pm_runtime_suspend(dev);
-	return 0;
+	return ret;
 }
 
 #else /* !CONFIG_PM_RUNTIME */
@@ -229,8 +229,6 @@ static int scsi_runtime_resume(struct device *dev)
 
 static int scsi_runtime_idle(struct device *dev)
 {
-	int err;
-
 	dev_dbg(dev, "scsi_runtime_idle\n");
 
 	/* Insert hooks here for targets, hosts, and transport classes */
@@ -240,14 +238,11 @@ static int scsi_runtime_idle(struct device *dev)
 
 		if (sdev->request_queue->dev) {
 			pm_runtime_mark_last_busy(dev);
-			err = pm_runtime_autosuspend(dev);
-		} else {
-			err = pm_runtime_suspend(dev);
+			pm_runtime_autosuspend(dev);
+			return -EBUSY;
 		}
-	} else {
-		err = pm_runtime_suspend(dev);
 	}
-	return err;
+	return 0;
 }
 
 int scsi_autopm_get_device(struct scsi_device *sdev)
@@ -25,7 +25,7 @@
 static int default_platform_runtime_idle(struct device *dev)
 {
 	/* suspend synchronously to disable clocks immediately */
-	return pm_runtime_suspend(dev);
+	return 0;
 }
 
 static struct dev_pm_domain default_pm_domain = {
@@ -223,7 +223,7 @@ static const struct dev_pm_ops spi_pm = {
 	SET_RUNTIME_PM_OPS(
 		pm_generic_runtime_suspend,
 		pm_generic_runtime_resume,
-		pm_generic_runtime_idle
+		NULL
 	)
 };
 
@@ -1248,13 +1248,8 @@ static int serial_hsu_resume(struct pci_dev *pdev)
 #ifdef CONFIG_PM_RUNTIME
 static int serial_hsu_runtime_idle(struct device *dev)
 {
-	int err;
-
-	err = pm_schedule_suspend(dev, 500);
-	if (err)
-		return -EBUSY;
-
-	return 0;
+	pm_schedule_suspend(dev, 500);
+	return -EBUSY;
 }
 
 static int serial_hsu_runtime_suspend(struct device *dev)
@@ -1765,7 +1765,8 @@ int usb_runtime_idle(struct device *dev)
 	 */
 	if (autosuspend_check(udev) == 0)
 		pm_runtime_autosuspend(dev);
-	return 0;
+	/* Tell the core not to suspend it, though. */
+	return -EBUSY;
 }
 
 int usb_set_usb2_hardware_lpm(struct usb_device *udev, int enable)
@@ -141,7 +141,6 @@ static const struct dev_pm_ops usb_port_pm_ops = {
 #ifdef CONFIG_PM_RUNTIME
 	.runtime_suspend = usb_port_runtime_suspend,
 	.runtime_resume = usb_port_runtime_resume,
-	.runtime_idle = pm_generic_runtime_idle,
 #endif
 };
 
@@ -37,7 +37,6 @@ extern void pm_runtime_enable(struct device *dev);
 extern void __pm_runtime_disable(struct device *dev, bool check_resume);
 extern void pm_runtime_allow(struct device *dev);
 extern void pm_runtime_forbid(struct device *dev);
-extern int pm_generic_runtime_idle(struct device *dev);
 extern int pm_generic_runtime_suspend(struct device *dev);
 extern int pm_generic_runtime_resume(struct device *dev);
 extern void pm_runtime_no_callbacks(struct device *dev);
@@ -143,7 +142,6 @@ static inline bool pm_runtime_active(struct device *dev) { return true; }
 static inline bool pm_runtime_status_suspended(struct device *dev) { return false; }
 static inline bool pm_runtime_enabled(struct device *dev) { return false; }
 
-static inline int pm_generic_runtime_idle(struct device *dev) { return 0; }
 static inline int pm_generic_runtime_suspend(struct device *dev) { return 0; }
 static inline int pm_generic_runtime_resume(struct device *dev) { return 0; }
 static inline void pm_runtime_no_callbacks(struct device *dev) {}
@@ -363,6 +363,7 @@ extern bool pm_wakeup_pending(void);
 extern bool pm_get_wakeup_count(unsigned int *count, bool block);
 extern bool pm_save_wakeup_count(unsigned int count);
 extern void pm_wakep_autosleep_enabled(bool set);
+extern void pm_print_active_wakeup_sources(void);
 
 static inline void lock_system_sleep(void)
 {
@@ -5,6 +5,7 @@
 #define _TRACE_POWER_H
 
 #include <linux/ktime.h>
+#include <linux/pm_qos.h>
 #include <linux/tracepoint.h>
 
 DECLARE_EVENT_CLASS(cpu,
@@ -177,6 +178,178 @@ DEFINE_EVENT(power_domain, power_domain_target,
 
 	TP_ARGS(name, state, cpu_id)
 );
+
+/*
+ * The pm qos events are used for pm qos update
+ */
+DECLARE_EVENT_CLASS(pm_qos_request,
+
+	TP_PROTO(int pm_qos_class, s32 value),
+
+	TP_ARGS(pm_qos_class, value),
+
+	TP_STRUCT__entry(
+		__field( int, pm_qos_class )
+		__field( s32, value )
+	),
+
+	TP_fast_assign(
+		__entry->pm_qos_class = pm_qos_class;
+		__entry->value = value;
+	),
+
+	TP_printk("pm_qos_class=%s value=%d",
+		  __print_symbolic(__entry->pm_qos_class,
+			{ PM_QOS_CPU_DMA_LATENCY, "CPU_DMA_LATENCY" },
+			{ PM_QOS_NETWORK_LATENCY, "NETWORK_LATENCY" },
+			{ PM_QOS_NETWORK_THROUGHPUT, "NETWORK_THROUGHPUT" }),
+		  __entry->value)
+);
+
+DEFINE_EVENT(pm_qos_request, pm_qos_add_request,
+
+	TP_PROTO(int pm_qos_class, s32 value),
+
+	TP_ARGS(pm_qos_class, value)
+);
+
+DEFINE_EVENT(pm_qos_request, pm_qos_update_request,
+
+	TP_PROTO(int pm_qos_class, s32 value),
+
+	TP_ARGS(pm_qos_class, value)
+);
+
+DEFINE_EVENT(pm_qos_request, pm_qos_remove_request,
+
+	TP_PROTO(int pm_qos_class, s32 value),
+
+	TP_ARGS(pm_qos_class, value)
+);
+
+TRACE_EVENT(pm_qos_update_request_timeout,
+
+	TP_PROTO(int pm_qos_class, s32 value, unsigned long timeout_us),
+
+	TP_ARGS(pm_qos_class, value, timeout_us),
+
+	TP_STRUCT__entry(
+		__field( int, pm_qos_class )
+		__field( s32, value )
+		__field( unsigned long, timeout_us )
+	),
+
+	TP_fast_assign(
+		__entry->pm_qos_class = pm_qos_class;
+		__entry->value = value;
+		__entry->timeout_us = timeout_us;
+	),
+
+	TP_printk("pm_qos_class=%s value=%d, timeout_us=%ld",
+		  __print_symbolic(__entry->pm_qos_class,
+			{ PM_QOS_CPU_DMA_LATENCY, "CPU_DMA_LATENCY" },
+			{ PM_QOS_NETWORK_LATENCY, "NETWORK_LATENCY" },
+			{ PM_QOS_NETWORK_THROUGHPUT, "NETWORK_THROUGHPUT" }),
+		  __entry->value, __entry->timeout_us)
+);
+
+DECLARE_EVENT_CLASS(pm_qos_update,
+
+	TP_PROTO(enum pm_qos_req_action action, int prev_value, int curr_value),
+
+	TP_ARGS(action, prev_value, curr_value),
+
+	TP_STRUCT__entry(
+		__field( enum pm_qos_req_action, action )
+		__field( int, prev_value )
+		__field( int, curr_value )
+	),
+
+	TP_fast_assign(
+		__entry->action = action;
+		__entry->prev_value = prev_value;
+		__entry->curr_value = curr_value;
+	),
+
+	TP_printk("action=%s prev_value=%d curr_value=%d",
+		  __print_symbolic(__entry->action,
+			{ PM_QOS_ADD_REQ, "ADD_REQ" },
+			{ PM_QOS_UPDATE_REQ, "UPDATE_REQ" },
+			{ PM_QOS_REMOVE_REQ, "REMOVE_REQ" }),
+		  __entry->prev_value, __entry->curr_value)
);
+
+DEFINE_EVENT(pm_qos_update, pm_qos_update_target,
+
+	TP_PROTO(enum pm_qos_req_action action, int prev_value, int curr_value),
+
+	TP_ARGS(action, prev_value, curr_value)
+);
+
+DEFINE_EVENT_PRINT(pm_qos_update, pm_qos_update_flags,
+
+	TP_PROTO(enum pm_qos_req_action action, int prev_value, int curr_value),
+
+	TP_ARGS(action, prev_value, curr_value),
+
+	TP_printk("action=%s prev_value=0x%x curr_value=0x%x",
+		  __print_symbolic(__entry->action,
+			{ PM_QOS_ADD_REQ, "ADD_REQ" },
+			{ PM_QOS_UPDATE_REQ, "UPDATE_REQ" },
+			{ PM_QOS_REMOVE_REQ, "REMOVE_REQ" }),
+		  __entry->prev_value, __entry->curr_value)
+);
+
+DECLARE_EVENT_CLASS(dev_pm_qos_request,
+
+	TP_PROTO(const char *name, enum dev_pm_qos_req_type type,
+		 s32 new_value),
+
+	TP_ARGS(name, type, new_value),
+
+	TP_STRUCT__entry(
+		__string( name, name )
+		__field( enum dev_pm_qos_req_type, type )
+		__field( s32, new_value )
+	),
+
+	TP_fast_assign(
+		__assign_str(name, name);
+		__entry->type = type;
+		__entry->new_value = new_value;
+	),
+
+	TP_printk("device=%s type=%s new_value=%d",
+		  __get_str(name),
+		  __print_symbolic(__entry->type,
+			{ DEV_PM_QOS_LATENCY, "DEV_PM_QOS_LATENCY" },
+			{ DEV_PM_QOS_FLAGS, "DEV_PM_QOS_FLAGS" }),
+		  __entry->new_value)
+);
+
+DEFINE_EVENT(dev_pm_qos_request, dev_pm_qos_add_request,
+
+	TP_PROTO(const char *name, enum dev_pm_qos_req_type type,
+		 s32 new_value),
+
+	TP_ARGS(name, type, new_value)
+);
+
+DEFINE_EVENT(dev_pm_qos_request, dev_pm_qos_update_request,
+
+	TP_PROTO(const char *name, enum dev_pm_qos_req_type type,
+		 s32 new_value),
+
+	TP_ARGS(name, type, new_value)
+);
+
+DEFINE_EVENT(dev_pm_qos_request, dev_pm_qos_remove_request,
+
+	TP_PROTO(const char *name, enum dev_pm_qos_req_type type,
+		 s32 new_value),
+
+	TP_ARGS(name, type, new_value)
+);
 #endif /* _TRACE_POWER_H */
 
 /* This part must be outside protection */
@@ -424,6 +424,8 @@ static ssize_t wakeup_count_store(struct kobject *kobj,
 	if (sscanf(buf, "%u", &val) == 1) {
 		if (pm_save_wakeup_count(val))
 			error = n;
+		else
+			pm_print_active_wakeup_sources();
 	}
 
  out:
@@ -44,6 +44,7 @@
 
 #include <linux/uaccess.h>
 #include <linux/export.h>
+#include <trace/events/power.h>
 
 /*
  * locking rule: all changes to constraints or notifiers lists
@@ -202,6 +203,7 @@ int pm_qos_update_target(struct pm_qos_constraints *c, struct plist_node *node,
 
 	spin_unlock_irqrestore(&pm_qos_lock, flags);
 
+	trace_pm_qos_update_target(action, prev_value, curr_value);
 	if (prev_value != curr_value) {
 		blocking_notifier_call_chain(c->notifiers,
 					     (unsigned long)curr_value,
@@ -272,6 +274,7 @@ bool pm_qos_update_flags(struct pm_qos_flags *pqf,
 
 	spin_unlock_irqrestore(&pm_qos_lock, irqflags);
 
+	trace_pm_qos_update_flags(action, prev_value, curr_value);
 	return prev_value != curr_value;
 }
 
@@ -333,6 +336,7 @@ void pm_qos_add_request(struct pm_qos_request *req,
 	}
 	req->pm_qos_class = pm_qos_class;
 	INIT_DELAYED_WORK(&req->work, pm_qos_work_fn);
+	trace_pm_qos_add_request(pm_qos_class, value);
 	pm_qos_update_target(pm_qos_array[pm_qos_class]->constraints,
 			     &req->node, PM_QOS_ADD_REQ, value);
 }
@@ -361,6 +365,7 @@ void pm_qos_update_request(struct pm_qos_request *req,
 
 	cancel_delayed_work_sync(&req->work);
 
+	trace_pm_qos_update_request(req->pm_qos_class, new_value);
 	if (new_value != req->node.prio)
 		pm_qos_update_target(
 			pm_qos_array[req->pm_qos_class]->constraints,
@@ -387,6 +392,8 @@ void pm_qos_update_request_timeout(struct pm_qos_request *req, s32 new_value,
 
 	cancel_delayed_work_sync(&req->work);
 
+	trace_pm_qos_update_request_timeout(req->pm_qos_class,
+					    new_value, timeout_us);
 	if (new_value != req->node.prio)
 		pm_qos_update_target(
 			pm_qos_array[req->pm_qos_class]->constraints,
@@ -416,6 +423,7 @@ void pm_qos_remove_request(struct pm_qos_request *req)
 
 	cancel_delayed_work_sync(&req->work);
 
+	trace_pm_qos_remove_request(req->pm_qos_class, PM_QOS_DEFAULT_VALUE);
 	pm_qos_update_target(pm_qos_array[req->pm_qos_class]->constraints,
 			     &req->node, PM_QOS_REMOVE_REQ,
 			     PM_QOS_DEFAULT_VALUE);
@@ -477,7 +485,7 @@ static int find_pm_qos_object_by_minor(int minor)
 {
 	int pm_qos_class;
 
-	for (pm_qos_class = 0;
+	for (pm_qos_class = PM_QOS_CPU_DMA_LATENCY;
 	     pm_qos_class < PM_QOS_NUM_CLASSES; pm_qos_class++) {
 		if (minor ==
 			pm_qos_array[pm_qos_class]->pm_qos_power_miscdev.minor)
@@ -491,7 +499,7 @@ static int pm_qos_power_open(struct inode *inode, struct file *filp)
 	long pm_qos_class;
 
 	pm_qos_class = find_pm_qos_object_by_minor(iminor(inode));
-	if (pm_qos_class >= 0) {
+	if (pm_qos_class >= PM_QOS_CPU_DMA_LATENCY) {
 		struct pm_qos_request *req = kzalloc(sizeof(*req), GFP_KERNEL);
 		if (!req)
 			return -ENOMEM;
@@ -584,7 +592,7 @@ static int __init pm_qos_power_init(void)
 
 	BUILD_BUG_ON(ARRAY_SIZE(pm_qos_array) != PM_QOS_NUM_CLASSES);
 
-	for (i = 1; i < PM_QOS_NUM_CLASSES; i++) {
+	for (i = PM_QOS_CPU_DMA_LATENCY; i < PM_QOS_NUM_CLASSES; i++) {
 		ret = register_pm_qos_misc(pm_qos_array[i]);
 		if (ret < 0) {
 			printk(KERN_ERR "pm_qos_param: %s setup failed\n",
@@ -642,8 +642,9 @@ __register_nosave_region(unsigned long start_pfn, unsigned long end_pfn,
 	region->end_pfn = end_pfn;
 	list_add_tail(&region->list, &nosave_regions);
  Report:
-	printk(KERN_INFO "PM: Registered nosave memory: %016lx - %016lx\n",
-		start_pfn << PAGE_SHIFT, end_pfn << PAGE_SHIFT);
+	printk(KERN_INFO "PM: Registered nosave memory: [mem %#010llx-%#010llx]\n",
+		(unsigned long long) start_pfn << PAGE_SHIFT,
+		((unsigned long long) end_pfn << PAGE_SHIFT) - 1);
 }
 
 /*
@@ -269,7 +269,7 @@ int suspend_devices_and_enter(suspend_state_t state)
 	suspend_test_start();
 	error = dpm_suspend_start(PMSG_SUSPEND);
 	if (error) {
-		printk(KERN_ERR "PM: Some devices failed to suspend\n");
+		pr_err("PM: Some devices failed to suspend, or early wake event detected\n");
 		goto Recover_platform;
 	}
 	suspend_test_finish("suspend devices");