Merge branches 'pm-qos', 'pm-domains' and 'pm-drivers'
* pm-qos:
  PM / QoS: Add type to dev_pm_qos_add_ancestor_request() arguments
  ACPI / LPSS: Support for device latency tolerance PM QoS
  ACPI / scan: Add bind/unbind callbacks to struct acpi_scan_handler
  PM / QoS: Introduce latency tolerance device PM QoS type
  PM / QoS: Add no_constraint_value field to struct pm_qos_constraints
  PM / QoS: Rename device resume latency QoS items

* pm-domains:
  PM / domains: Turn latency warning into debug message

* pm-drivers:
  PM: Add pm_runtime_suspend|resume_force functions
  PM / runtime: Fetch runtime PM callbacks using a macro
commit 165f5fd04a
@@ -187,7 +187,7 @@ Description:
 		Not all drivers support this attribute. If it isn't supported,
 		attempts to read or write it will yield I/O errors.

-What:		/sys/devices/.../power/pm_qos_latency_us
+What:		/sys/devices/.../power/pm_qos_resume_latency_us
 Date:		March 2012
 Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
 Description:
@@ -205,6 +205,31 @@ Description:
 		This attribute has no effect on system-wide suspend/resume and
 		hibernation.

+What:		/sys/devices/.../power/pm_qos_latency_tolerance_us
+Date:		January 2014
+Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
+Description:
+		The /sys/devices/.../power/pm_qos_latency_tolerance_us attribute
+		contains the PM QoS active state latency tolerance limit for the
+		given device in microseconds. That is the maximum memory access
+		latency the device can suffer without any visible adverse
+		effects on user space functionality. If that value is the
+		string "any", the latency does not matter to user space at all,
+		but hardware should not be allowed to set the latency tolerance
+		for the device automatically.
+
+		Reading "auto" from this file means that the maximum memory
+		access latency for the device may be determined automatically
+		by the hardware as needed. Writing "auto" to it allows the
+		hardware to be switched to this mode if there are no other
+		latency tolerance requirements from the kernel side.
+
+		This attribute is only present if the feature controlled by it
+		is supported by the hardware.
+
+		This attribute has no effect on runtime suspend and resume of
+		devices and on system-wide suspend/resume and hibernation.
+
 What:		/sys/devices/.../power/pm_qos_no_power_off
 Date:		September 2012
 Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
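
For illustration only, a minimal user-space sketch (not part of this commit) of driving the new attribute from C; the sysfs path is a placeholder for a real device directory:

    /* Hypothetical example: program, relax, then automate the tolerance. */
    #include <stdio.h>

    int main(void)
    {
            /* Placeholder path; substitute an actual device. */
            const char *attr = "/sys/devices/.../power/pm_qos_latency_tolerance_us";
            FILE *f = fopen(attr, "w");

            if (!f)
                    return 1;
            fprintf(f, "100\n");    /* tolerate at most 100 us */
            /* "any\n"  -> no requirement, but no hardware autonomy */
            /* "auto\n" -> hand control back to the hardware       */
            return fclose(f) ? 1 : 0;
    }
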
@@ -88,17 +88,19 @@ node.

 2. PM QoS per-device latency and flags framework

-For each device, there are two lists of PM QoS requests. One is maintained
-along with the aggregated target of latency value and the other is for PM QoS
-flags. Values are updated in response to changes of the request list.
+For each device, there are three lists of PM QoS requests. Two of them are
+maintained along with the aggregated targets of resume latency and active
+state latency tolerance (in microseconds) and the third one is for PM QoS flags.
+Values are updated in response to changes of the request list.

-Target latency value is simply the minimum of the request values held in the
-parameter list elements. The PM QoS flags aggregate value is a gather (bitwise
-OR) of all list elements' values. Two device PM QoS flags are defined currently:
-PM_QOS_FLAG_NO_POWER_OFF and PM_QOS_FLAG_REMOTE_WAKEUP.
+The target values of resume latency and active state latency tolerance are
+simply the minimum of the request values held in the parameter list elements.
+The PM QoS flags aggregate value is a gather (bitwise OR) of all list elements'
+values. Two device PM QoS flags are defined currently: PM_QOS_FLAG_NO_POWER_OFF
+and PM_QOS_FLAG_REMOTE_WAKEUP.

-Note: the aggregated target value is implemented as an atomic variable so that
-reading the aggregated value does not require any locking mechanism.
+Note: The aggregated target values are implemented in such a way that reading
+the aggregated value does not require any locking mechanism.


 From kernel mode the use of this interface is the following:
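
As a concrete illustration of the aggregation rule (an editor's sketch, not taken from the patch): with PM_QOS_MIN, the target is simply the smallest value currently requested.

    /* Sketch: two requests constrain one device's resume latency. */
    #include <linux/pm_qos.h>

    static struct dev_pm_qos_request req_a, req_b;

    static void resume_latency_example(struct device *dev)
    {
            dev_pm_qos_add_request(dev, &req_a, DEV_PM_QOS_RESUME_LATENCY, 200);
            dev_pm_qos_add_request(dev, &req_b, DEV_PM_QOS_RESUME_LATENCY, 50);
            /* Aggregated target: min(200, 50) = 50 us. */
            dev_pm_qos_update_request(&req_a, 20);
            /* Aggregated target: min(20, 50) = 20 us. */
            dev_pm_qos_remove_request(&req_a);
            dev_pm_qos_remove_request(&req_b);
            /* List empty again: the target falls back to the no-constraint value. */
    }
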
@@ -132,19 +134,21 @@ The meaning of the return values is as follows:
 PM_QOS_FLAGS_UNDEFINED: The device's PM QoS structure has not been
 initialized or the list of requests is empty.

-int dev_pm_qos_add_ancestor_request(dev, handle, value)
+int dev_pm_qos_add_ancestor_request(dev, handle, type, value)
 Add a PM QoS request for the first direct ancestor of the given device whose
-power.ignore_children flag is unset.
+power.ignore_children flag is unset (for DEV_PM_QOS_RESUME_LATENCY requests)
+or whose power.set_latency_tolerance callback pointer is not NULL (for
+DEV_PM_QOS_LATENCY_TOLERANCE requests).

 int dev_pm_qos_expose_latency_limit(device, value)
-Add a request to the device's PM QoS list of latency constraints and create
-a sysfs attribute pm_qos_resume_latency_us under the device's power directory
-allowing user space to manipulate that request.
+Add a request to the device's PM QoS list of resume latency constraints and
+create a sysfs attribute pm_qos_resume_latency_us under the device's power
+directory allowing user space to manipulate that request.

 void dev_pm_qos_hide_latency_limit(device)
 Drop the request added by dev_pm_qos_expose_latency_limit() from the device's
-PM QoS list of latency constraints and remove sysfs attribute pm_qos_resume_latency_us
-from the device's power directory.
+PM QoS list of resume latency constraints and remove sysfs attribute
+pm_qos_resume_latency_us from the device's power directory.

 int dev_pm_qos_expose_flags(device, value)
 Add a request to the device's PM QoS list of flags and create sysfs attributes
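
A sketch of a caller using the extended signature (assumed driver context, mirroring the st1232 change later in this commit):

    #include <linux/pm_qos.h>

    static struct dev_pm_qos_request ancestor_req;

    static void ancestor_example(struct device *dev)
    {
            /* Walks up to the first ancestor whose power.ignore_children
             * flag is unset and adds a 100 us resume latency request there.
             */
            dev_pm_qos_add_ancestor_request(dev, &ancestor_req,
                                            DEV_PM_QOS_RESUME_LATENCY, 100);
    }
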
@@ -163,7 +167,7 @@ a per-device notification tree and a global notification tree.
 int dev_pm_qos_add_notifier(device, notifier):
 Adds a notification callback function for the device.
 The callback is called when the aggregated value of the device constraints list
-is changed.
+is changed (for resume latency device PM QoS only).

 int dev_pm_qos_remove_notifier(device, notifier):
 Removes the notification callback function for the device.
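
For example (a sketch, not from the patch), a driver that wants to react to changes of the aggregated resume latency target could register:

    #include <linux/notifier.h>
    #include <linux/pm_qos.h>

    static int foo_qos_notify(struct notifier_block *nb,
                              unsigned long value, void *data)
    {
            /* 'value' is the new aggregated resume latency target, in us. */
            return NOTIFY_OK;
    }

    static struct notifier_block foo_qos_nb = {
            .notifier_call = foo_qos_notify,
    };

    /* ... dev_pm_qos_add_notifier(dev, &foo_qos_nb); and later
     *     dev_pm_qos_remove_notifier(dev, &foo_qos_nb);
     */
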
@@ -171,14 +175,48 @@ Removes the notification callback function for the device.

 int dev_pm_qos_add_global_notifier(notifier):
 Adds a notification callback function in the global notification tree of the
 framework.
-The callback is called when the aggregated value for any device is changed.
+The callback is called when the aggregated value for any device is changed
+(for resume latency device PM QoS only).

 int dev_pm_qos_remove_global_notifier(notifier):
 Removes the notification callback function from the global notification tree
 of the framework.


-From user mode:
-No API for user space access to the per-device latency constraints is provided
-yet - still under discussion.
+Active state latency tolerance
+
+This device PM QoS type is used to support systems in which hardware may switch
+to energy-saving operation modes on the fly. In those systems, if the operation
+mode chosen by the hardware attempts to save energy in an overly aggressive way,
+it may cause excess latencies to be visible to software, causing it to miss
+certain protocol requirements or target frame or sample rates etc.
+
+If there is a latency tolerance control mechanism for a given device available
+to software, the .set_latency_tolerance callback in that device's dev_pm_info
+structure should be populated. The routine pointed to by it should implement
+whatever is necessary to transfer the effective requirement value to the
+hardware.
+
+Whenever the effective latency tolerance changes for the device, its
+.set_latency_tolerance() callback will be executed and the effective value will
+be passed to it. If that value is negative, which means that the list of
+latency tolerance requirements for the device is empty, the callback is expected
+to switch the underlying hardware latency tolerance control mechanism to an
+autonomous mode if available. If that value is PM_QOS_LATENCY_ANY, in turn, and
+the hardware supports a special "no requirement" setting, the callback is
+expected to use it. That allows software to prevent the hardware from
+automatically updating the device's latency tolerance in response to its power
+state changes (e.g. during transitions from D3cold to D0), which generally may
+be done in the autonomous latency tolerance control mode.
+
+If .set_latency_tolerance() is present for the device, sysfs attribute
+pm_qos_latency_tolerance_us will be present in the device's power directory.
+Then, user space can use that attribute to specify its latency tolerance
+requirement for the device, if any. Writing "any" to it means "no requirement,
+but do not let the hardware control latency tolerance" and writing "auto" to it
+allows the hardware to be switched to the autonomous mode if there are no other
+requirements from the kernel side in the device's list.
+
+Kernel code can use the functions described above along with the
+DEV_PM_QOS_LATENCY_TOLERANCE device PM QoS type to add, remove and update
+latency tolerance requirements for devices.
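
A skeleton of such a callback (editor's sketch; the foo_* helpers are hypothetical stand-ins for device-specific register accesses), wired up via dev->power.set_latency_tolerance = foo_set_latency_tolerance:

    static void foo_set_latency_tolerance(struct device *dev, s32 val)
    {
            if (val < 0) {
                    /* Empty request list: let the hardware run autonomously. */
                    foo_enable_auto_ltr(dev);
            } else if (val == PM_QOS_LATENCY_ANY) {
                    /* No requirement, but keep autonomous updates disabled. */
                    foo_write_ltr(dev, FOO_LTR_NO_REQUIREMENT);
            } else {
                    /* Program the effective tolerance, in microseconds. */
                    foo_write_ltr(dev, val);
            }
    }
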
@@ -92,5 +92,5 @@ dev_pm_qos_remove_request "device=%s type=%s new_value=%d"

 The first parameter gives the device name which tries to add/update/remove
 QoS requests.
-The second parameter gives the request type (e.g. "DEV_PM_QOS_LATENCY").
+The second parameter gives the request type (e.g. "DEV_PM_QOS_RESUME_LATENCY").
 The third parameter is value to be added/updated/removed.
@@ -33,6 +33,13 @@ ACPI_MODULE_NAME("acpi_lpss");
 #define LPSS_GENERAL_UART_RTS_OVRD	BIT(3)
 #define LPSS_SW_LTR			0x10
 #define LPSS_AUTO_LTR			0x14
+#define LPSS_LTR_SNOOP_REQ		BIT(15)
+#define LPSS_LTR_SNOOP_MASK		0x0000FFFF
+#define LPSS_LTR_SNOOP_LAT_1US		0x800
+#define LPSS_LTR_SNOOP_LAT_32US		0xC00
+#define LPSS_LTR_SNOOP_LAT_SHIFT	5
+#define LPSS_LTR_SNOOP_LAT_CUTOFF	3000
+#define LPSS_LTR_MAX_VAL		0x3FF
 #define LPSS_TX_INT			0x20
 #define LPSS_TX_INT_MASK		BIT(1)
@@ -326,6 +333,17 @@ static int acpi_lpss_create_device(struct acpi_device *adev,
 	return ret;
 }

+static u32 __lpss_reg_read(struct lpss_private_data *pdata, unsigned int reg)
+{
+	return readl(pdata->mmio_base + pdata->dev_desc->prv_offset + reg);
+}
+
+static void __lpss_reg_write(u32 val, struct lpss_private_data *pdata,
+			     unsigned int reg)
+{
+	writel(val, pdata->mmio_base + pdata->dev_desc->prv_offset + reg);
+}
+
 static int lpss_reg_read(struct device *dev, unsigned int reg, u32 *val)
 {
 	struct acpi_device *adev;
@@ -347,7 +365,7 @@ static int lpss_reg_read(struct device *dev, unsigned int reg, u32 *val)
 		ret = -ENODEV;
 		goto out;
 	}
-	*val = readl(pdata->mmio_base + pdata->dev_desc->prv_offset + reg);
+	*val = __lpss_reg_read(pdata, reg);

  out:
 	spin_unlock_irqrestore(&dev->power.lock, flags);
@@ -400,6 +418,37 @@ static struct attribute_group lpss_attr_group = {
 	.name = "lpss_ltr",
 };

+static void acpi_lpss_set_ltr(struct device *dev, s32 val)
+{
+	struct lpss_private_data *pdata = acpi_driver_data(ACPI_COMPANION(dev));
+	u32 ltr_mode, ltr_val;
+
+	ltr_mode = __lpss_reg_read(pdata, LPSS_GENERAL);
+	if (val < 0) {
+		if (ltr_mode & LPSS_GENERAL_LTR_MODE_SW) {
+			ltr_mode &= ~LPSS_GENERAL_LTR_MODE_SW;
+			__lpss_reg_write(ltr_mode, pdata, LPSS_GENERAL);
+		}
+		return;
+	}
+	ltr_val = __lpss_reg_read(pdata, LPSS_SW_LTR) & ~LPSS_LTR_SNOOP_MASK;
+	if (val >= LPSS_LTR_SNOOP_LAT_CUTOFF) {
+		ltr_val |= LPSS_LTR_SNOOP_LAT_32US;
+		val = LPSS_LTR_MAX_VAL;
+	} else if (val > LPSS_LTR_MAX_VAL) {
+		ltr_val |= LPSS_LTR_SNOOP_LAT_32US | LPSS_LTR_SNOOP_REQ;
+		val >>= LPSS_LTR_SNOOP_LAT_SHIFT;
+	} else {
+		ltr_val |= LPSS_LTR_SNOOP_LAT_1US | LPSS_LTR_SNOOP_REQ;
+	}
+	ltr_val |= val;
+	__lpss_reg_write(ltr_val, pdata, LPSS_SW_LTR);
+	if (!(ltr_mode & LPSS_GENERAL_LTR_MODE_SW)) {
+		ltr_mode |= LPSS_GENERAL_LTR_MODE_SW;
+		__lpss_reg_write(ltr_mode, pdata, LPSS_GENERAL);
+	}
+}
+
 static int acpi_lpss_platform_notify(struct notifier_block *nb,
 				     unsigned long action, void *data)
 {
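
To make the encoding concrete (a reviewer's worked example, not from the patch): the low bits of the SW LTR register hold the latency multiplier and LPSS_LTR_SNOOP_LAT_1US/LPSS_LTR_SNOOP_LAT_32US select its scale, so:

    requested val (us)  branch taken                      programmed value
     800                val <= LPSS_LTR_MAX_VAL           1 us scale, 800
    2000                MAX_VAL < val < CUTOFF            32 us scale, 2000 >> 5 = 62
    5000                val >= LPSS_LTR_SNOOP_LAT_CUTOFF  32 us scale, 0x3FF (saturated)

A negative val simply flips the LTR mode back from software control to automatic.
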
@@ -437,9 +486,29 @@ static struct notifier_block acpi_lpss_nb = {
 	.notifier_call = acpi_lpss_platform_notify,
 };

+static void acpi_lpss_bind(struct device *dev)
+{
+	struct lpss_private_data *pdata = acpi_driver_data(ACPI_COMPANION(dev));
+
+	if (!pdata || !pdata->mmio_base || !pdata->dev_desc->ltr_required)
+		return;
+
+	if (pdata->mmio_size >= pdata->dev_desc->prv_offset + LPSS_LTR_SIZE)
+		dev->power.set_latency_tolerance = acpi_lpss_set_ltr;
+	else
+		dev_err(dev, "MMIO size insufficient to access LTR\n");
+}
+
+static void acpi_lpss_unbind(struct device *dev)
+{
+	dev->power.set_latency_tolerance = NULL;
+}
+
 static struct acpi_scan_handler lpss_handler = {
 	.ids = acpi_lpss_device_ids,
 	.attach = acpi_lpss_create_device,
+	.bind = acpi_lpss_bind,
+	.unbind = acpi_lpss_unbind,
 };

 void __init acpi_lpss_init(void)
@@ -287,6 +287,7 @@ EXPORT_SYMBOL_GPL(acpi_unbind_one);
 static int acpi_platform_notify(struct device *dev)
 {
 	struct acpi_bus_type *type = acpi_get_bus_type(dev);
+	struct acpi_device *adev;
 	int ret;

 	ret = acpi_bind_one(dev, NULL);
@@ -303,9 +304,14 @@ static int acpi_platform_notify(struct device *dev)
 		if (ret)
 			goto out;
 	}
+	adev = ACPI_COMPANION(dev);
+	if (!adev)
+		goto out;
+
 	if (type && type->setup)
 		type->setup(dev);
+	else if (adev->handler && adev->handler->bind)
+		adev->handler->bind(dev);

  out:
 #if ACPI_GLUE_DEBUG
@@ -324,11 +330,17 @@ static int acpi_platform_notify(struct device *dev)

 static int acpi_platform_notify_remove(struct device *dev)
 {
+	struct acpi_device *adev = ACPI_COMPANION(dev);
 	struct acpi_bus_type *type;

+	if (!adev)
+		return 0;
+
 	type = acpi_get_bus_type(dev);
 	if (type && type->cleanup)
 		type->cleanup(dev);
+	else if (adev->handler && adev->handler->unbind)
+		adev->handler->unbind(dev);

 	acpi_unbind_one(dev);
 	return 0;
@@ -2072,13 +2072,14 @@ static int acpi_scan_attach_handler(struct acpi_device *device)

 		handler = acpi_scan_match_handler(hwid->id, &devid);
 		if (handler) {
+			device->handler = handler;
 			ret = handler->attach(device, devid);
-			if (ret > 0) {
-				device->handler = handler;
+			if (ret > 0)
 				break;
-			} else if (ret < 0) {
+
+			device->handler = NULL;
+			if (ret < 0)
 				break;
-			}
 		}
 	}
 	return ret;
@@ -1,6 +1,5 @@
-obj-$(CONFIG_PM)	+= sysfs.o generic_ops.o common.o qos.o
+obj-$(CONFIG_PM)	+= sysfs.o generic_ops.o common.o qos.o runtime.o
 obj-$(CONFIG_PM_SLEEP)	+= main.o wakeup.o
-obj-$(CONFIG_PM_RUNTIME)	+= runtime.o
 obj-$(CONFIG_PM_TRACE_RTC)	+= trace.o
 obj-$(CONFIG_PM_OPP)	+= opp.o
 obj-$(CONFIG_PM_GENERIC_DOMAINS)	+= domain.o domain_governor.o
@@ -42,7 +42,7 @@
 	struct gpd_timing_data *__td = &dev_gpd_data(dev)->td;		\
 	if (!__retval && __elapsed > __td->field) {			\
 		__td->field = __elapsed;				\
-		dev_warn(dev, name " latency exceeded, new value %lld ns\n",	\
+		dev_dbg(dev, name " latency exceeded, new value %lld ns\n",	\
 			__elapsed);					\
 		genpd->max_off_time_changed = true;			\
 		__td->constraint_changed = true;			\
@@ -89,8 +89,8 @@ extern void dpm_sysfs_remove(struct device *dev);
 extern void rpm_sysfs_remove(struct device *dev);
 extern int wakeup_sysfs_add(struct device *dev);
 extern void wakeup_sysfs_remove(struct device *dev);
-extern int pm_qos_sysfs_add_latency(struct device *dev);
-extern void pm_qos_sysfs_remove_latency(struct device *dev);
+extern int pm_qos_sysfs_add_resume_latency(struct device *dev);
+extern void pm_qos_sysfs_remove_resume_latency(struct device *dev);
 extern int pm_qos_sysfs_add_flags(struct device *dev);
 extern void pm_qos_sysfs_remove_flags(struct device *dev);
@@ -105,7 +105,7 @@ EXPORT_SYMBOL_GPL(dev_pm_qos_flags);
 s32 __dev_pm_qos_read_value(struct device *dev)
 {
 	return IS_ERR_OR_NULL(dev->power.qos) ?
-		0 : pm_qos_read_value(&dev->power.qos->latency);
+		0 : pm_qos_read_value(&dev->power.qos->resume_latency);
 }

 /**
@@ -141,16 +141,24 @@ static int apply_constraint(struct dev_pm_qos_request *req,
 	int ret;

 	switch(req->type) {
-	case DEV_PM_QOS_LATENCY:
-		ret = pm_qos_update_target(&qos->latency, &req->data.pnode,
-					   action, value);
+	case DEV_PM_QOS_RESUME_LATENCY:
+		ret = pm_qos_update_target(&qos->resume_latency,
+					   &req->data.pnode, action, value);
 		if (ret) {
-			value = pm_qos_read_value(&qos->latency);
+			value = pm_qos_read_value(&qos->resume_latency);
 			blocking_notifier_call_chain(&dev_pm_notifiers,
 						     (unsigned long)value,
 						     req);
 		}
 		break;
+	case DEV_PM_QOS_LATENCY_TOLERANCE:
+		ret = pm_qos_update_target(&qos->latency_tolerance,
+					   &req->data.pnode, action, value);
+		if (ret) {
+			value = pm_qos_read_value(&qos->latency_tolerance);
+			req->dev->power.set_latency_tolerance(req->dev, value);
+		}
+		break;
 	case DEV_PM_QOS_FLAGS:
 		ret = pm_qos_update_flags(&qos->flags, &req->data.flr,
 					  action, value);
@@ -186,13 +194,21 @@ static int dev_pm_qos_constraints_allocate(struct device *dev)
 	}
 	BLOCKING_INIT_NOTIFIER_HEAD(n);

-	c = &qos->latency;
+	c = &qos->resume_latency;
 	plist_head_init(&c->list);
-	c->target_value = PM_QOS_DEV_LAT_DEFAULT_VALUE;
-	c->default_value = PM_QOS_DEV_LAT_DEFAULT_VALUE;
+	c->target_value = PM_QOS_RESUME_LATENCY_DEFAULT_VALUE;
+	c->default_value = PM_QOS_RESUME_LATENCY_DEFAULT_VALUE;
+	c->no_constraint_value = PM_QOS_RESUME_LATENCY_DEFAULT_VALUE;
 	c->type = PM_QOS_MIN;
 	c->notifiers = n;

+	c = &qos->latency_tolerance;
+	plist_head_init(&c->list);
+	c->target_value = PM_QOS_LATENCY_TOLERANCE_DEFAULT_VALUE;
+	c->default_value = PM_QOS_LATENCY_TOLERANCE_DEFAULT_VALUE;
+	c->no_constraint_value = PM_QOS_LATENCY_TOLERANCE_NO_CONSTRAINT;
+	c->type = PM_QOS_MIN;
+
 	INIT_LIST_HEAD(&qos->flags.list);

 	spin_lock_irq(&dev->power.lock);
@@ -224,7 +240,7 @@ void dev_pm_qos_constraints_destroy(struct device *dev)
 	 * If the device's PM QoS resume latency limit or PM QoS flags have been
 	 * exposed to user space, they have to be hidden at this point.
 	 */
-	pm_qos_sysfs_remove_latency(dev);
+	pm_qos_sysfs_remove_resume_latency(dev);
 	pm_qos_sysfs_remove_flags(dev);

 	mutex_lock(&dev_pm_qos_mtx);
@@ -237,7 +253,7 @@ void dev_pm_qos_constraints_destroy(struct device *dev)
 		goto out;

 	/* Flush the constraints lists for the device. */
-	c = &qos->latency;
+	c = &qos->resume_latency;
 	plist_for_each_entry_safe(req, tmp, &c->list, data.pnode) {
 		/*
 		 * Update constraints list and call the notification
@@ -246,6 +262,11 @@ void dev_pm_qos_constraints_destroy(struct device *dev)
 		apply_constraint(req, PM_QOS_REMOVE_REQ, PM_QOS_DEFAULT_VALUE);
 		memset(req, 0, sizeof(*req));
 	}
+	c = &qos->latency_tolerance;
+	plist_for_each_entry_safe(req, tmp, &c->list, data.pnode) {
+		apply_constraint(req, PM_QOS_REMOVE_REQ, PM_QOS_DEFAULT_VALUE);
+		memset(req, 0, sizeof(*req));
+	}
 	f = &qos->flags;
 	list_for_each_entry_safe(req, tmp, &f->list, data.flr.node) {
 		apply_constraint(req, PM_QOS_REMOVE_REQ, PM_QOS_DEFAULT_VALUE);
@@ -265,6 +286,40 @@ void dev_pm_qos_constraints_destroy(struct device *dev)
 	mutex_unlock(&dev_pm_qos_sysfs_mtx);
 }

+static bool dev_pm_qos_invalid_request(struct device *dev,
+				       struct dev_pm_qos_request *req)
+{
+	return !req || (req->type == DEV_PM_QOS_LATENCY_TOLERANCE
+			&& !dev->power.set_latency_tolerance);
+}
+
+static int __dev_pm_qos_add_request(struct device *dev,
+				    struct dev_pm_qos_request *req,
+				    enum dev_pm_qos_req_type type, s32 value)
+{
+	int ret = 0;
+
+	if (!dev || dev_pm_qos_invalid_request(dev, req))
+		return -EINVAL;
+
+	if (WARN(dev_pm_qos_request_active(req),
+		 "%s() called for already added request\n", __func__))
+		return -EINVAL;
+
+	if (IS_ERR(dev->power.qos))
+		ret = -ENODEV;
+	else if (!dev->power.qos)
+		ret = dev_pm_qos_constraints_allocate(dev);
+
+	trace_dev_pm_qos_add_request(dev_name(dev), type, value);
+	if (!ret) {
+		req->dev = dev;
+		req->type = type;
+		ret = apply_constraint(req, PM_QOS_ADD_REQ, value);
+	}
+	return ret;
+}
+
 /**
 * dev_pm_qos_add_request - inserts new qos request into the list
 * @dev: target device for the constraint
@@ -290,31 +345,11 @@ void dev_pm_qos_constraints_destroy(struct device *dev)
 int dev_pm_qos_add_request(struct device *dev, struct dev_pm_qos_request *req,
 			   enum dev_pm_qos_req_type type, s32 value)
 {
-	int ret = 0;
-
-	if (!dev || !req) /*guard against callers passing in null */
-		return -EINVAL;
-
-	if (WARN(dev_pm_qos_request_active(req),
-		 "%s() called for already added request\n", __func__))
-		return -EINVAL;
+	int ret;

 	mutex_lock(&dev_pm_qos_mtx);
-
-	if (IS_ERR(dev->power.qos))
-		ret = -ENODEV;
-	else if (!dev->power.qos)
-		ret = dev_pm_qos_constraints_allocate(dev);
-
-	trace_dev_pm_qos_add_request(dev_name(dev), type, value);
-	if (!ret) {
-		req->dev = dev;
-		req->type = type;
-		ret = apply_constraint(req, PM_QOS_ADD_REQ, value);
-	}
-
+	ret = __dev_pm_qos_add_request(dev, req, type, value);
 	mutex_unlock(&dev_pm_qos_mtx);
-
 	return ret;
 }
 EXPORT_SYMBOL_GPL(dev_pm_qos_add_request);
@@ -341,7 +376,8 @@ static int __dev_pm_qos_update_request(struct dev_pm_qos_request *req,
 		return -ENODEV;

 	switch(req->type) {
-	case DEV_PM_QOS_LATENCY:
+	case DEV_PM_QOS_RESUME_LATENCY:
+	case DEV_PM_QOS_LATENCY_TOLERANCE:
 		curr_value = req->data.pnode.prio;
 		break;
 	case DEV_PM_QOS_FLAGS:
@@ -460,8 +496,8 @@ int dev_pm_qos_add_notifier(struct device *dev, struct notifier_block *notifier)
 		ret = dev_pm_qos_constraints_allocate(dev);

 	if (!ret)
-		ret = blocking_notifier_chain_register(
-				dev->power.qos->latency.notifiers, notifier);
+		ret = blocking_notifier_chain_register(dev->power.qos->resume_latency.notifiers,
+						       notifier);

 	mutex_unlock(&dev_pm_qos_mtx);
 	return ret;
@@ -487,9 +523,8 @@ int dev_pm_qos_remove_notifier(struct device *dev,

 	/* Silently return if the constraints object is not present. */
 	if (!IS_ERR_OR_NULL(dev->power.qos))
-		retval = blocking_notifier_chain_unregister(
-				dev->power.qos->latency.notifiers,
-				notifier);
+		retval = blocking_notifier_chain_unregister(dev->power.qos->resume_latency.notifiers,
+							    notifier);

 	mutex_unlock(&dev_pm_qos_mtx);
 	return retval;
@@ -530,20 +565,32 @@ EXPORT_SYMBOL_GPL(dev_pm_qos_remove_global_notifier);
 * dev_pm_qos_add_ancestor_request - Add PM QoS request for device's ancestor.
 * @dev: Device whose ancestor to add the request for.
 * @req: Pointer to the preallocated handle.
+ * @type: Type of the request.
 * @value: Constraint latency value.
 */
 int dev_pm_qos_add_ancestor_request(struct device *dev,
-				    struct dev_pm_qos_request *req, s32 value)
+				    struct dev_pm_qos_request *req,
+				    enum dev_pm_qos_req_type type, s32 value)
 {
 	struct device *ancestor = dev->parent;
 	int ret = -ENODEV;

-	while (ancestor && !ancestor->power.ignore_children)
-		ancestor = ancestor->parent;
+	switch (type) {
+	case DEV_PM_QOS_RESUME_LATENCY:
+		while (ancestor && !ancestor->power.ignore_children)
+			ancestor = ancestor->parent;

+		break;
+	case DEV_PM_QOS_LATENCY_TOLERANCE:
+		while (ancestor && !ancestor->power.set_latency_tolerance)
+			ancestor = ancestor->parent;

+		break;
+	default:
+		ancestor = NULL;
+	}
 	if (ancestor)
-		ret = dev_pm_qos_add_request(ancestor, req,
-					     DEV_PM_QOS_LATENCY, value);
+		ret = dev_pm_qos_add_request(ancestor, req, type, value);

 	if (ret < 0)
 		req->dev = NULL;
@@ -559,9 +606,13 @@ static void __dev_pm_qos_drop_user_request(struct device *dev,
 	struct dev_pm_qos_request *req = NULL;

 	switch(type) {
-	case DEV_PM_QOS_LATENCY:
-		req = dev->power.qos->latency_req;
-		dev->power.qos->latency_req = NULL;
+	case DEV_PM_QOS_RESUME_LATENCY:
+		req = dev->power.qos->resume_latency_req;
+		dev->power.qos->resume_latency_req = NULL;
+		break;
+	case DEV_PM_QOS_LATENCY_TOLERANCE:
+		req = dev->power.qos->latency_tolerance_req;
+		dev->power.qos->latency_tolerance_req = NULL;
 		break;
 	case DEV_PM_QOS_FLAGS:
 		req = dev->power.qos->flags_req;
@@ -597,7 +648,7 @@ int dev_pm_qos_expose_latency_limit(struct device *dev, s32 value)
 	if (!req)
 		return -ENOMEM;

-	ret = dev_pm_qos_add_request(dev, req, DEV_PM_QOS_LATENCY, value);
+	ret = dev_pm_qos_add_request(dev, req, DEV_PM_QOS_RESUME_LATENCY, value);
 	if (ret < 0) {
 		kfree(req);
 		return ret;
@@ -609,7 +660,7 @@ int dev_pm_qos_expose_latency_limit(struct device *dev, s32 value)

 	if (IS_ERR_OR_NULL(dev->power.qos))
 		ret = -ENODEV;
-	else if (dev->power.qos->latency_req)
+	else if (dev->power.qos->resume_latency_req)
 		ret = -EEXIST;

 	if (ret < 0) {
@@ -618,13 +669,13 @@ int dev_pm_qos_expose_latency_limit(struct device *dev, s32 value)
 		mutex_unlock(&dev_pm_qos_mtx);
 		goto out;
 	}
-	dev->power.qos->latency_req = req;
+	dev->power.qos->resume_latency_req = req;

 	mutex_unlock(&dev_pm_qos_mtx);

-	ret = pm_qos_sysfs_add_latency(dev);
+	ret = pm_qos_sysfs_add_resume_latency(dev);
 	if (ret)
-		dev_pm_qos_drop_user_request(dev, DEV_PM_QOS_LATENCY);
+		dev_pm_qos_drop_user_request(dev, DEV_PM_QOS_RESUME_LATENCY);

  out:
 	mutex_unlock(&dev_pm_qos_sysfs_mtx);
@@ -634,8 +685,8 @@ EXPORT_SYMBOL_GPL(dev_pm_qos_expose_latency_limit);

 static void __dev_pm_qos_hide_latency_limit(struct device *dev)
 {
-	if (!IS_ERR_OR_NULL(dev->power.qos) && dev->power.qos->latency_req)
-		__dev_pm_qos_drop_user_request(dev, DEV_PM_QOS_LATENCY);
+	if (!IS_ERR_OR_NULL(dev->power.qos) && dev->power.qos->resume_latency_req)
+		__dev_pm_qos_drop_user_request(dev, DEV_PM_QOS_RESUME_LATENCY);
 }

 /**
@@ -646,7 +697,7 @@ void dev_pm_qos_hide_latency_limit(struct device *dev)
 {
 	mutex_lock(&dev_pm_qos_sysfs_mtx);

-	pm_qos_sysfs_remove_latency(dev);
+	pm_qos_sysfs_remove_resume_latency(dev);

 	mutex_lock(&dev_pm_qos_mtx);
 	__dev_pm_qos_hide_latency_limit(dev);
@@ -768,6 +819,67 @@ int dev_pm_qos_update_flags(struct device *dev, s32 mask, bool set)
 	pm_runtime_put(dev);
 	return ret;
 }
+
+/**
+ * dev_pm_qos_get_user_latency_tolerance - Get user space latency tolerance.
+ * @dev: Device to obtain the user space latency tolerance for.
+ */
+s32 dev_pm_qos_get_user_latency_tolerance(struct device *dev)
+{
+	s32 ret;
+
+	mutex_lock(&dev_pm_qos_mtx);
+	ret = IS_ERR_OR_NULL(dev->power.qos)
+		|| !dev->power.qos->latency_tolerance_req ?
+			PM_QOS_LATENCY_TOLERANCE_NO_CONSTRAINT :
+			dev->power.qos->latency_tolerance_req->data.pnode.prio;
+	mutex_unlock(&dev_pm_qos_mtx);
+	return ret;
+}
+
+/**
+ * dev_pm_qos_update_user_latency_tolerance - Update user space latency tolerance.
+ * @dev: Device to update the user space latency tolerance for.
+ * @val: New user space latency tolerance for @dev (negative values disable).
+ */
+int dev_pm_qos_update_user_latency_tolerance(struct device *dev, s32 val)
+{
+	int ret;
+
+	mutex_lock(&dev_pm_qos_mtx);
+
+	if (IS_ERR_OR_NULL(dev->power.qos)
+	    || !dev->power.qos->latency_tolerance_req) {
+		struct dev_pm_qos_request *req;
+
+		if (val < 0) {
+			ret = -EINVAL;
+			goto out;
+		}
+		req = kzalloc(sizeof(*req), GFP_KERNEL);
+		if (!req) {
+			ret = -ENOMEM;
+			goto out;
+		}
+		ret = __dev_pm_qos_add_request(dev, req, DEV_PM_QOS_LATENCY_TOLERANCE, val);
+		if (ret < 0) {
+			kfree(req);
+			goto out;
+		}
+		dev->power.qos->latency_tolerance_req = req;
+	} else {
+		if (val < 0) {
+			__dev_pm_qos_drop_user_request(dev, DEV_PM_QOS_LATENCY_TOLERANCE);
+			ret = 0;
+		} else {
+			ret = __dev_pm_qos_update_request(dev->power.qos->latency_tolerance_req, val);
+		}
+	}
+
+ out:
+	mutex_unlock(&dev_pm_qos_mtx);
+	return ret;
+}
 #else /* !CONFIG_PM_RUNTIME */
 static void __dev_pm_qos_hide_latency_limit(struct device *dev) {}
 static void __dev_pm_qos_hide_flags(struct device *dev) {}
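
The user-space request is thus allocated lazily. A summary sketch of the transitions implemented above:

    write via sysfs   existing request?  action taken                      result
    "100"             no                 __dev_pm_qos_add_request()        request allocated, tolerance 100 us
    "any"             yes                __dev_pm_qos_update_request()     tolerance = PM_QOS_LATENCY_ANY
    "auto"            yes                __dev_pm_qos_drop_user_request()  request dropped, hardware autonomous
    "auto"            no                 none                              -EINVAL (nothing to drop)
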
@@ -13,6 +13,43 @@
 #include <trace/events/rpm.h>
 #include "power.h"

+#define RPM_GET_CALLBACK(dev, cb)				\
+({								\
+	int (*__rpm_cb)(struct device *__d);			\
+								\
+	if (dev->pm_domain)					\
+		__rpm_cb = dev->pm_domain->ops.cb;		\
+	else if (dev->type && dev->type->pm)			\
+		__rpm_cb = dev->type->pm->cb;			\
+	else if (dev->class && dev->class->pm)			\
+		__rpm_cb = dev->class->pm->cb;			\
+	else if (dev->bus && dev->bus->pm)			\
+		__rpm_cb = dev->bus->pm->cb;			\
+	else							\
+		__rpm_cb = NULL;				\
+								\
+	if (!__rpm_cb && dev->driver && dev->driver->pm)	\
+		__rpm_cb = dev->driver->pm->cb;			\
+								\
+	__rpm_cb;						\
+})
+
+static int (*rpm_get_suspend_cb(struct device *dev))(struct device *)
+{
+	return RPM_GET_CALLBACK(dev, runtime_suspend);
+}
+
+static int (*rpm_get_resume_cb(struct device *dev))(struct device *)
+{
+	return RPM_GET_CALLBACK(dev, runtime_resume);
+}
+
+#ifdef CONFIG_PM_RUNTIME
+static int (*rpm_get_idle_cb(struct device *dev))(struct device *)
+{
+	return RPM_GET_CALLBACK(dev, runtime_idle);
+}
+
 static int rpm_resume(struct device *dev, int rpmflags);
 static int rpm_suspend(struct device *dev, int rpmflags);
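
The macro keeps the existing precedence: a PM domain's callback wins over type, class and bus callbacks, and the driver's own callback is only a fallback when no subsystem-level one exists. A sketch of a caller (hypothetical, for illustration):

    static int example_suspend_now(struct device *dev)
    {
            int (*cb)(struct device *);

            /* Resolves pm_domain > type > class > bus, then driver. */
            cb = RPM_GET_CALLBACK(dev, runtime_suspend);
            return cb ? cb(dev) : -ENOSYS;
    }
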
@@ -310,19 +347,7 @@ static int rpm_idle(struct device *dev, int rpmflags)

 	dev->power.idle_notification = true;

-	if (dev->pm_domain)
-		callback = dev->pm_domain->ops.runtime_idle;
-	else if (dev->type && dev->type->pm)
-		callback = dev->type->pm->runtime_idle;
-	else if (dev->class && dev->class->pm)
-		callback = dev->class->pm->runtime_idle;
-	else if (dev->bus && dev->bus->pm)
-		callback = dev->bus->pm->runtime_idle;
-	else
-		callback = NULL;
-
-	if (!callback && dev->driver && dev->driver->pm)
-		callback = dev->driver->pm->runtime_idle;
+	callback = rpm_get_idle_cb(dev);

 	if (callback)
 		retval = __rpm_callback(callback, dev);
@@ -492,19 +517,7 @@ static int rpm_suspend(struct device *dev, int rpmflags)

 	__update_runtime_status(dev, RPM_SUSPENDING);

-	if (dev->pm_domain)
-		callback = dev->pm_domain->ops.runtime_suspend;
-	else if (dev->type && dev->type->pm)
-		callback = dev->type->pm->runtime_suspend;
-	else if (dev->class && dev->class->pm)
-		callback = dev->class->pm->runtime_suspend;
-	else if (dev->bus && dev->bus->pm)
-		callback = dev->bus->pm->runtime_suspend;
-	else
-		callback = NULL;
-
-	if (!callback && dev->driver && dev->driver->pm)
-		callback = dev->driver->pm->runtime_suspend;
+	callback = rpm_get_suspend_cb(dev);

 	retval = rpm_callback(callback, dev);
 	if (retval)
@@ -724,19 +737,7 @@ static int rpm_resume(struct device *dev, int rpmflags)

 	__update_runtime_status(dev, RPM_RESUMING);

-	if (dev->pm_domain)
-		callback = dev->pm_domain->ops.runtime_resume;
-	else if (dev->type && dev->type->pm)
-		callback = dev->type->pm->runtime_resume;
-	else if (dev->class && dev->class->pm)
-		callback = dev->class->pm->runtime_resume;
-	else if (dev->bus && dev->bus->pm)
-		callback = dev->bus->pm->runtime_resume;
-	else
-		callback = NULL;
-
-	if (!callback && dev->driver && dev->driver->pm)
-		callback = dev->driver->pm->runtime_resume;
+	callback = rpm_get_resume_cb(dev);

 	retval = rpm_callback(callback, dev);
 	if (retval) {
@@ -1401,3 +1402,86 @@ void pm_runtime_remove(struct device *dev)
 	if (dev->power.irq_safe && dev->parent)
 		pm_runtime_put(dev->parent);
 }
 #endif
+
+/**
+ * pm_runtime_force_suspend - Force a device into suspend state if needed.
+ * @dev: Device to suspend.
+ *
+ * Disable runtime PM so we safely can check the device's runtime PM status and
+ * if it is active, invoke its .runtime_suspend callback to bring it into
+ * suspend state. Keep runtime PM disabled to preserve the state unless we
+ * encounter errors.
+ *
+ * Typically this function may be invoked from a system suspend callback to make
+ * sure the device is put into low power state.
+ */
+int pm_runtime_force_suspend(struct device *dev)
+{
+	int (*callback)(struct device *);
+	int ret = 0;
+
+	pm_runtime_disable(dev);
+
+	/*
+	 * Note that pm_runtime_status_suspended() returns false while
+	 * !CONFIG_PM_RUNTIME, which means the device will be put into low
+	 * power state.
+	 */
+	if (pm_runtime_status_suspended(dev))
+		return 0;
+
+	callback = rpm_get_suspend_cb(dev);
+
+	if (!callback) {
+		ret = -ENOSYS;
+		goto err;
+	}
+
+	ret = callback(dev);
+	if (ret)
+		goto err;
+
+	pm_runtime_set_suspended(dev);
+	return 0;
+err:
+	pm_runtime_enable(dev);
+	return ret;
+}
+EXPORT_SYMBOL_GPL(pm_runtime_force_suspend);
+
+/**
+ * pm_runtime_force_resume - Force a device into resume state.
+ * @dev: Device to resume.
+ *
+ * Prior to invoking this function we expect the user to have brought the
+ * device into low power state by a call to pm_runtime_force_suspend(). Here we
+ * reverse those actions and bring the device into full power. We update the
+ * runtime PM status and re-enable runtime PM.
+ *
+ * Typically this function may be invoked from a system resume callback to make
+ * sure the device is put into full power state.
+ */
+int pm_runtime_force_resume(struct device *dev)
+{
+	int (*callback)(struct device *);
+	int ret = 0;
+
+	callback = rpm_get_resume_cb(dev);
+
+	if (!callback) {
+		ret = -ENOSYS;
+		goto out;
+	}
+
+	ret = callback(dev);
+	if (ret)
+		goto out;
+
+	pm_runtime_set_active(dev);
+	pm_runtime_mark_last_busy(dev);
+out:
+	pm_runtime_enable(dev);
+	return ret;
+}
+EXPORT_SYMBOL_GPL(pm_runtime_force_resume);
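
Typical intended usage (a sketch, assuming hypothetical foo_runtime_* callbacks): the new helpers can double as system sleep callbacks, so the runtime PM path is reused for system suspend/resume.

    static const struct dev_pm_ops foo_pm_ops = {
            SET_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend,
                                    pm_runtime_force_resume)
            SET_RUNTIME_PM_OPS(foo_runtime_suspend, foo_runtime_resume, NULL)
    };
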
@@ -218,15 +218,16 @@ static ssize_t autosuspend_delay_ms_store(struct device *dev,
 static DEVICE_ATTR(autosuspend_delay_ms, 0644, autosuspend_delay_ms_show,
 		autosuspend_delay_ms_store);

-static ssize_t pm_qos_latency_show(struct device *dev,
-				   struct device_attribute *attr, char *buf)
+static ssize_t pm_qos_resume_latency_show(struct device *dev,
+					  struct device_attribute *attr,
+					  char *buf)
 {
-	return sprintf(buf, "%d\n", dev_pm_qos_requested_latency(dev));
+	return sprintf(buf, "%d\n", dev_pm_qos_requested_resume_latency(dev));
 }

-static ssize_t pm_qos_latency_store(struct device *dev,
-				    struct device_attribute *attr,
-				    const char *buf, size_t n)
+static ssize_t pm_qos_resume_latency_store(struct device *dev,
+					   struct device_attribute *attr,
+					   const char *buf, size_t n)
 {
 	s32 value;
 	int ret;
@@ -237,12 +238,47 @@ static ssize_t pm_qos_latency_store(struct device *dev,
 	if (value < 0)
 		return -EINVAL;

-	ret = dev_pm_qos_update_request(dev->power.qos->latency_req, value);
+	ret = dev_pm_qos_update_request(dev->power.qos->resume_latency_req,
+					value);
 	return ret < 0 ? ret : n;
 }

 static DEVICE_ATTR(pm_qos_resume_latency_us, 0644,
-		   pm_qos_latency_show, pm_qos_latency_store);
+		   pm_qos_resume_latency_show, pm_qos_resume_latency_store);
+
+static ssize_t pm_qos_latency_tolerance_show(struct device *dev,
+					     struct device_attribute *attr,
+					     char *buf)
+{
+	s32 value = dev_pm_qos_get_user_latency_tolerance(dev);
+
+	if (value < 0)
+		return sprintf(buf, "auto\n");
+	else if (value == PM_QOS_LATENCY_ANY)
+		return sprintf(buf, "any\n");
+
+	return sprintf(buf, "%d\n", value);
+}
+
+static ssize_t pm_qos_latency_tolerance_store(struct device *dev,
+					      struct device_attribute *attr,
+					      const char *buf, size_t n)
+{
+	s32 value;
+	int ret;
+
+	if (kstrtos32(buf, 0, &value)) {
+		if (!strcmp(buf, "auto") || !strcmp(buf, "auto\n"))
+			value = PM_QOS_LATENCY_TOLERANCE_NO_CONSTRAINT;
+		else if (!strcmp(buf, "any") || !strcmp(buf, "any\n"))
+			value = PM_QOS_LATENCY_ANY;
+	}
+	ret = dev_pm_qos_update_user_latency_tolerance(dev, value);
+	return ret < 0 ? ret : n;
+}
+
+static DEVICE_ATTR(pm_qos_latency_tolerance_us, 0644,
+		   pm_qos_latency_tolerance_show, pm_qos_latency_tolerance_store);

 static ssize_t pm_qos_no_power_off_show(struct device *dev,
 					struct device_attribute *attr,
@@ -618,15 +654,26 @@ static struct attribute_group pm_runtime_attr_group = {
 	.attrs	= runtime_attrs,
 };

-static struct attribute *pm_qos_latency_attrs[] = {
+static struct attribute *pm_qos_resume_latency_attrs[] = {
 #ifdef CONFIG_PM_RUNTIME
 	&dev_attr_pm_qos_resume_latency_us.attr,
 #endif /* CONFIG_PM_RUNTIME */
 	NULL,
 };
-static struct attribute_group pm_qos_latency_attr_group = {
+static struct attribute_group pm_qos_resume_latency_attr_group = {
 	.name	= power_group_name,
-	.attrs	= pm_qos_latency_attrs,
+	.attrs	= pm_qos_resume_latency_attrs,
 };

+static struct attribute *pm_qos_latency_tolerance_attrs[] = {
+#ifdef CONFIG_PM_RUNTIME
+	&dev_attr_pm_qos_latency_tolerance_us.attr,
+#endif /* CONFIG_PM_RUNTIME */
+	NULL,
+};
+static struct attribute_group pm_qos_latency_tolerance_attr_group = {
+	.name	= power_group_name,
+	.attrs	= pm_qos_latency_tolerance_attrs,
+};
+
 static struct attribute *pm_qos_flags_attrs[] = {
@@ -654,18 +701,23 @@ int dpm_sysfs_add(struct device *dev)
 		if (rc)
 			goto err_out;
 	}

 	if (device_can_wakeup(dev)) {
 		rc = sysfs_merge_group(&dev->kobj, &pm_wakeup_attr_group);
-		if (rc) {
-			if (pm_runtime_callbacks_present(dev))
-				sysfs_unmerge_group(&dev->kobj,
-						    &pm_runtime_attr_group);
-			goto err_out;
-		}
+		if (rc)
+			goto err_runtime;
+	}
+	if (dev->power.set_latency_tolerance) {
+		rc = sysfs_merge_group(&dev->kobj,
+				       &pm_qos_latency_tolerance_attr_group);
+		if (rc)
+			goto err_wakeup;
 	}
 	return 0;

+ err_wakeup:
+	sysfs_unmerge_group(&dev->kobj, &pm_wakeup_attr_group);
+ err_runtime:
+	sysfs_unmerge_group(&dev->kobj, &pm_runtime_attr_group);
 err_out:
 	sysfs_remove_group(&dev->kobj, &pm_attr_group);
 	return rc;
@@ -681,14 +733,14 @@ void wakeup_sysfs_remove(struct device *dev)
 	sysfs_unmerge_group(&dev->kobj, &pm_wakeup_attr_group);
 }

-int pm_qos_sysfs_add_latency(struct device *dev)
+int pm_qos_sysfs_add_resume_latency(struct device *dev)
 {
-	return sysfs_merge_group(&dev->kobj, &pm_qos_latency_attr_group);
+	return sysfs_merge_group(&dev->kobj, &pm_qos_resume_latency_attr_group);
 }

-void pm_qos_sysfs_remove_latency(struct device *dev)
+void pm_qos_sysfs_remove_resume_latency(struct device *dev)
 {
-	sysfs_unmerge_group(&dev->kobj, &pm_qos_latency_attr_group);
+	sysfs_unmerge_group(&dev->kobj, &pm_qos_resume_latency_attr_group);
 }

 int pm_qos_sysfs_add_flags(struct device *dev)
@@ -708,6 +760,7 @@ void rpm_sysfs_remove(struct device *dev)

 void dpm_sysfs_remove(struct device *dev)
 {
+	sysfs_unmerge_group(&dev->kobj, &pm_qos_latency_tolerance_attr_group);
 	dev_pm_qos_constraints_destroy(dev);
 	rpm_sysfs_remove(dev);
 	sysfs_unmerge_group(&dev->kobj, &pm_wakeup_attr_group);
@@ -134,7 +134,8 @@ static irqreturn_t st1232_ts_irq_handler(int irq, void *dev_id)
 	} else if (!ts->low_latency_req.dev) {
 		/* First contact, request 100 us latency. */
 		dev_pm_qos_add_ancestor_request(&ts->client->dev,
-						&ts->low_latency_req, 100);
+						&ts->low_latency_req,
+						DEV_PM_QOS_RESUME_LATENCY, 100);
 	}

 	/* SYN_REPORT */
@@ -897,7 +897,7 @@ static void flctl_select_chip(struct mtd_info *mtd, int chipnr)
 		if (!flctl->qos_request) {
 			ret = dev_pm_qos_add_request(&flctl->pdev->dev,
 							&flctl->pm_qos,
-							DEV_PM_QOS_LATENCY,
+							DEV_PM_QOS_RESUME_LATENCY,
 							100);
 			if (ret < 0)
 				dev_err(&flctl->pdev->dev,
@@ -133,6 +133,8 @@ struct acpi_scan_handler {
 	struct list_head list_node;
 	int (*attach)(struct acpi_device *dev, const struct acpi_device_id *id);
 	void (*detach)(struct acpi_device *dev);
+	void (*bind)(struct device *phys_dev);
+	void (*unbind)(struct device *phys_dev);
 	struct acpi_hotplug_profile hotplug;
 };
@@ -582,6 +582,7 @@ struct dev_pm_info {
 	unsigned long		accounting_timestamp;
 #endif
 	struct pm_subsys_data	*subsys_data;  /* Owned by the subsystem. */
+	void (*set_latency_tolerance)(struct device *, s32);
 	struct dev_pm_qos	*qos;
 };
@@ -32,7 +32,10 @@ enum pm_qos_flags_status {
 #define PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE	(2000 * USEC_PER_SEC)
 #define PM_QOS_NETWORK_LAT_DEFAULT_VALUE	(2000 * USEC_PER_SEC)
 #define PM_QOS_NETWORK_THROUGHPUT_DEFAULT_VALUE	0
-#define PM_QOS_DEV_LAT_DEFAULT_VALUE		0
+#define PM_QOS_RESUME_LATENCY_DEFAULT_VALUE	0
+#define PM_QOS_LATENCY_TOLERANCE_DEFAULT_VALUE	0
+#define PM_QOS_LATENCY_TOLERANCE_NO_CONSTRAINT	(-1)
+#define PM_QOS_LATENCY_ANY			((s32)(~(__u32)0 >> 1))

 #define PM_QOS_FLAG_NO_POWER_OFF	(1 << 0)
 #define PM_QOS_FLAG_REMOTE_WAKEUP	(1 << 1)
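
A quick sanity check of the new constant (editor's sketch): ~(__u32)0 is 0xFFFFFFFF, and shifting it right by one gives 0x7FFFFFFF, so PM_QOS_LATENCY_ANY is the largest positive s32, larger than any real tolerance request.

    #include <assert.h>
    #include <stdint.h>

    int main(void)
    {
            int32_t any = (int32_t)(~(uint32_t)0 >> 1);

            assert(any == INT32_MAX);   /* 0x7FFFFFFF */
            return 0;
    }
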
@@ -49,7 +52,8 @@ struct pm_qos_flags_request {
 };

 enum dev_pm_qos_req_type {
-	DEV_PM_QOS_LATENCY = 1,
+	DEV_PM_QOS_RESUME_LATENCY = 1,
+	DEV_PM_QOS_LATENCY_TOLERANCE,
 	DEV_PM_QOS_FLAGS,
 };
@@ -77,6 +81,7 @@ struct pm_qos_constraints {
 	struct plist_head list;
 	s32 target_value;	/* Do not change to 64 bit */
 	s32 default_value;
+	s32 no_constraint_value;
 	enum pm_qos_type type;
 	struct blocking_notifier_head *notifiers;
 };
@@ -87,9 +92,11 @@ struct pm_qos_flags {
 };

 struct dev_pm_qos {
-	struct pm_qos_constraints latency;
+	struct pm_qos_constraints resume_latency;
+	struct pm_qos_constraints latency_tolerance;
 	struct pm_qos_flags flags;
-	struct dev_pm_qos_request *latency_req;
+	struct dev_pm_qos_request *resume_latency_req;
+	struct dev_pm_qos_request *latency_tolerance_req;
 	struct dev_pm_qos_request *flags_req;
 };
@@ -142,7 +149,8 @@ int dev_pm_qos_remove_global_notifier(struct notifier_block *notifier);
 void dev_pm_qos_constraints_init(struct device *dev);
 void dev_pm_qos_constraints_destroy(struct device *dev);
 int dev_pm_qos_add_ancestor_request(struct device *dev,
-				    struct dev_pm_qos_request *req, s32 value);
+				    struct dev_pm_qos_request *req,
+				    enum dev_pm_qos_req_type type, s32 value);
 #else
 static inline enum pm_qos_flags_status __dev_pm_qos_flags(struct device *dev,
 							  s32 mask)
@@ -185,7 +193,9 @@ static inline void dev_pm_qos_constraints_destroy(struct device *dev)
 	dev->power.power_state = PMSG_INVALID;
 }
 static inline int dev_pm_qos_add_ancestor_request(struct device *dev,
-				    struct dev_pm_qos_request *req, s32 value)
+						  struct dev_pm_qos_request *req,
+						  enum dev_pm_qos_req_type type,
+						  s32 value)
 			{ return 0; }
 #endif
@@ -195,10 +205,12 @@ void dev_pm_qos_hide_latency_limit(struct device *dev);
 int dev_pm_qos_expose_flags(struct device *dev, s32 value);
 void dev_pm_qos_hide_flags(struct device *dev);
 int dev_pm_qos_update_flags(struct device *dev, s32 mask, bool set);
+s32 dev_pm_qos_get_user_latency_tolerance(struct device *dev);
+int dev_pm_qos_update_user_latency_tolerance(struct device *dev, s32 val);

-static inline s32 dev_pm_qos_requested_latency(struct device *dev)
+static inline s32 dev_pm_qos_requested_resume_latency(struct device *dev)
 {
-	return dev->power.qos->latency_req->data.pnode.prio;
+	return dev->power.qos->resume_latency_req->data.pnode.prio;
 }

 static inline s32 dev_pm_qos_requested_flags(struct device *dev)
@@ -214,8 +226,12 @@ static inline int dev_pm_qos_expose_flags(struct device *dev, s32 value)
 static inline void dev_pm_qos_hide_flags(struct device *dev) {}
 static inline int dev_pm_qos_update_flags(struct device *dev, s32 m, bool set)
 			{ return 0; }
+static inline s32 dev_pm_qos_get_user_latency_tolerance(struct device *dev)
+			{ return PM_QOS_LATENCY_TOLERANCE_NO_CONSTRAINT; }
+static inline int dev_pm_qos_update_user_latency_tolerance(struct device *dev, s32 val)
+			{ return 0; }

-static inline s32 dev_pm_qos_requested_latency(struct device *dev) { return 0; }
+static inline s32 dev_pm_qos_requested_resume_latency(struct device *dev) { return 0; }
 static inline s32 dev_pm_qos_requested_flags(struct device *dev) { return 0; }
 #endif
@@ -26,9 +26,13 @@
 #ifdef CONFIG_PM
 extern int pm_generic_runtime_suspend(struct device *dev);
 extern int pm_generic_runtime_resume(struct device *dev);
+extern int pm_runtime_force_suspend(struct device *dev);
+extern int pm_runtime_force_resume(struct device *dev);
 #else
 static inline int pm_generic_runtime_suspend(struct device *dev) { return 0; }
 static inline int pm_generic_runtime_resume(struct device *dev) { return 0; }
+static inline int pm_runtime_force_suspend(struct device *dev) { return 0; }
+static inline int pm_runtime_force_resume(struct device *dev) { return 0; }
 #endif

 #ifdef CONFIG_PM_RUNTIME
@@ -407,8 +407,8 @@ DECLARE_EVENT_CLASS(dev_pm_qos_request,
 	TP_printk("device=%s type=%s new_value=%d",
 		  __get_str(name),
 		  __print_symbolic(__entry->type,
-				   { DEV_PM_QOS_LATENCY, "DEV_PM_QOS_LATENCY" },
-				   { DEV_PM_QOS_FLAGS, "DEV_PM_QOS_FLAGS" }),
+				   { DEV_PM_QOS_RESUME_LATENCY, "DEV_PM_QOS_RESUME_LATENCY" },
+				   { DEV_PM_QOS_FLAGS, "DEV_PM_QOS_FLAGS" }),
 		  __entry->new_value)
 );
@@ -66,6 +66,7 @@ static struct pm_qos_constraints cpu_dma_constraints = {
 	.list = PLIST_HEAD_INIT(cpu_dma_constraints.list),
 	.target_value = PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE,
 	.default_value = PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE,
+	.no_constraint_value = PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE,
 	.type = PM_QOS_MIN,
 	.notifiers = &cpu_dma_lat_notifier,
 };
@@ -79,6 +80,7 @@ static struct pm_qos_constraints network_lat_constraints = {
 	.list = PLIST_HEAD_INIT(network_lat_constraints.list),
 	.target_value = PM_QOS_NETWORK_LAT_DEFAULT_VALUE,
 	.default_value = PM_QOS_NETWORK_LAT_DEFAULT_VALUE,
+	.no_constraint_value = PM_QOS_NETWORK_LAT_DEFAULT_VALUE,
 	.type = PM_QOS_MIN,
 	.notifiers = &network_lat_notifier,
 };
@@ -93,6 +95,7 @@ static struct pm_qos_constraints network_tput_constraints = {
 	.list = PLIST_HEAD_INIT(network_tput_constraints.list),
 	.target_value = PM_QOS_NETWORK_THROUGHPUT_DEFAULT_VALUE,
 	.default_value = PM_QOS_NETWORK_THROUGHPUT_DEFAULT_VALUE,
+	.no_constraint_value = PM_QOS_NETWORK_THROUGHPUT_DEFAULT_VALUE,
 	.type = PM_QOS_MAX,
 	.notifiers = &network_throughput_notifier,
 };
@@ -128,7 +131,7 @@ static const struct file_operations pm_qos_power_fops = {
 static inline int pm_qos_get_value(struct pm_qos_constraints *c)
 {
 	if (plist_head_empty(&c->list))
-		return c->default_value;
+		return c->no_constraint_value;

 	switch (c->type) {
 	case PM_QOS_MIN:
@@ -170,6 +173,7 @@ int pm_qos_update_target(struct pm_qos_constraints *c, struct plist_node *node,
 {
 	unsigned long flags;
 	int prev_value, curr_value, new_value;
+	int ret;

 	spin_lock_irqsave(&pm_qos_lock, flags);
 	prev_value = pm_qos_get_value(c);
@@ -205,13 +209,15 @@ int pm_qos_update_target(struct pm_qos_constraints *c, struct plist_node *node,

 	trace_pm_qos_update_target(action, prev_value, curr_value);
 	if (prev_value != curr_value) {
-		blocking_notifier_call_chain(c->notifiers,
-					     (unsigned long)curr_value,
-					     NULL);
-		return 1;
+		ret = 1;
+		if (c->notifiers)
+			blocking_notifier_call_chain(c->notifiers,
+						     (unsigned long)curr_value,
+						     NULL);
 	} else {
-		return 0;
+		ret = 0;
 	}
+	return ret;
 }

 /**