Merge branch 'pm-qos'

* pm-qos: (30 commits)
  PM: QoS: annotate data races in pm_qos_*_value()
  Documentation: power: fix pm_qos_interface.rst format warning
  PM: QoS: Make CPU latency QoS depend on CONFIG_CPU_IDLE
  Documentation: PM: QoS: Update to reflect previous code changes
  PM: QoS: Update file information comments
  PM: QoS: Drop PM_QOS_CPU_DMA_LATENCY and rename related functions
  sound: Call cpu_latency_qos_*() instead of pm_qos_*()
  drivers: usb: Call cpu_latency_qos_*() instead of pm_qos_*()
  drivers: tty: Call cpu_latency_qos_*() instead of pm_qos_*()
  drivers: spi: Call cpu_latency_qos_*() instead of pm_qos_*()
  drivers: net: Call cpu_latency_qos_*() instead of pm_qos_*()
  drivers: mmc: Call cpu_latency_qos_*() instead of pm_qos_*()
  drivers: media: Call cpu_latency_qos_*() instead of pm_qos_*()
  drivers: hsi: Call cpu_latency_qos_*() instead of pm_qos_*()
  drm: i915: Call cpu_latency_qos_*() instead of pm_qos_*()
  x86: platform: iosf_mbi: Call cpu_latency_qos_*() instead of pm_qos_*()
  cpuidle: Call cpu_latency_qos_limit() instead of pm_qos_request()
  PM: QoS: Add CPU latency QoS API wrappers
  PM: QoS: Adjust pm_qos_request() signature and reorder pm_qos.h
  PM: QoS: Simplify definitions of CPU latency QoS trace events
  ...
Rafael J. Wysocki 2020-03-30 14:45:57 +02:00
commit 8f1073ed8c
29 changed files with 436 additions and 744 deletions

Documentation/admin-guide/pm/cpuidle.rst

@@ -583,20 +583,17 @@ Power Management Quality of Service for CPUs
 The power management quality of service (PM QoS) framework in the Linux kernel
 allows kernel code and user space processes to set constraints on various
 energy-efficiency features of the kernel to prevent performance from dropping
-below a required level.  The PM QoS constraints can be set globally, in
-predefined categories referred to as PM QoS classes, or against individual
-devices.
+below a required level.
 
 CPU idle time management can be affected by PM QoS in two ways, through the
-global constraint in the ``PM_QOS_CPU_DMA_LATENCY`` class and through the
-resume latency constraints for individual CPUs.  Kernel code (e.g. device
-drivers) can set both of them with the help of special internal interfaces
-provided by the PM QoS framework.  User space can modify the former by opening
-the :file:`cpu_dma_latency` special device file under :file:`/dev/` and writing
-a binary value (interpreted as a signed 32-bit integer) to it.  In turn, the
-resume latency constraint for a CPU can be modified by user space by writing a
-string (representing a signed 32-bit integer) to the
-:file:`power/pm_qos_resume_latency_us` file under
+global CPU latency limit and through the resume latency constraints for
+individual CPUs.  Kernel code (e.g. device drivers) can set both of them with
+the help of special internal interfaces provided by the PM QoS framework.  User
+space can modify the former by opening the :file:`cpu_dma_latency` special
+device file under :file:`/dev/` and writing a binary value (interpreted as a
+signed 32-bit integer) to it.  In turn, the resume latency constraint for a CPU
+can be modified from user space by writing a string (representing a signed
+32-bit integer) to the :file:`power/pm_qos_resume_latency_us` file under
 :file:`/sys/devices/system/cpu/cpu<N>/` in ``sysfs``, where the CPU number
 ``<N>`` is allocated at the system initialization time.  Negative values
 will be rejected in both cases and, also in both cases, the written integer
@@ -605,32 +602,34 @@ number will be interpreted as a requested PM QoS constraint in microseconds.
 
 The requested value is not automatically applied as a new constraint, however,
 as it may be less restrictive (greater in this particular case) than another
 constraint previously requested by someone else.  For this reason, the PM QoS
-framework maintains a list of requests that have been made so far in each
-global class and for each device, aggregates them and applies the effective
-(minimum in this particular case) value as the new constraint.
+framework maintains a list of requests that have been made so far for the
+global CPU latency limit and for each individual CPU, aggregates them and
+applies the effective (minimum in this particular case) value as the new
+constraint.
 
 In fact, opening the :file:`cpu_dma_latency` special device file causes a new
-PM QoS request to be created and added to the priority list of requests in the
-``PM_QOS_CPU_DMA_LATENCY`` class and the file descriptor coming from the
-"open" operation represents that request.  If that file descriptor is then
-used for writing, the number written to it will be associated with the PM QoS
-request represented by it as a new requested constraint value.  Next, the
-priority list mechanism will be used to determine the new effective value of
-the entire list of requests and that effective value will be set as a new
-constraint.  Thus setting a new requested constraint value will only change the
-real constraint if the effective "list" value is affected by it.  In particular,
-for the ``PM_QOS_CPU_DMA_LATENCY`` class it only affects the real constraint if
-it is the minimum of the requested constraints in the list.  The process holding
-a file descriptor obtained by opening the :file:`cpu_dma_latency` special device
-file controls the PM QoS request associated with that file descriptor, but it
-controls this particular PM QoS request only.
+PM QoS request to be created and added to a global priority list of CPU latency
+limit requests and the file descriptor coming from the "open" operation
+represents that request.  If that file descriptor is then used for writing, the
+number written to it will be associated with the PM QoS request represented by
+it as a new requested limit value.  Next, the priority list mechanism will be
+used to determine the new effective value of the entire list of requests and
+that effective value will be set as a new CPU latency limit.  Thus requesting a
+new limit value will only change the real limit if the effective "list" value is
+affected by it, which is the case if it is the minimum of the requested values
+in the list.
+
+The process holding a file descriptor obtained by opening the
+:file:`cpu_dma_latency` special device file controls the PM QoS request
+associated with that file descriptor, but it controls this particular PM QoS
+request only.
 
 Closing the :file:`cpu_dma_latency` special device file or, more precisely, the
 file descriptor obtained while opening it, causes the PM QoS request associated
-with that file descriptor to be removed from the ``PM_QOS_CPU_DMA_LATENCY``
-class priority list and destroyed.  If that happens, the priority list mechanism
-will be used, again, to determine the new effective value for the whole list
-and that value will become the new real constraint.
+with that file descriptor to be removed from the global priority list of CPU
+latency limit requests and destroyed.  If that happens, the priority list
+mechanism will be used again, to determine the new effective value for the whole
+list and that value will become the new limit.
 
 In turn, for each CPU there is one resume latency PM QoS request associated with
 the :file:`power/pm_qos_resume_latency_us` file under
@@ -647,10 +646,10 @@ CPU in question every time the list of requests is updated this way or another
 (there may be other requests coming from kernel code in that list).
 
 CPU idle time governors are expected to regard the minimum of the global
-effective ``PM_QOS_CPU_DMA_LATENCY`` class constraint and the effective
-resume latency constraint for the given CPU as the upper limit for the exit
-latency of the idle states they can select for that CPU.  They should never
-select any idle states with exit latency beyond that limit.
+(effective) CPU latency limit and the effective resume latency constraint for
+the given CPU as the upper limit for the exit latency of the idle states that
+they are allowed to select for that CPU.  They should never select any idle
+states with exit latency beyond that limit.
 
 Idle States Control Via Kernel Command Line
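Editor's note: the user space side of the interface described above is small enough to show in full.  The following is a minimal sketch (an illustration, not part of this commit) that holds a 10 usec CPU latency limit for five seconds; since the open file descriptor represents the request, closing it removes the limit again::

  #include <fcntl.h>
  #include <stdint.h>
  #include <stdio.h>
  #include <unistd.h>

  int main(void)
  {
          int32_t latency_us = 10;  /* requested CPU latency limit, in usec */
          int fd = open("/dev/cpu_dma_latency", O_WRONLY);

          if (fd < 0) {
                  perror("open /dev/cpu_dma_latency");
                  return 1;
          }
          /* The file expects a binary signed 32-bit value (or a hex string). */
          if (write(fd, &latency_us, sizeof(latency_us)) !=
              (ssize_t)sizeof(latency_us)) {
                  perror("write");
                  close(fd);
                  return 1;
          }
          sleep(5);       /* the request stays in effect while the fd is open */
          close(fd);      /* closing the fd removes the request */
          return 0;
  }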

Documentation/power/pm_qos_interface.rst

@@ -7,86 +7,78 @@ performance expectations by drivers, subsystems and user space applications on
 one of the parameters.
 
 Two different PM QoS frameworks are available:
-1. PM QoS classes for cpu_dma_latency
-2. The per-device PM QoS framework provides the API to manage the
+ * CPU latency QoS.
+ * The per-device PM QoS framework provides the API to manage the
    per-device latency constraints and PM QoS flags.
 
-Each parameters have defined units:
-
- * latency: usec
- * timeout: usec
- * throughput: kbs (kilo bit / sec)
- * memory bandwidth: mbs (mega bit / sec)
+The latency unit used in the PM QoS framework is the microsecond (usec).
 
 
 1. PM QoS framework
 ===================
 
-The infrastructure exposes multiple misc device nodes one per implemented
-parameter.  The set of parameters implement is defined by pm_qos_power_init()
-and pm_qos_params.h.  This is done because having the available parameters
-being runtime configurable or changeable from a driver was seen as too easy to
-abuse.
-
-For each parameter a list of performance requests is maintained along with
-an aggregated target value.  The aggregated target value is updated with
-changes to the request list or elements of the list.  Typically the
-aggregated target value is simply the max or min of the request values held
-in the parameter list elements.
+A global list of CPU latency QoS requests is maintained along with an aggregated
+(effective) target value.  The aggregated target value is updated with changes
+to the request list or elements of the list.  For CPU latency QoS, the
+aggregated target value is simply the min of the request values held in the list
+elements.
+
 Note: the aggregated target value is implemented as an atomic variable so that
 reading the aggregated value does not require any locking mechanism.
 
-From kernel mode the use of this interface is simple:
+From kernel space the use of this interface is simple:
 
-void pm_qos_add_request(handle, param_class, target_value):
-  Will insert an element into the list for that identified PM QoS class with the
-  target value.  Upon change to this list the new target is recomputed and any
-  registered notifiers are called only if the target value is now different.
-  Clients of pm_qos need to save the returned handle for future use in other
-  pm_qos API functions.
+void cpu_latency_qos_add_request(handle, target_value):
+  Will insert an element into the CPU latency QoS list with the target value.
+  Upon change to this list the new target is recomputed and any registered
+  notifiers are called only if the target value is now different.
+  Clients of PM QoS need to save the returned handle for future use in other
+  PM QoS API functions.
 
-void pm_qos_update_request(handle, new_target_value):
+void cpu_latency_qos_update_request(handle, new_target_value):
   Will update the list element pointed to by the handle with the new target
   value and recompute the new aggregated target, calling the notification tree
   if the target is changed.
 
-void pm_qos_remove_request(handle):
+void cpu_latency_qos_remove_request(handle):
   Will remove the element.  After removal it will update the aggregate target
   and call the notification tree if the target was changed as a result of
   removing the request.
 
-int pm_qos_request(param_class):
-  Returns the aggregated value for a given PM QoS class.
+int cpu_latency_qos_limit():
+  Returns the aggregated value for the CPU latency QoS.
 
-int pm_qos_request_active(handle):
-  Returns if the request is still active, i.e. it has not been removed from a
-  PM QoS class constraints list.
+int cpu_latency_qos_request_active(handle):
+  Returns if the request is still active, i.e. it has not been removed from the
+  CPU latency QoS list.
 
-int pm_qos_add_notifier(param_class, notifier):
-  Adds a notification callback function to the PM QoS class. The callback is
-  called when the aggregated value for the PM QoS class is changed.
+int cpu_latency_qos_add_notifier(notifier):
+  Adds a notification callback function to the CPU latency QoS. The callback is
+  called when the aggregated value for the CPU latency QoS is changed.
 
-int pm_qos_remove_notifier(int param_class, notifier):
-  Removes the notification callback function for the PM QoS class.
+int cpu_latency_qos_remove_notifier(notifier):
+  Removes the notification callback function from the CPU latency QoS.
 
-From user mode:
+From user space:
+
+The infrastructure exposes one device node, /dev/cpu_dma_latency, for the CPU
+latency QoS.
 
-Only processes can register a pm_qos request.  To provide for automatic
+Only processes can register a PM QoS request.  To provide for automatic
 cleanup of a process, the interface requires the process to register its
-parameter requests in the following way:
+parameter requests as follows.
 
-To register the default pm_qos target for the specific parameter, the process
-must open /dev/cpu_dma_latency
+To register the default PM QoS target for the CPU latency QoS, the process must
+open /dev/cpu_dma_latency.
 
 As long as the device node is held open that process has a registered
 request on the parameter.
 
-To change the requested target value the process needs to write an s32 value to
-the open device node.  Alternatively the user mode program could write a hex
-string for the value using 10 char long format e.g. "0x12345678".  This
-translates to a pm_qos_update_request call.
+To change the requested target value, the process needs to write an s32 value to
+the open device node.  Alternatively, it can write a hex string for the value
+using the 10 char long format e.g. "0x12345678".  This translates to a
+cpu_latency_qos_update_request() call.
 
 To remove the user mode request for a target value simply close the device
 node.
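Editor's note: for illustration, a kernel-space caller of the renamed interface might look like the sketch below.  This is a hypothetical driver fragment (the names are made up), mirroring the pattern used by the drivers converted in this merge::

  #include <linux/pm_qos.h>

  /* Hypothetical example state; the request object must outlive the request. */
  static struct pm_qos_request my_qos_req;

  static void my_start_low_latency_io(void)
  {
          /* Ask for a CPU exit latency of at most 20 usec while I/O runs. */
          cpu_latency_qos_add_request(&my_qos_req, 20);
  }

  static void my_stop_low_latency_io(void)
  {
          if (cpu_latency_qos_request_active(&my_qos_req))
                  cpu_latency_qos_remove_request(&my_qos_req);
  }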

Documentation/trace/events-power.rst

@@ -73,16 +73,6 @@ The second parameter is the power domain target state.
 ================
 The PM QoS events are used for QoS add/update/remove request and for
 target/flags update.
-::
-
-  pm_qos_add_request                 "pm_qos_class=%s value=%d"
-  pm_qos_update_request              "pm_qos_class=%s value=%d"
-  pm_qos_remove_request              "pm_qos_class=%s value=%d"
-  pm_qos_update_request_timeout      "pm_qos_class=%s value=%d, timeout_us=%ld"
-
-The first parameter gives the QoS class name (e.g. "CPU_DMA_LATENCY").
-The second parameter is value to be added/updated/removed.
-The third parameter is timeout value in usec.
 
 ::
 
   pm_qos_update_target               "action=%s prev_value=%d curr_value=%d"
@@ -92,7 +82,7 @@ The first parameter gives the QoS action name (e.g. "ADD_REQ").
 The second parameter is the previous QoS value.
 The third parameter is the current QoS value to update.
 
-And, there are also events used for device PM QoS add/update/remove request.
+There are also events used for device PM QoS add/update/remove request.
 
 ::
 
   dev_pm_qos_add_request             "device=%s type=%s new_value=%d"
@@ -103,3 +93,12 @@ The first parameter gives the device name which tries to add/update/remove
 QoS requests.
 The second parameter gives the request type (e.g. "DEV_PM_QOS_RESUME_LATENCY").
 The third parameter is value to be added/updated/removed.
+
+And, there are events used for CPU latency QoS add/update/remove request.
+
+::
+
+  pm_qos_add_request                 "value=%d"
+  pm_qos_update_request              "value=%d"
+  pm_qos_remove_request              "value=%d"
+
+The parameter is the value to be added/updated/removed.
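Editor's note: these tracepoints can be consumed through tracefs like any other event.  A hedged user-space sketch follows (assuming tracefs is mounted at /sys/kernel/tracing and the events live in the "power" group as declared in include/trace/events/power.h; the helper is illustrative only)::

  #include <fcntl.h>
  #include <stdio.h>
  #include <unistd.h>

  /* Enable one event in the "power" group; returns 0 on success. */
  static int trace_on(const char *event)
  {
          char path[256];
          int fd;

          snprintf(path, sizeof(path),
                   "/sys/kernel/tracing/events/power/%s/enable", event);
          fd = open(path, O_WRONLY);
          if (fd < 0)
                  return -1;
          if (write(fd, "1", 1) != 1) {
                  close(fd);
                  return -1;
          }
          return close(fd);
  }

  int main(void)
  {
          /* The CPU latency QoS events documented above. */
          return trace_on("pm_qos_add_request") ||
                 trace_on("pm_qos_update_request") ||
                 trace_on("pm_qos_remove_request") ||
                 trace_on("pm_qos_update_target");
  }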

arch/x86/platform/intel/iosf_mbi.c

@@ -265,7 +265,7 @@ static void iosf_mbi_reset_semaphore(void)
                         iosf_mbi_sem_address, 0, PUNIT_SEMAPHORE_BIT))
                 dev_err(&mbi_pdev->dev, "Error P-Unit semaphore reset failed\n");
 
-        pm_qos_update_request(&iosf_mbi_pm_qos, PM_QOS_DEFAULT_VALUE);
+        cpu_latency_qos_update_request(&iosf_mbi_pm_qos, PM_QOS_DEFAULT_VALUE);
 
         blocking_notifier_call_chain(&iosf_mbi_pmic_bus_access_notifier,
                                      MBI_PMIC_BUS_ACCESS_END, NULL);
@@ -301,8 +301,8 @@ static void iosf_mbi_reset_semaphore(void)
  * 4) When CPU cores enter C6 or C7 the P-Unit needs to talk to the PMIC
  *    if this happens while the kernel itself is accessing the PMIC I2C bus
  *    the SoC hangs.
- *    As the third step we call pm_qos_update_request() to disallow the CPU
- *    to enter C6 or C7.
+ *    As the third step we call cpu_latency_qos_update_request() to disallow the
+ *    CPU to enter C6 or C7.
  *
  * 5) The P-Unit has a PMIC bus semaphore which we can request to stop
  *    autonomous P-Unit tasks from accessing the PMIC I2C bus while we hold it.
@@ -338,7 +338,7 @@ int iosf_mbi_block_punit_i2c_access(void)
          * requires the P-Unit to talk to the PMIC and if this happens while
          * we're holding the semaphore, the SoC hangs.
          */
-        pm_qos_update_request(&iosf_mbi_pm_qos, 0);
+        cpu_latency_qos_update_request(&iosf_mbi_pm_qos, 0);
 
         /* host driver writes to side band semaphore register */
         ret = iosf_mbi_write(BT_MBI_UNIT_PMC, MBI_REG_WRITE,
@@ -547,8 +547,7 @@ static int __init iosf_mbi_init(void)
 {
         iosf_debugfs_init();
 
-        pm_qos_add_request(&iosf_mbi_pm_qos, PM_QOS_CPU_DMA_LATENCY,
-                           PM_QOS_DEFAULT_VALUE);
+        cpu_latency_qos_add_request(&iosf_mbi_pm_qos, PM_QOS_DEFAULT_VALUE);
 
         return pci_register_driver(&iosf_mbi_pci_driver);
 }
@@ -561,7 +560,7 @@ static void __exit iosf_mbi_exit(void)
         pci_dev_put(mbi_pdev);
         mbi_pdev = NULL;
 
-        pm_qos_remove_request(&iosf_mbi_pm_qos);
+        cpu_latency_qos_remove_request(&iosf_mbi_pm_qos);
 }
 
 module_init(iosf_mbi_init);

drivers/cpuidle/cpuidle.c

@@ -736,53 +736,15 @@ int cpuidle_register(struct cpuidle_driver *drv,
 }
 EXPORT_SYMBOL_GPL(cpuidle_register);
 
-#ifdef CONFIG_SMP
-
-/*
- * This function gets called when a part of the kernel has a new latency
- * requirement.  This means we need to get all processors out of their C-state,
- * and then recalculate a new suitable C-state. Just do a cross-cpu IPI; that
- * wakes them all right up.
- */
-static int cpuidle_latency_notify(struct notifier_block *b,
-                unsigned long l, void *v)
-{
-        wake_up_all_idle_cpus();
-        return NOTIFY_OK;
-}
-
-static struct notifier_block cpuidle_latency_notifier = {
-        .notifier_call = cpuidle_latency_notify,
-};
-
-static inline void latency_notifier_init(struct notifier_block *n)
-{
-        pm_qos_add_notifier(PM_QOS_CPU_DMA_LATENCY, n);
-}
-
-#else /* CONFIG_SMP */
-
-#define latency_notifier_init(x) do { } while (0)
-
-#endif /* CONFIG_SMP */
-
 /**
  * cpuidle_init - core initializer
  */
 static int __init cpuidle_init(void)
 {
-        int ret;
-
         if (cpuidle_disabled())
                 return -ENODEV;
 
-        ret = cpuidle_add_interface(cpu_subsys.dev_root);
-        if (ret)
-                return ret;
-
-        latency_notifier_init(&cpuidle_latency_notifier);
-
-        return 0;
+        return cpuidle_add_interface(cpu_subsys.dev_root);
 }
 
 module_param(off, int, 0444);

drivers/cpuidle/governor.c

@@ -109,9 +109,9 @@ int cpuidle_register_governor(struct cpuidle_governor *gov)
  */
 s64 cpuidle_governor_latency_req(unsigned int cpu)
 {
-        int global_req = pm_qos_request(PM_QOS_CPU_DMA_LATENCY);
         struct device *device = get_cpu_device(cpu);
         int device_req = dev_pm_qos_raw_resume_latency(device);
+        int global_req = cpu_latency_qos_limit();
 
         if (device_req > global_req)
                 device_req = global_req;

drivers/gpu/drm/i915/display/intel_dp.c

@@ -1360,7 +1360,7 @@ intel_dp_aux_xfer(struct intel_dp *intel_dp,
          * lowest possible wakeup latency and so prevent the cpu from going into
          * deep sleep states.
          */
-        pm_qos_update_request(&i915->pm_qos, 0);
+        cpu_latency_qos_update_request(&i915->pm_qos, 0);
 
         intel_dp_check_edp(intel_dp);
 
@@ -1488,7 +1488,7 @@ intel_dp_aux_xfer(struct intel_dp *intel_dp,
 
         ret = recv_bytes;
 out:
-        pm_qos_update_request(&i915->pm_qos, PM_QOS_DEFAULT_VALUE);
+        cpu_latency_qos_update_request(&i915->pm_qos, PM_QOS_DEFAULT_VALUE);
 
         if (vdd)
                 edp_panel_vdd_off(intel_dp, false);

drivers/gpu/drm/i915/i915_drv.c

@@ -505,8 +505,7 @@ static int i915_driver_early_probe(struct drm_i915_private *dev_priv)
         mutex_init(&dev_priv->backlight_lock);
 
         mutex_init(&dev_priv->sb_lock);
-        pm_qos_add_request(&dev_priv->sb_qos,
-                           PM_QOS_CPU_DMA_LATENCY, PM_QOS_DEFAULT_VALUE);
+        cpu_latency_qos_add_request(&dev_priv->sb_qos, PM_QOS_DEFAULT_VALUE);
 
         mutex_init(&dev_priv->av_mutex);
         mutex_init(&dev_priv->wm.wm_mutex);
@@ -571,7 +570,7 @@ static void i915_driver_late_release(struct drm_i915_private *dev_priv)
         vlv_free_s0ix_state(dev_priv);
         i915_workqueues_cleanup(dev_priv);
 
-        pm_qos_remove_request(&dev_priv->sb_qos);
+        cpu_latency_qos_remove_request(&dev_priv->sb_qos);
         mutex_destroy(&dev_priv->sb_lock);
 }
 
@@ -1229,8 +1228,7 @@ static int i915_driver_hw_probe(struct drm_i915_private *dev_priv)
                 }
         }
 
-        pm_qos_add_request(&dev_priv->pm_qos, PM_QOS_CPU_DMA_LATENCY,
-                           PM_QOS_DEFAULT_VALUE);
+        cpu_latency_qos_add_request(&dev_priv->pm_qos, PM_QOS_DEFAULT_VALUE);
 
         intel_gt_init_workarounds(dev_priv);
 
@@ -1276,7 +1274,7 @@ static int i915_driver_hw_probe(struct drm_i915_private *dev_priv)
 err_msi:
         if (pdev->msi_enabled)
                 pci_disable_msi(pdev);
-        pm_qos_remove_request(&dev_priv->pm_qos);
+        cpu_latency_qos_remove_request(&dev_priv->pm_qos);
 err_mem_regions:
         intel_memory_regions_driver_release(dev_priv);
 err_ggtt:
@@ -1299,7 +1297,7 @@ static void i915_driver_hw_remove(struct drm_i915_private *dev_priv)
         if (pdev->msi_enabled)
                 pci_disable_msi(pdev);
 
-        pm_qos_remove_request(&dev_priv->pm_qos);
+        cpu_latency_qos_remove_request(&dev_priv->pm_qos);
 }
 
 /**

drivers/gpu/drm/i915/intel_sideband.c

@@ -60,7 +60,7 @@ static void __vlv_punit_get(struct drm_i915_private *i915)
          * to the Valleyview P-unit and not all sideband communications.
          */
         if (IS_VALLEYVIEW(i915)) {
-                pm_qos_update_request(&i915->sb_qos, 0);
+                cpu_latency_qos_update_request(&i915->sb_qos, 0);
                 on_each_cpu(ping, NULL, 1);
         }
 }
@@ -68,7 +68,8 @@ static void __vlv_punit_get(struct drm_i915_private *i915)
 static void __vlv_punit_put(struct drm_i915_private *i915)
 {
         if (IS_VALLEYVIEW(i915))
-                pm_qos_update_request(&i915->sb_qos, PM_QOS_DEFAULT_VALUE);
+                cpu_latency_qos_update_request(&i915->sb_qos,
+                                               PM_QOS_DEFAULT_VALUE);
 
         iosf_mbi_punit_release();
 }

drivers/hsi/clients/cmt_speech.c

@@ -965,14 +965,13 @@ static int cs_hsi_buf_config(struct cs_hsi_iface *hi,
 
         if (old_state != hi->iface_state) {
                 if (hi->iface_state == CS_STATE_CONFIGURED) {
-                        pm_qos_add_request(&hi->pm_qos_req,
-                                PM_QOS_CPU_DMA_LATENCY,
+                        cpu_latency_qos_add_request(&hi->pm_qos_req,
                                 CS_QOS_LATENCY_FOR_DATA_USEC);
                         local_bh_disable();
                         cs_hsi_read_on_data(hi);
                         local_bh_enable();
                 } else if (old_state == CS_STATE_CONFIGURED) {
-                        pm_qos_remove_request(&hi->pm_qos_req);
+                        cpu_latency_qos_remove_request(&hi->pm_qos_req);
                 }
         }
         return r;
@@ -1075,8 +1074,8 @@ static void cs_hsi_stop(struct cs_hsi_iface *hi)
         WARN_ON(!cs_state_idle(hi->control_state));
         WARN_ON(!cs_state_idle(hi->data_state));
 
-        if (pm_qos_request_active(&hi->pm_qos_req))
-                pm_qos_remove_request(&hi->pm_qos_req);
+        if (cpu_latency_qos_request_active(&hi->pm_qos_req))
+                cpu_latency_qos_remove_request(&hi->pm_qos_req);
 
         spin_lock_bh(&hi->lock);
         cs_hsi_free_data(hi);

drivers/media/pci/saa7134/saa7134-video.c

@@ -1008,8 +1008,7 @@ int saa7134_vb2_start_streaming(struct vb2_queue *vq, unsigned int count)
          */
         if ((dmaq == &dev->video_q && !vb2_is_streaming(&dev->vbi_vbq)) ||
             (dmaq == &dev->vbi_q && !vb2_is_streaming(&dev->video_vbq)))
-                pm_qos_add_request(&dev->qos_request,
-                        PM_QOS_CPU_DMA_LATENCY, 20);
+                cpu_latency_qos_add_request(&dev->qos_request, 20);
         dmaq->seq_nr = 0;
 
         return 0;
@@ -1024,7 +1023,7 @@ void saa7134_vb2_stop_streaming(struct vb2_queue *vq)
 
         if ((dmaq == &dev->video_q && !vb2_is_streaming(&dev->vbi_vbq)) ||
             (dmaq == &dev->vbi_q && !vb2_is_streaming(&dev->video_vbq)))
-                pm_qos_remove_request(&dev->qos_request);
+                cpu_latency_qos_remove_request(&dev->qos_request);
 }
 
 static const struct vb2_ops vb2_qops = {

drivers/media/platform/via-camera.c

@@ -646,7 +646,7 @@ static int viacam_vb2_start_streaming(struct vb2_queue *vq, unsigned int count)
          * requirement which will keep the CPU out of the deeper sleep
          * states.
          */
-        pm_qos_add_request(&cam->qos_request, PM_QOS_CPU_DMA_LATENCY, 50);
+        cpu_latency_qos_add_request(&cam->qos_request, 50);
         viacam_start_engine(cam);
         return 0;
 out:
@@ -662,7 +662,7 @@ static void viacam_vb2_stop_streaming(struct vb2_queue *vq)
         struct via_camera *cam = vb2_get_drv_priv(vq);
         struct via_buffer *buf, *tmp;
 
-        pm_qos_remove_request(&cam->qos_request);
+        cpu_latency_qos_remove_request(&cam->qos_request);
         viacam_stop_engine(cam);
 
         list_for_each_entry_safe(buf, tmp, &cam->buffer_queue, queue) {

drivers/mmc/host/sdhci-esdhc-imx.c

@@ -1452,8 +1452,7 @@ static int sdhci_esdhc_imx_probe(struct platform_device *pdev)
                 pdev->id_entry->driver_data;
 
         if (imx_data->socdata->flags & ESDHC_FLAG_PMQOS)
-                pm_qos_add_request(&imx_data->pm_qos_req,
-                        PM_QOS_CPU_DMA_LATENCY, 0);
+                cpu_latency_qos_add_request(&imx_data->pm_qos_req, 0);
 
         imx_data->clk_ipg = devm_clk_get(&pdev->dev, "ipg");
         if (IS_ERR(imx_data->clk_ipg)) {
@@ -1572,7 +1571,7 @@ static int sdhci_esdhc_imx_probe(struct platform_device *pdev)
         clk_disable_unprepare(imx_data->clk_per);
 free_sdhci:
         if (imx_data->socdata->flags & ESDHC_FLAG_PMQOS)
-                pm_qos_remove_request(&imx_data->pm_qos_req);
+                cpu_latency_qos_remove_request(&imx_data->pm_qos_req);
         sdhci_pltfm_free(pdev);
         return err;
 }
@@ -1595,7 +1594,7 @@ static int sdhci_esdhc_imx_remove(struct platform_device *pdev)
         clk_disable_unprepare(imx_data->clk_ahb);
 
         if (imx_data->socdata->flags & ESDHC_FLAG_PMQOS)
-                pm_qos_remove_request(&imx_data->pm_qos_req);
+                cpu_latency_qos_remove_request(&imx_data->pm_qos_req);
 
         sdhci_pltfm_free(pdev);
 
@@ -1667,7 +1666,7 @@ static int sdhci_esdhc_runtime_suspend(struct device *dev)
         clk_disable_unprepare(imx_data->clk_ahb);
 
         if (imx_data->socdata->flags & ESDHC_FLAG_PMQOS)
-                pm_qos_remove_request(&imx_data->pm_qos_req);
+                cpu_latency_qos_remove_request(&imx_data->pm_qos_req);
 
         return ret;
 }
@@ -1680,8 +1679,7 @@ static int sdhci_esdhc_runtime_resume(struct device *dev)
         int err;
 
         if (imx_data->socdata->flags & ESDHC_FLAG_PMQOS)
-                pm_qos_add_request(&imx_data->pm_qos_req,
-                        PM_QOS_CPU_DMA_LATENCY, 0);
+                cpu_latency_qos_add_request(&imx_data->pm_qos_req, 0);
 
         err = clk_prepare_enable(imx_data->clk_ahb);
         if (err)
@@ -1714,7 +1712,7 @@ static int sdhci_esdhc_runtime_resume(struct device *dev)
         clk_disable_unprepare(imx_data->clk_ahb);
 remove_pm_qos_request:
         if (imx_data->socdata->flags & ESDHC_FLAG_PMQOS)
-                pm_qos_remove_request(&imx_data->pm_qos_req);
+                cpu_latency_qos_remove_request(&imx_data->pm_qos_req);
         return err;
 }
 #endif

drivers/net/ethernet/intel/e1000e/netdev.c

@@ -3280,10 +3280,10 @@ static void e1000_configure_rx(struct e1000_adapter *adapter)
 
                 dev_info(&adapter->pdev->dev,
                          "Some CPU C-states have been disabled in order to enable jumbo frames\n");
-                pm_qos_update_request(&adapter->pm_qos_req, lat);
+                cpu_latency_qos_update_request(&adapter->pm_qos_req, lat);
         } else {
-                pm_qos_update_request(&adapter->pm_qos_req,
-                                      PM_QOS_DEFAULT_VALUE);
+                cpu_latency_qos_update_request(&adapter->pm_qos_req,
+                                               PM_QOS_DEFAULT_VALUE);
         }
 
         /* Enable Receives */
@@ -4636,8 +4636,7 @@ int e1000e_open(struct net_device *netdev)
                 e1000_update_mng_vlan(adapter);
 
         /* DMA latency requirement to workaround jumbo issue */
-        pm_qos_add_request(&adapter->pm_qos_req, PM_QOS_CPU_DMA_LATENCY,
-                           PM_QOS_DEFAULT_VALUE);
+        cpu_latency_qos_add_request(&adapter->pm_qos_req, PM_QOS_DEFAULT_VALUE);
 
         /* before we allocate an interrupt, we must be ready to handle it.
          * Setting DEBUG_SHIRQ in the kernel makes it fire an interrupt
@@ -4679,7 +4678,7 @@ int e1000e_open(struct net_device *netdev)
         return 0;
 
 err_req_irq:
-        pm_qos_remove_request(&adapter->pm_qos_req);
+        cpu_latency_qos_remove_request(&adapter->pm_qos_req);
         e1000e_release_hw_control(adapter);
         e1000_power_down_phy(adapter);
         e1000e_free_rx_resources(adapter->rx_ring);
@@ -4743,7 +4742,7 @@ int e1000e_close(struct net_device *netdev)
             !test_bit(__E1000_TESTING, &adapter->state))
                 e1000e_release_hw_control(adapter);
 
-        pm_qos_remove_request(&adapter->pm_qos_req);
+        cpu_latency_qos_remove_request(&adapter->pm_qos_req);
 
         pm_runtime_put_sync(&pdev->dev);

drivers/net/wireless/ath/ath10k/core.c

@@ -1052,11 +1052,11 @@ static int ath10k_download_fw(struct ath10k *ar)
         }
 
         memset(&latency_qos, 0, sizeof(latency_qos));
-        pm_qos_add_request(&latency_qos, PM_QOS_CPU_DMA_LATENCY, 0);
+        cpu_latency_qos_add_request(&latency_qos, 0);
 
         ret = ath10k_bmi_fast_download(ar, address, data, data_len);
 
-        pm_qos_remove_request(&latency_qos);
+        cpu_latency_qos_remove_request(&latency_qos);
 
         return ret;
 }

drivers/net/wireless/intel/ipw2x00/ipw2100.c

@@ -1730,7 +1730,7 @@ static int ipw2100_up(struct ipw2100_priv *priv, int deferred)
         /* the ipw2100 hardware really doesn't want power management delays
          * longer than 175usec
          */
-        pm_qos_update_request(&ipw2100_pm_qos_req, 175);
+        cpu_latency_qos_update_request(&ipw2100_pm_qos_req, 175);
 
         /* If the interrupt is enabled, turn it off... */
         spin_lock_irqsave(&priv->low_lock, flags);
@@ -1875,7 +1875,8 @@ static void ipw2100_down(struct ipw2100_priv *priv)
         ipw2100_disable_interrupts(priv);
         spin_unlock_irqrestore(&priv->low_lock, flags);
 
-        pm_qos_update_request(&ipw2100_pm_qos_req, PM_QOS_DEFAULT_VALUE);
+        cpu_latency_qos_update_request(&ipw2100_pm_qos_req,
+                                       PM_QOS_DEFAULT_VALUE);
 
         /* We have to signal any supplicant if we are disassociating */
         if (associated)
@@ -6566,8 +6567,7 @@ static int __init ipw2100_init(void)
         printk(KERN_INFO DRV_NAME ": %s, %s\n", DRV_DESCRIPTION, DRV_VERSION);
         printk(KERN_INFO DRV_NAME ": %s\n", DRV_COPYRIGHT);
 
-        pm_qos_add_request(&ipw2100_pm_qos_req, PM_QOS_CPU_DMA_LATENCY,
-                           PM_QOS_DEFAULT_VALUE);
+        cpu_latency_qos_add_request(&ipw2100_pm_qos_req, PM_QOS_DEFAULT_VALUE);
 
         ret = pci_register_driver(&ipw2100_pci_driver);
         if (ret)
@@ -6594,7 +6594,7 @@ static void __exit ipw2100_exit(void)
                            &driver_attr_debug_level);
 #endif
         pci_unregister_driver(&ipw2100_pci_driver);
-        pm_qos_remove_request(&ipw2100_pm_qos_req);
+        cpu_latency_qos_remove_request(&ipw2100_pm_qos_req);
 }
 
 module_init(ipw2100_init);

drivers/spi/spi-fsl-qspi.c

@@ -484,7 +484,7 @@ static int fsl_qspi_clk_prep_enable(struct fsl_qspi *q)
         }
 
         if (needs_wakeup_wait_mode(q))
-                pm_qos_add_request(&q->pm_qos_req, PM_QOS_CPU_DMA_LATENCY, 0);
+                cpu_latency_qos_add_request(&q->pm_qos_req, 0);
 
         return 0;
 }
@@ -492,7 +492,7 @@ static int fsl_qspi_clk_prep_enable(struct fsl_qspi *q)
 static void fsl_qspi_clk_disable_unprep(struct fsl_qspi *q)
 {
         if (needs_wakeup_wait_mode(q))
-                pm_qos_remove_request(&q->pm_qos_req);
+                cpu_latency_qos_remove_request(&q->pm_qos_req);
 
         clk_disable_unprepare(q->clk);
         clk_disable_unprepare(q->clk_en);

drivers/tty/serial/8250/8250_omap.c

@@ -569,7 +569,7 @@ static void omap8250_uart_qos_work(struct work_struct *work)
         struct omap8250_priv *priv;
 
         priv = container_of(work, struct omap8250_priv, qos_work);
-        pm_qos_update_request(&priv->pm_qos_request, priv->latency);
+        cpu_latency_qos_update_request(&priv->pm_qos_request, priv->latency);
 }
 
 #ifdef CONFIG_SERIAL_8250_DMA
@@ -1222,10 +1222,9 @@ static int omap8250_probe(struct platform_device *pdev)
                          DEFAULT_CLK_SPEED);
         }
 
-        priv->latency = PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE;
-        priv->calc_latency = PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE;
-        pm_qos_add_request(&priv->pm_qos_request, PM_QOS_CPU_DMA_LATENCY,
-                           priv->latency);
+        priv->latency = PM_QOS_CPU_LATENCY_DEFAULT_VALUE;
+        priv->calc_latency = PM_QOS_CPU_LATENCY_DEFAULT_VALUE;
+        cpu_latency_qos_add_request(&priv->pm_qos_request, priv->latency);
         INIT_WORK(&priv->qos_work, omap8250_uart_qos_work);
 
         spin_lock_init(&priv->rx_dma_lock);
@@ -1295,7 +1294,7 @@ static int omap8250_remove(struct platform_device *pdev)
         pm_runtime_put_sync(&pdev->dev);
         pm_runtime_disable(&pdev->dev);
         serial8250_unregister_port(priv->line);
-        pm_qos_remove_request(&priv->pm_qos_request);
+        cpu_latency_qos_remove_request(&priv->pm_qos_request);
         device_init_wakeup(&pdev->dev, false);
         return 0;
 }
@@ -1445,7 +1444,7 @@ static int omap8250_runtime_suspend(struct device *dev)
         if (up->dma && up->dma->rxchan)
                 omap_8250_rx_dma_flush(up);
 
-        priv->latency = PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE;
+        priv->latency = PM_QOS_CPU_LATENCY_DEFAULT_VALUE;
         schedule_work(&priv->qos_work);
 
         return 0;

drivers/tty/serial/omap-serial.c

@@ -831,7 +831,7 @@ static void serial_omap_uart_qos_work(struct work_struct *work)
         struct uart_omap_port *up = container_of(work, struct uart_omap_port,
                                                 qos_work);
 
-        pm_qos_update_request(&up->pm_qos_request, up->latency);
+        cpu_latency_qos_update_request(&up->pm_qos_request, up->latency);
 }
 
 static void
@@ -1722,10 +1722,9 @@ static int serial_omap_probe(struct platform_device *pdev)
                          DEFAULT_CLK_SPEED);
         }
 
-        up->latency = PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE;
-        up->calc_latency = PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE;
-        pm_qos_add_request(&up->pm_qos_request,
-                           PM_QOS_CPU_DMA_LATENCY, up->latency);
+        up->latency = PM_QOS_CPU_LATENCY_DEFAULT_VALUE;
+        up->calc_latency = PM_QOS_CPU_LATENCY_DEFAULT_VALUE;
+        cpu_latency_qos_add_request(&up->pm_qos_request, up->latency);
         INIT_WORK(&up->qos_work, serial_omap_uart_qos_work);
 
         platform_set_drvdata(pdev, up);
@@ -1759,7 +1758,7 @@ static int serial_omap_probe(struct platform_device *pdev)
         pm_runtime_dont_use_autosuspend(&pdev->dev);
         pm_runtime_put_sync(&pdev->dev);
         pm_runtime_disable(&pdev->dev);
-        pm_qos_remove_request(&up->pm_qos_request);
+        cpu_latency_qos_remove_request(&up->pm_qos_request);
         device_init_wakeup(up->dev, false);
 err_rs485:
 err_port_line:
@@ -1777,7 +1776,7 @@ static int serial_omap_remove(struct platform_device *dev)
         pm_runtime_dont_use_autosuspend(up->dev);
         pm_runtime_put_sync(up->dev);
         pm_runtime_disable(up->dev);
-        pm_qos_remove_request(&up->pm_qos_request);
+        cpu_latency_qos_remove_request(&up->pm_qos_request);
         device_init_wakeup(&dev->dev, false);
 
         return 0;
@@ -1869,7 +1868,7 @@ static int serial_omap_runtime_suspend(struct device *dev)
                 serial_omap_enable_wakeup(up, true);
 
-        up->latency = PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE;
+        up->latency = PM_QOS_CPU_LATENCY_DEFAULT_VALUE;
         schedule_work(&up->qos_work);
 
         return 0;

drivers/usb/chipidea/ci_hdrc_imx.c

@@ -393,8 +393,7 @@ static int ci_hdrc_imx_probe(struct platform_device *pdev)
         }
 
         if (pdata.flags & CI_HDRC_PMQOS)
-                pm_qos_add_request(&data->pm_qos_req,
-                        PM_QOS_CPU_DMA_LATENCY, 0);
+                cpu_latency_qos_add_request(&data->pm_qos_req, 0);
 
         ret = imx_get_clks(dev);
         if (ret)
@@ -478,7 +477,7 @@ static int ci_hdrc_imx_probe(struct platform_device *pdev)
                 /* don't overwrite original ret (cf. EPROBE_DEFER) */
                 regulator_disable(data->hsic_pad_regulator);
         if (pdata.flags & CI_HDRC_PMQOS)
-                pm_qos_remove_request(&data->pm_qos_req);
+                cpu_latency_qos_remove_request(&data->pm_qos_req);
         data->ci_pdev = NULL;
         return ret;
 }
@@ -499,7 +498,7 @@ static int ci_hdrc_imx_remove(struct platform_device *pdev)
         if (data->ci_pdev) {
                 imx_disable_unprepare_clks(&pdev->dev);
                 if (data->plat_data->flags & CI_HDRC_PMQOS)
-                        pm_qos_remove_request(&data->pm_qos_req);
+                        cpu_latency_qos_remove_request(&data->pm_qos_req);
                 if (data->hsic_pad_regulator)
                         regulator_disable(data->hsic_pad_regulator);
         }
@@ -527,7 +526,7 @@ static int __maybe_unused imx_controller_suspend(struct device *dev)
         imx_disable_unprepare_clks(dev);
         if (data->plat_data->flags & CI_HDRC_PMQOS)
-                pm_qos_remove_request(&data->pm_qos_req);
+                cpu_latency_qos_remove_request(&data->pm_qos_req);
 
         data->in_lpm = true;
 
@@ -547,8 +546,7 @@ static int __maybe_unused imx_controller_resume(struct device *dev)
         }
 
         if (data->plat_data->flags & CI_HDRC_PMQOS)
-                pm_qos_add_request(&data->pm_qos_req,
-                        PM_QOS_CPU_DMA_LATENCY, 0);
+                cpu_latency_qos_add_request(&data->pm_qos_req, 0);
 
         ret = imx_prepare_enable_clks(dev);
         if (ret)

include/linux/pm_qos.h

@@ -1,22 +1,20 @@
 /* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Definitions related to Power Management Quality of Service (PM QoS).
+ *
+ * Copyright (C) 2020 Intel Corporation
+ *
+ * Authors:
+ *      Mark Gross <mgross@linux.intel.com>
+ *      Rafael J. Wysocki <rafael.j.wysocki@intel.com>
+ */
+
 #ifndef _LINUX_PM_QOS_H
 #define _LINUX_PM_QOS_H
-/* interface for the pm_qos_power infrastructure of the linux kernel.
- *
- * Mark Gross <mgross@linux.intel.com>
- */
+
 #include <linux/plist.h>
 #include <linux/notifier.h>
 #include <linux/device.h>
-#include <linux/workqueue.h>
-
-enum {
-        PM_QOS_RESERVED = 0,
-        PM_QOS_CPU_DMA_LATENCY,
-
-        /* insert new class ID */
-        PM_QOS_NUM_CLASSES,
-};
 
 enum pm_qos_flags_status {
         PM_QOS_FLAGS_UNDEFINED = -1,
@@ -29,7 +27,7 @@ enum pm_qos_flags_status {
 #define PM_QOS_LATENCY_ANY      S32_MAX
 #define PM_QOS_LATENCY_ANY_NS   ((s64)PM_QOS_LATENCY_ANY * NSEC_PER_USEC)
 
-#define PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE        (2000 * USEC_PER_SEC)
+#define PM_QOS_CPU_LATENCY_DEFAULT_VALUE        (2000 * USEC_PER_SEC)
 #define PM_QOS_RESUME_LATENCY_DEFAULT_VALUE     PM_QOS_LATENCY_ANY
 #define PM_QOS_RESUME_LATENCY_NO_CONSTRAINT     PM_QOS_LATENCY_ANY
 #define PM_QOS_RESUME_LATENCY_NO_CONSTRAINT_NS  PM_QOS_LATENCY_ANY_NS
@@ -40,22 +38,10 @@ enum pm_qos_flags_status {
 
 #define PM_QOS_FLAG_NO_POWER_OFF        (1 << 0)
 
-struct pm_qos_request {
-        struct plist_node node;
-        int pm_qos_class;
-        struct delayed_work work; /* for pm_qos_update_request_timeout */
-};
-
-struct pm_qos_flags_request {
-        struct list_head node;
-        s32 flags;      /* Do not change to 64 bit */
-};
-
 enum pm_qos_type {
         PM_QOS_UNITIALIZED,
         PM_QOS_MAX,             /* return the largest value */
         PM_QOS_MIN,             /* return the smallest value */
-        PM_QOS_SUM              /* return the sum */
 };
 
 /*
@@ -72,6 +58,16 @@ struct pm_qos_constraints {
         struct blocking_notifier_head *notifiers;
 };
 
+struct pm_qos_request {
+        struct plist_node node;
+        struct pm_qos_constraints *qos;
+};
+
+struct pm_qos_flags_request {
+        struct list_head node;
+        s32 flags;      /* Do not change to 64 bit */
+};
+
 struct pm_qos_flags {
         struct list_head list;
         s32 effective_flags;    /* Do not change to 64 bit */
@@ -140,24 +136,31 @@ static inline int dev_pm_qos_request_active(struct dev_pm_qos_request *req)
         return req->dev != NULL;
 }
 
+s32 pm_qos_read_value(struct pm_qos_constraints *c);
 int pm_qos_update_target(struct pm_qos_constraints *c, struct plist_node *node,
                          enum pm_qos_req_action action, int value);
 bool pm_qos_update_flags(struct pm_qos_flags *pqf,
                          struct pm_qos_flags_request *req,
                          enum pm_qos_req_action action, s32 val);
-void pm_qos_add_request(struct pm_qos_request *req, int pm_qos_class,
-                        s32 value);
-void pm_qos_update_request(struct pm_qos_request *req,
-                           s32 new_value);
-void pm_qos_update_request_timeout(struct pm_qos_request *req,
-                                   s32 new_value, unsigned long timeout_us);
-void pm_qos_remove_request(struct pm_qos_request *req);
 
-int pm_qos_request(int pm_qos_class);
-int pm_qos_add_notifier(int pm_qos_class, struct notifier_block *notifier);
-int pm_qos_remove_notifier(int pm_qos_class, struct notifier_block *notifier);
-int pm_qos_request_active(struct pm_qos_request *req);
-s32 pm_qos_read_value(struct pm_qos_constraints *c);
+#ifdef CONFIG_CPU_IDLE
+s32 cpu_latency_qos_limit(void);
+bool cpu_latency_qos_request_active(struct pm_qos_request *req);
+void cpu_latency_qos_add_request(struct pm_qos_request *req, s32 value);
+void cpu_latency_qos_update_request(struct pm_qos_request *req, s32 new_value);
+void cpu_latency_qos_remove_request(struct pm_qos_request *req);
+#else
+static inline s32 cpu_latency_qos_limit(void) { return INT_MAX; }
+static inline bool cpu_latency_qos_request_active(struct pm_qos_request *req)
+{
+        return false;
+}
+static inline void cpu_latency_qos_add_request(struct pm_qos_request *req,
+                                               s32 value) {}
+static inline void cpu_latency_qos_update_request(struct pm_qos_request *req,
+                                                  s32 new_value) {}
+static inline void cpu_latency_qos_remove_request(struct pm_qos_request *req) {}
+#endif
 
 #ifdef CONFIG_PM
 enum pm_qos_flags_status __dev_pm_qos_flags(struct device *dev, s32 mask);
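Editor's note: to make the shape of the new API concrete, here is a simplified sketch of how the wrappers sit on top of the generic plist machinery, condensed from kernel/power/qos.c after this merge (the real code additionally emits the trace events above, validates the request state, and kicks idle CPUs when the effective limit changes)::

  static struct pm_qos_constraints cpu_latency_constraints = {
          .list = PLIST_HEAD_INIT(cpu_latency_constraints.list),
          .target_value = PM_QOS_CPU_LATENCY_DEFAULT_VALUE,
          .default_value = PM_QOS_CPU_LATENCY_DEFAULT_VALUE,
          .no_constraint_value = PM_QOS_CPU_LATENCY_DEFAULT_VALUE,
          .type = PM_QOS_MIN,     /* effective value = minimum of all requests */
  };

  s32 cpu_latency_qos_limit(void)
  {
          return pm_qos_read_value(&cpu_latency_constraints);
  }

  void cpu_latency_qos_add_request(struct pm_qos_request *req, s32 value)
  {
          req->qos = &cpu_latency_constraints;
          pm_qos_update_target(req->qos, &req->node, PM_QOS_ADD_REQ, value);
  }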

include/trace/events/power.h

@@ -359,75 +359,50 @@ DEFINE_EVENT(power_domain, power_domain_target,
 );
 
 /*
- * The pm qos events are used for pm qos update
+ * CPU latency QoS events used for global CPU latency QoS list updates
  */
-DECLARE_EVENT_CLASS(pm_qos_request,
+DECLARE_EVENT_CLASS(cpu_latency_qos_request,
 
-        TP_PROTO(int pm_qos_class, s32 value),
+        TP_PROTO(s32 value),
 
-        TP_ARGS(pm_qos_class, value),
+        TP_ARGS(value),
 
         TP_STRUCT__entry(
-                __field( int, pm_qos_class )
                 __field( s32, value )
         ),
 
         TP_fast_assign(
-                __entry->pm_qos_class = pm_qos_class;
                 __entry->value = value;
         ),
 
-        TP_printk("pm_qos_class=%s value=%d",
-                  __print_symbolic(__entry->pm_qos_class,
-                        { PM_QOS_CPU_DMA_LATENCY, "CPU_DMA_LATENCY" }),
+        TP_printk("CPU_DMA_LATENCY value=%d",
                   __entry->value)
 );
 
-DEFINE_EVENT(pm_qos_request, pm_qos_add_request,
+DEFINE_EVENT(cpu_latency_qos_request, pm_qos_add_request,
 
-        TP_PROTO(int pm_qos_class, s32 value),
+        TP_PROTO(s32 value),
 
-        TP_ARGS(pm_qos_class, value)
+        TP_ARGS(value)
 );
 
-DEFINE_EVENT(pm_qos_request, pm_qos_update_request,
+DEFINE_EVENT(cpu_latency_qos_request, pm_qos_update_request,
 
-        TP_PROTO(int pm_qos_class, s32 value),
+        TP_PROTO(s32 value),
 
-        TP_ARGS(pm_qos_class, value)
+        TP_ARGS(value)
 );
 
-DEFINE_EVENT(pm_qos_request, pm_qos_remove_request,
+DEFINE_EVENT(cpu_latency_qos_request, pm_qos_remove_request,
 
-        TP_PROTO(int pm_qos_class, s32 value),
+        TP_PROTO(s32 value),
 
-        TP_ARGS(pm_qos_class, value)
-);
-
-TRACE_EVENT(pm_qos_update_request_timeout,
-
-        TP_PROTO(int pm_qos_class, s32 value, unsigned long timeout_us),
-
-        TP_ARGS(pm_qos_class, value, timeout_us),
-
-        TP_STRUCT__entry(
-                __field( int, pm_qos_class )
-                __field( s32, value )
-                __field( unsigned long, timeout_us )
-        ),
-
-        TP_fast_assign(
-                __entry->pm_qos_class = pm_qos_class;
-                __entry->value = value;
-                __entry->timeout_us = timeout_us;
-        ),
-
-        TP_printk("pm_qos_class=%s value=%d, timeout_us=%ld",
-                  __print_symbolic(__entry->pm_qos_class,
-                        { PM_QOS_CPU_DMA_LATENCY, "CPU_DMA_LATENCY" }),
-                  __entry->value, __entry->timeout_us)
+        TP_ARGS(value)
 );
 
+/*
+ * General PM QoS events used for updates of PM QoS request lists
+ */
 DECLARE_EVENT_CLASS(pm_qos_update,
 
         TP_PROTO(enum pm_qos_req_action action, int prev_value, int curr_value),

kernel/power/qos.c

@ -1,31 +1,21 @@
// SPDX-License-Identifier: GPL-2.0-only // SPDX-License-Identifier: GPL-2.0-only
/* /*
* This module exposes the interface to kernel space for specifying * Power Management Quality of Service (PM QoS) support base.
* QoS dependencies. It provides infrastructure for registration of:
* *
* Dependents on a QoS value : register requests * Copyright (C) 2020 Intel Corporation
* Watchers of QoS value : get notified when target QoS value changes
* *
* This QoS design is best effort based. Dependents register their QoS needs. * Authors:
* Watchers register to keep track of the current QoS needs of the system. * Mark Gross <mgross@linux.intel.com>
* Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* *
* There are 3 basic classes of QoS parameter: latency, timeout, throughput * Provided here is an interface for specifying PM QoS dependencies. It allows
* each have defined units: * entities depending on QoS constraints to register their requests which are
* latency: usec * aggregated as appropriate to produce effective constraints (target values)
* timeout: usec <-- currently not used. * that can be monitored by entities needing to respect them, either by polling
* throughput: kbs (kilo byte / sec) * or through a built-in notification mechanism.
* *
* There are lists of pm_qos_objects each one wrapping requests, notifiers * In addition to the basic functionality, more specific interfaces for managing
* * global CPU latency QoS requests and frequency QoS requests are provided.
* User mode requests on a QOS parameter register themselves to the
* subsystem by opening the device node /dev/... and writing there request to
* the node. As long as the process holds a file handle open to the node the
* client continues to be accounted for. Upon file release the usermode
* request is removed and a new qos target is computed. This way when the
* request that the application has is cleaned up when closes the file
* pointer or exits the pm_qos_object will get an opportunity to clean up.
*
* Mark Gross <mgross@linux.intel.com>
*/ */
/*#define DEBUG*/ /*#define DEBUG*/
@ -54,56 +44,19 @@
* or pm_qos_object list and pm_qos_objects need to happen with pm_qos_lock * or pm_qos_object list and pm_qos_objects need to happen with pm_qos_lock
* held, taken with _irqsave. One lock to rule them all * held, taken with _irqsave. One lock to rule them all
*/ */
struct pm_qos_object {
struct pm_qos_constraints *constraints;
struct miscdevice pm_qos_power_miscdev;
char *name;
};
static DEFINE_SPINLOCK(pm_qos_lock); static DEFINE_SPINLOCK(pm_qos_lock);
static struct pm_qos_object null_pm_qos; /**
* pm_qos_read_value - Return the current effective constraint value.
static BLOCKING_NOTIFIER_HEAD(cpu_dma_lat_notifier); * @c: List of PM QoS constraint requests.
static struct pm_qos_constraints cpu_dma_constraints = { */
.list = PLIST_HEAD_INIT(cpu_dma_constraints.list), s32 pm_qos_read_value(struct pm_qos_constraints *c)
.target_value = PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE,
.default_value = PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE,
.no_constraint_value = PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE,
.type = PM_QOS_MIN,
.notifiers = &cpu_dma_lat_notifier,
};
static struct pm_qos_object cpu_dma_pm_qos = {
.constraints = &cpu_dma_constraints,
.name = "cpu_dma_latency",
};
static struct pm_qos_object *pm_qos_array[] = {
&null_pm_qos,
&cpu_dma_pm_qos,
};
static ssize_t pm_qos_power_write(struct file *filp, const char __user *buf,
size_t count, loff_t *f_pos);
static ssize_t pm_qos_power_read(struct file *filp, char __user *buf,
size_t count, loff_t *f_pos);
static int pm_qos_power_open(struct inode *inode, struct file *filp);
static int pm_qos_power_release(struct inode *inode, struct file *filp);
static const struct file_operations pm_qos_power_fops = {
.write = pm_qos_power_write,
.read = pm_qos_power_read,
.open = pm_qos_power_open,
.release = pm_qos_power_release,
.llseek = noop_llseek,
};
/* unlocked internal variant */
static inline int pm_qos_get_value(struct pm_qos_constraints *c)
{ {
struct plist_node *node; return READ_ONCE(c->target_value);
int total_value = 0; }
static int pm_qos_get_value(struct pm_qos_constraints *c)
{
if (plist_head_empty(&c->list)) if (plist_head_empty(&c->list))
return c->no_constraint_value; return c->no_constraint_value;
@@ -114,111 +67,42 @@ static inline int pm_qos_get_value(struct pm_qos_constraints *c)
 	case PM_QOS_MAX:
 		return plist_last(&c->list)->prio;
 
-	case PM_QOS_SUM:
-		plist_for_each(node, &c->list)
-			total_value += node->prio;
-
-		return total_value;
-
 	default:
-		/* runtime check for not using enum */
-		BUG();
+		WARN(1, "Unknown PM QoS type in %s\n", __func__);
 		return PM_QOS_DEFAULT_VALUE;
 	}
 }
 
-s32 pm_qos_read_value(struct pm_qos_constraints *c)
-{
-	return c->target_value;
-}
-
-static inline void pm_qos_set_value(struct pm_qos_constraints *c, s32 value)
-{
-	c->target_value = value;
-}
-
-static int pm_qos_debug_show(struct seq_file *s, void *unused)
-{
-	struct pm_qos_object *qos = (struct pm_qos_object *)s->private;
-	struct pm_qos_constraints *c;
-	struct pm_qos_request *req;
-	char *type;
-	unsigned long flags;
-	int tot_reqs = 0;
-	int active_reqs = 0;
-
-	if (IS_ERR_OR_NULL(qos)) {
-		pr_err("%s: bad qos param!\n", __func__);
-		return -EINVAL;
-	}
-	c = qos->constraints;
-	if (IS_ERR_OR_NULL(c)) {
-		pr_err("%s: Bad constraints on qos?\n", __func__);
-		return -EINVAL;
-	}
-
-	/* Lock to ensure we have a snapshot */
-	spin_lock_irqsave(&pm_qos_lock, flags);
-	if (plist_head_empty(&c->list)) {
-		seq_puts(s, "Empty!\n");
-		goto out;
-	}
-
-	switch (c->type) {
-	case PM_QOS_MIN:
-		type = "Minimum";
-		break;
-	case PM_QOS_MAX:
-		type = "Maximum";
-		break;
-	case PM_QOS_SUM:
-		type = "Sum";
-		break;
-	default:
-		type = "Unknown";
-	}
-
-	plist_for_each_entry(req, &c->list, node) {
-		char *state = "Default";
-
-		if ((req->node).prio != c->default_value) {
-			active_reqs++;
-			state = "Active";
-		}
-		tot_reqs++;
-		seq_printf(s, "%d: %d: %s\n", tot_reqs,
-			   (req->node).prio, state);
-	}
-
-	seq_printf(s, "Type=%s, Value=%d, Requests: active=%d / total=%d\n",
-		   type, pm_qos_get_value(c), active_reqs, tot_reqs);
-
-out:
-	spin_unlock_irqrestore(&pm_qos_lock, flags);
-	return 0;
-}
-
-DEFINE_SHOW_ATTRIBUTE(pm_qos_debug);
+static void pm_qos_set_value(struct pm_qos_constraints *c, s32 value)
+{
+	WRITE_ONCE(c->target_value, value);
+}
 
 /**
- * pm_qos_update_target - manages the constraints list and calls the notifiers
- *			  if needed
- * @c: constraints data struct
- * @node: request to add to the list, to update or to remove
- * @action: action to take on the constraints list
- * @value: value of the request to add or update
+ * pm_qos_update_target - Update a list of PM QoS constraint requests.
+ * @c: List of PM QoS requests.
+ * @node: Target list entry.
+ * @action: Action to carry out (add, update or remove).
+ * @value: New request value for the target list entry.
  *
- * This function returns 1 if the aggregated constraint value has changed, 0
- * otherwise.
+ * Update the given list of PM QoS constraint requests, @c, by carrying an
+ * @action involving the @node list entry and @value on it.
+ *
+ * The recognized values of @action are PM_QOS_ADD_REQ (store @value in @node
+ * and add it to the list), PM_QOS_UPDATE_REQ (remove @node from the list, store
+ * @value in it and add it to the list again), and PM_QOS_REMOVE_REQ (remove
+ * @node from the list, ignore @value).
+ *
+ * Return: 1 if the aggregate constraint value has changed, 0 otherwise.
  */
 int pm_qos_update_target(struct pm_qos_constraints *c, struct plist_node *node,
 			 enum pm_qos_req_action action, int value)
 {
-	unsigned long flags;
 	int prev_value, curr_value, new_value;
-	int ret;
+	unsigned long flags;
 
 	spin_lock_irqsave(&pm_qos_lock, flags);
+
 	prev_value = pm_qos_get_value(c);
 	if (value == PM_QOS_DEFAULT_VALUE)
 		new_value = c->default_value;
@@ -231,9 +115,8 @@ int pm_qos_update_target(struct pm_qos_constraints *c, struct plist_node *node,
 		break;
 	case PM_QOS_UPDATE_REQ:
 		/*
-		 * to change the list, we atomically remove, reinit
-		 * with new value and add, then see if the extremal
-		 * changed
+		 * To change the list, atomically remove, reinit with new value
+		 * and add, then see if the aggregate has changed.
 		 */
 		plist_del(node, &c->list);
 		/* fall through */
@@ -252,16 +135,14 @@ int pm_qos_update_target(struct pm_qos_constraints *c, struct plist_node *node,
 	spin_unlock_irqrestore(&pm_qos_lock, flags);
 
 	trace_pm_qos_update_target(action, prev_value, curr_value);
-	if (prev_value != curr_value) {
-		ret = 1;
-		if (c->notifiers)
-			blocking_notifier_call_chain(c->notifiers,
-						     (unsigned long)curr_value,
-						     NULL);
-	} else {
-		ret = 0;
-	}
-	return ret;
+
+	if (prev_value == curr_value)
+		return 0;
+
+	if (c->notifiers)
+		blocking_notifier_call_chain(c->notifiers, curr_value, NULL);
+
+	return 1;
 }
 
 /**
@@ -283,14 +164,12 @@ static void pm_qos_flags_remove_req(struct pm_qos_flags *pqf,
 
 /**
  * pm_qos_update_flags - Update a set of PM QoS flags.
- * @pqf: Set of flags to update.
+ * @pqf: Set of PM QoS flags to update.
  * @req: Request to add to the set, to modify, or to remove from the set.
  * @action: Action to take on the set.
  * @val: Value of the request to add or modify.
  *
- * Update the given set of PM QoS flags and call notifiers if the aggregate
- * value has changed.  Returns 1 if the aggregate constraint value has changed,
- * 0 otherwise.
+ * Return: 1 if the aggregate constraint value has changed, 0 otherwise.
  */
 bool pm_qos_update_flags(struct pm_qos_flags *pqf,
 			 struct pm_qos_flags_request *req,
@@ -326,288 +205,180 @@ bool pm_qos_update_flags(struct pm_qos_flags *pqf,
 	spin_unlock_irqrestore(&pm_qos_lock, irqflags);
 
 	trace_pm_qos_update_flags(action, prev_value, curr_value);
+
 	return prev_value != curr_value;
 }
 
-/**
- * pm_qos_request - returns current system wide qos expectation
- * @pm_qos_class: identification of which qos value is requested
- *
- * This function returns the current target value.
- */
-int pm_qos_request(int pm_qos_class)
-{
-	return pm_qos_read_value(pm_qos_array[pm_qos_class]->constraints);
-}
-EXPORT_SYMBOL_GPL(pm_qos_request);
-
-int pm_qos_request_active(struct pm_qos_request *req)
-{
-	return req->pm_qos_class != 0;
-}
-EXPORT_SYMBOL_GPL(pm_qos_request_active);
-
-static void __pm_qos_update_request(struct pm_qos_request *req,
-			   s32 new_value)
-{
-	trace_pm_qos_update_request(req->pm_qos_class, new_value);
-
-	if (new_value != req->node.prio)
-		pm_qos_update_target(
-			pm_qos_array[req->pm_qos_class]->constraints,
-			&req->node, PM_QOS_UPDATE_REQ, new_value);
-}
-
-/**
- * pm_qos_work_fn - the timeout handler of pm_qos_update_request_timeout
- * @work: work struct for the delayed work (timeout)
- *
- * This cancels the timeout request by falling back to the default at timeout.
- */
-static void pm_qos_work_fn(struct work_struct *work)
-{
-	struct pm_qos_request *req = container_of(to_delayed_work(work),
-						  struct pm_qos_request,
-						  work);
-
-	__pm_qos_update_request(req, PM_QOS_DEFAULT_VALUE);
-}
+#ifdef CONFIG_CPU_IDLE
+/* Definitions related to the CPU latency QoS. */
+
+static struct pm_qos_constraints cpu_latency_constraints = {
+	.list = PLIST_HEAD_INIT(cpu_latency_constraints.list),
+	.target_value = PM_QOS_CPU_LATENCY_DEFAULT_VALUE,
+	.default_value = PM_QOS_CPU_LATENCY_DEFAULT_VALUE,
+	.no_constraint_value = PM_QOS_CPU_LATENCY_DEFAULT_VALUE,
+	.type = PM_QOS_MIN,
+};
+
+/**
+ * cpu_latency_qos_limit - Return current system-wide CPU latency QoS limit.
+ */
+s32 cpu_latency_qos_limit(void)
+{
+	return pm_qos_read_value(&cpu_latency_constraints);
+}
+
+/**
+ * cpu_latency_qos_request_active - Check the given PM QoS request.
+ * @req: PM QoS request to check.
+ *
+ * Return: 'true' if @req has been added to the CPU latency QoS list, 'false'
+ * otherwise.
+ */
+bool cpu_latency_qos_request_active(struct pm_qos_request *req)
+{
+	return req->qos == &cpu_latency_constraints;
+}
+EXPORT_SYMBOL_GPL(cpu_latency_qos_request_active);
+
+static void cpu_latency_qos_apply(struct pm_qos_request *req,
+				  enum pm_qos_req_action action, s32 value)
+{
+	int ret = pm_qos_update_target(req->qos, &req->node, action, value);
+
+	if (ret > 0)
+		wake_up_all_idle_cpus();
+}
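The wake_up_all_idle_cpus() call in cpu_latency_qos_apply() is what makes a tightened limit take effect at once: a CPU that is already sleeping picked its idle state under the old limit, so it has to be woken for the next state selection to honor the new one. As a rough sketch of the consumer side, assuming a simplified selector that only looks at exit latency (the real governors under drivers/cpuidle/ also weigh predicted idle duration and other factors):

	#include <linux/cpuidle.h>
	#include <linux/pm_qos.h>

	/* Sketch: deepest idle state whose exit latency fits the QoS limit. */
	static int pick_idle_state(struct cpuidle_driver *drv)
	{
		unsigned int limit = cpu_latency_qos_limit();	/* usecs */
		int i, best = 0;

		for (i = 1; i < drv->state_count; i++) {
			if (drv->states[i].exit_latency > limit)
				break;	/* states are ordered by depth */
			best = i;
		}
		return best;
	}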
 /**
- * pm_qos_add_request - inserts new qos request into the list
- * @req: pointer to a preallocated handle
- * @pm_qos_class: identifies which list of qos request to use
- * @value: defines the qos request
+ * cpu_latency_qos_add_request - Add new CPU latency QoS request.
+ * @req: Pointer to a preallocated handle.
+ * @value: Requested constraint value.
  *
- * This function inserts a new entry in the pm_qos_class list of requested qos
- * performance characteristics.  It recomputes the aggregate QoS expectations
- * for the pm_qos_class of parameters and initializes the pm_qos_request
- * handle.  Caller needs to save this handle for later use in updates and
- * removal.
+ * Use @value to initialize the request handle pointed to by @req, insert it as
+ * a new entry to the CPU latency QoS list and recompute the effective QoS
+ * constraint for that list.
+ *
+ * Callers need to save the handle for later use in updates and removal of the
+ * QoS request represented by it.
  */
-
-void pm_qos_add_request(struct pm_qos_request *req,
-			int pm_qos_class, s32 value)
+void cpu_latency_qos_add_request(struct pm_qos_request *req, s32 value)
 {
-	if (!req) /*guard against callers passing in null */
+	if (!req)
 		return;
 
-	if (pm_qos_request_active(req)) {
-		WARN(1, KERN_ERR "pm_qos_add_request() called for already added request\n");
+	if (cpu_latency_qos_request_active(req)) {
+		WARN(1, KERN_ERR "%s called for already added request\n", __func__);
 		return;
 	}
-	req->pm_qos_class = pm_qos_class;
-	INIT_DELAYED_WORK(&req->work, pm_qos_work_fn);
-	trace_pm_qos_add_request(pm_qos_class, value);
-	pm_qos_update_target(pm_qos_array[pm_qos_class]->constraints,
-			     &req->node, PM_QOS_ADD_REQ, value);
-}
-EXPORT_SYMBOL_GPL(pm_qos_add_request);
+
+	trace_pm_qos_add_request(value);
+
+	req->qos = &cpu_latency_constraints;
+	cpu_latency_qos_apply(req, PM_QOS_ADD_REQ, value);
+}
+EXPORT_SYMBOL_GPL(cpu_latency_qos_add_request);
 
 /**
- * pm_qos_update_request - modifies an existing qos request
- * @req : handle to list element holding a pm_qos request to use
- * @value: defines the qos request
+ * cpu_latency_qos_update_request - Modify existing CPU latency QoS request.
+ * @req : QoS request to update.
+ * @new_value: New requested constraint value.
  *
- * Updates an existing qos request for the pm_qos_class of parameters along
- * with updating the target pm_qos_class value.
- *
- * Attempts are made to make this code callable on hot code paths.
+ * Use @new_value to update the QoS request represented by @req in the CPU
+ * latency QoS list along with updating the effective constraint value for that
+ * list.
  */
-void pm_qos_update_request(struct pm_qos_request *req,
-			   s32 new_value)
+void cpu_latency_qos_update_request(struct pm_qos_request *req, s32 new_value)
 {
-	if (!req) /*guard against callers passing in null */
+	if (!req)
 		return;
 
-	if (!pm_qos_request_active(req)) {
-		WARN(1, KERN_ERR "pm_qos_update_request() called for unknown object\n");
+	if (!cpu_latency_qos_request_active(req)) {
+		WARN(1, KERN_ERR "%s called for unknown object\n", __func__);
 		return;
 	}
 
-	cancel_delayed_work_sync(&req->work);
-	__pm_qos_update_request(req, new_value);
-}
-EXPORT_SYMBOL_GPL(pm_qos_update_request);
-
-/**
- * pm_qos_update_request_timeout - modifies an existing qos request temporarily.
- * @req : handle to list element holding a pm_qos request to use
- * @new_value: defines the temporal qos request
- * @timeout_us: the effective duration of this qos request in usecs.
- *
- * After timeout_us, this qos request is cancelled automatically.
- */
-void pm_qos_update_request_timeout(struct pm_qos_request *req, s32 new_value,
-				   unsigned long timeout_us)
-{
-	if (!req)
-		return;
-	if (WARN(!pm_qos_request_active(req),
-		 "%s called for unknown object.", __func__))
-		return;
-
-	cancel_delayed_work_sync(&req->work);
-
-	trace_pm_qos_update_request_timeout(req->pm_qos_class,
-					    new_value, timeout_us);
-	if (new_value != req->node.prio)
-		pm_qos_update_target(
-			pm_qos_array[req->pm_qos_class]->constraints,
-			&req->node, PM_QOS_UPDATE_REQ, new_value);
-
-	schedule_delayed_work(&req->work, usecs_to_jiffies(timeout_us));
-}
+	trace_pm_qos_update_request(new_value);
+
+	if (new_value == req->node.prio)
+		return;
+
+	cpu_latency_qos_apply(req, PM_QOS_UPDATE_REQ, new_value);
+}
+EXPORT_SYMBOL_GPL(cpu_latency_qos_update_request);
 
 /**
- * pm_qos_remove_request - modifies an existing qos request
- * @req: handle to request list element
+ * cpu_latency_qos_remove_request - Remove existing CPU latency QoS request.
+ * @req: QoS request to remove.
  *
- * Will remove pm qos request from the list of constraints and
- * recompute the current target value for the pm_qos_class.  Call this
- * on slow code paths.
+ * Remove the CPU latency QoS request represented by @req from the CPU latency
+ * QoS list along with updating the effective constraint value for that list.
  */
-void pm_qos_remove_request(struct pm_qos_request *req)
+void cpu_latency_qos_remove_request(struct pm_qos_request *req)
 {
-	if (!req) /*guard against callers passing in null */
-		return;
-		/* silent return to keep pcm code cleaner */
+	if (!req)
+		return;
 
-	if (!pm_qos_request_active(req)) {
-		WARN(1, KERN_ERR "pm_qos_remove_request() called for unknown object\n");
+	if (!cpu_latency_qos_request_active(req)) {
+		WARN(1, KERN_ERR "%s called for unknown object\n", __func__);
 		return;
 	}
 
-	cancel_delayed_work_sync(&req->work);
-
-	trace_pm_qos_remove_request(req->pm_qos_class, PM_QOS_DEFAULT_VALUE);
-	pm_qos_update_target(pm_qos_array[req->pm_qos_class]->constraints,
-			     &req->node, PM_QOS_REMOVE_REQ,
-			     PM_QOS_DEFAULT_VALUE);
+	trace_pm_qos_remove_request(PM_QOS_DEFAULT_VALUE);
+
+	cpu_latency_qos_apply(req, PM_QOS_REMOVE_REQ, PM_QOS_DEFAULT_VALUE);
 	memset(req, 0, sizeof(*req));
 }
-EXPORT_SYMBOL_GPL(pm_qos_remove_request);
+EXPORT_SYMBOL_GPL(cpu_latency_qos_remove_request);
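Together these three exports replace pm_qos_add_request(), pm_qos_update_request() and pm_qos_remove_request() for the CPU latency class while keeping the same lifecycle: register once, update as requirements change, remove on teardown. A hypothetical caller, with made-up foo_*() names, might look like this:

	#include <linux/pm_qos.h>

	static struct pm_qos_request foo_qos_req;	/* hypothetical driver state */

	static void foo_start_low_latency_io(void)
	{
		/* Keep CPU wakeup latency under 20 usecs while I/O is in flight. */
		cpu_latency_qos_add_request(&foo_qos_req, 20);
	}

	static void foo_change_budget(s32 usecs)
	{
		cpu_latency_qos_update_request(&foo_qos_req, usecs);
	}

	static void foo_stop_low_latency_io(void)
	{
		cpu_latency_qos_remove_request(&foo_qos_req);
	}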
-/**
- * pm_qos_add_notifier - sets notification entry for changes to target value
- * @pm_qos_class: identifies which qos target changes should be notified.
- * @notifier: notifier block managed by caller.
- *
- * will register the notifier into a notification chain that gets called
- * upon changes to the pm_qos_class target value.
- */
-int pm_qos_add_notifier(int pm_qos_class, struct notifier_block *notifier)
-{
-	int retval;
-
-	retval = blocking_notifier_chain_register(
-			pm_qos_array[pm_qos_class]->constraints->notifiers,
-			notifier);
-
-	return retval;
-}
-EXPORT_SYMBOL_GPL(pm_qos_add_notifier);
-
-/**
- * pm_qos_remove_notifier - deletes notification entry from chain.
- * @pm_qos_class: identifies which qos target changes are notified.
- * @notifier: notifier block to be removed.
- *
- * will remove the notifier from the notification chain that gets called
- * upon changes to the pm_qos_class target value.
- */
-int pm_qos_remove_notifier(int pm_qos_class, struct notifier_block *notifier)
-{
-	int retval;
-
-	retval = blocking_notifier_chain_unregister(
-			pm_qos_array[pm_qos_class]->constraints->notifiers,
-			notifier);
-
-	return retval;
-}
-EXPORT_SYMBOL_GPL(pm_qos_remove_notifier);
-
-/* User space interface to PM QoS classes via misc devices */
-static int register_pm_qos_misc(struct pm_qos_object *qos, struct dentry *d)
-{
-	qos->pm_qos_power_miscdev.minor = MISC_DYNAMIC_MINOR;
-	qos->pm_qos_power_miscdev.name = qos->name;
-	qos->pm_qos_power_miscdev.fops = &pm_qos_power_fops;
-
-	debugfs_create_file(qos->name, S_IRUGO, d, (void *)qos,
-			    &pm_qos_debug_fops);
-
-	return misc_register(&qos->pm_qos_power_miscdev);
-}
-
-static int find_pm_qos_object_by_minor(int minor)
-{
-	int pm_qos_class;
-
-	for (pm_qos_class = PM_QOS_CPU_DMA_LATENCY;
-		pm_qos_class < PM_QOS_NUM_CLASSES; pm_qos_class++) {
-		if (minor ==
-			pm_qos_array[pm_qos_class]->pm_qos_power_miscdev.minor)
-			return pm_qos_class;
-	}
-	return -1;
-}
-
-static int pm_qos_power_open(struct inode *inode, struct file *filp)
-{
-	long pm_qos_class;
-
-	pm_qos_class = find_pm_qos_object_by_minor(iminor(inode));
-	if (pm_qos_class >= PM_QOS_CPU_DMA_LATENCY) {
-		struct pm_qos_request *req = kzalloc(sizeof(*req), GFP_KERNEL);
-		if (!req)
-			return -ENOMEM;
-
-		pm_qos_add_request(req, pm_qos_class, PM_QOS_DEFAULT_VALUE);
-		filp->private_data = req;
-
-		return 0;
-	}
-	return -EPERM;
-}
-
-static int pm_qos_power_release(struct inode *inode, struct file *filp)
-{
-	struct pm_qos_request *req;
-
-	req = filp->private_data;
-	pm_qos_remove_request(req);
+/* User space interface to the CPU latency QoS via misc device. */
+
+static int cpu_latency_qos_open(struct inode *inode, struct file *filp)
+{
+	struct pm_qos_request *req;
+
+	req = kzalloc(sizeof(*req), GFP_KERNEL);
+	if (!req)
+		return -ENOMEM;
+
+	cpu_latency_qos_add_request(req, PM_QOS_DEFAULT_VALUE);
+	filp->private_data = req;
+
+	return 0;
+}
+
+static int cpu_latency_qos_release(struct inode *inode, struct file *filp)
+{
+	struct pm_qos_request *req = filp->private_data;
+
+	filp->private_data = NULL;
+
+	cpu_latency_qos_remove_request(req);
 	kfree(req);
 
 	return 0;
 }
 
-static ssize_t pm_qos_power_read(struct file *filp, char __user *buf,
-		size_t count, loff_t *f_pos)
+static ssize_t cpu_latency_qos_read(struct file *filp, char __user *buf,
+				    size_t count, loff_t *f_pos)
 {
-	s32 value;
-	unsigned long flags;
 	struct pm_qos_request *req = filp->private_data;
+	unsigned long flags;
+	s32 value;
 
-	if (!req)
-		return -EINVAL;
-	if (!pm_qos_request_active(req))
+	if (!req || !cpu_latency_qos_request_active(req))
 		return -EINVAL;
 
 	spin_lock_irqsave(&pm_qos_lock, flags);
-	value = pm_qos_get_value(pm_qos_array[req->pm_qos_class]->constraints);
+	value = pm_qos_get_value(&cpu_latency_constraints);
 	spin_unlock_irqrestore(&pm_qos_lock, flags);
 
 	return simple_read_from_buffer(buf, count, f_pos, &value, sizeof(s32));
 }
 
-static ssize_t pm_qos_power_write(struct file *filp, const char __user *buf,
-		size_t count, loff_t *f_pos)
+static ssize_t cpu_latency_qos_write(struct file *filp, const char __user *buf,
+				     size_t count, loff_t *f_pos)
 {
 	s32 value;
-	struct pm_qos_request *req;
 
 	if (count == sizeof(s32)) {
 		if (copy_from_user(&value, buf, sizeof(s32)))
@@ -620,36 +391,38 @@ static ssize_t pm_qos_power_write(struct file *filp, const char __user *buf,
 			return ret;
 	}
 
-	req = filp->private_data;
-	pm_qos_update_request(req, value);
+	cpu_latency_qos_update_request(filp->private_data, value);
 
 	return count;
 }
 
-static int __init pm_qos_power_init(void)
-{
-	int ret = 0;
-	int i;
-	struct dentry *d;
-
-	BUILD_BUG_ON(ARRAY_SIZE(pm_qos_array) != PM_QOS_NUM_CLASSES);
-
-	d = debugfs_create_dir("pm_qos", NULL);
-
-	for (i = PM_QOS_CPU_DMA_LATENCY; i < PM_QOS_NUM_CLASSES; i++) {
-		ret = register_pm_qos_misc(pm_qos_array[i], d);
-		if (ret < 0) {
-			pr_err("%s: %s setup failed\n",
-			       __func__, pm_qos_array[i]->name);
-			return ret;
-		}
-	}
+static const struct file_operations cpu_latency_qos_fops = {
+	.write = cpu_latency_qos_write,
+	.read = cpu_latency_qos_read,
+	.open = cpu_latency_qos_open,
+	.release = cpu_latency_qos_release,
+	.llseek = noop_llseek,
+};
+
+static struct miscdevice cpu_latency_qos_miscdev = {
+	.minor = MISC_DYNAMIC_MINOR,
+	.name = "cpu_dma_latency",
+	.fops = &cpu_latency_qos_fops,
+};
+
+static int __init cpu_latency_qos_init(void)
+{
+	int ret;
+
+	ret = misc_register(&cpu_latency_qos_miscdev);
+	if (ret < 0)
+		pr_err("%s: %s setup failed\n", __func__,
+		       cpu_latency_qos_miscdev.name);
 
 	return ret;
 }
+late_initcall(cpu_latency_qos_init);
 
-late_initcall(pm_qos_power_init);
+#endif /* CONFIG_CPU_IDLE */
 
 /* Definitions related to the frequency QoS below. */
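The misc device keeps the historical "cpu_dma_latency" name, so the user-space contract visible in the open/write/release handlers above is unchanged: opening the node registers an inert request, writing a signed 32-bit value updates it, and the request lives until the file descriptor is closed. A minimal user-space sketch:

	#include <fcntl.h>
	#include <stdint.h>
	#include <unistd.h>

	int main(void)
	{
		int32_t limit_us = 10;	/* requested CPU latency limit */
		int fd = open("/dev/cpu_dma_latency", O_WRONLY);

		if (fd < 0)
			return 1;
		if (write(fd, &limit_us, sizeof(limit_us)) != sizeof(limit_us))
			return 1;

		pause();	/* the request holds while the descriptor stays open */
		return 0;
	}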
@@ -748,11 +748,11 @@ static int snd_pcm_hw_params(struct snd_pcm_substream *substream,
 		snd_pcm_timer_resolution_change(substream);
 	snd_pcm_set_state(substream, SNDRV_PCM_STATE_SETUP);
 
-	if (pm_qos_request_active(&substream->latency_pm_qos_req))
-		pm_qos_remove_request(&substream->latency_pm_qos_req);
+	if (cpu_latency_qos_request_active(&substream->latency_pm_qos_req))
+		cpu_latency_qos_remove_request(&substream->latency_pm_qos_req);
 	if ((usecs = period_to_usecs(runtime)) >= 0)
-		pm_qos_add_request(&substream->latency_pm_qos_req,
-				   PM_QOS_CPU_DMA_LATENCY, usecs);
+		cpu_latency_qos_add_request(&substream->latency_pm_qos_req,
+					    usecs);
 	return 0;
  _error:
 	/* hardware might be unusable from this time,
@@ -821,7 +821,7 @@ static int snd_pcm_hw_free(struct snd_pcm_substream *substream)
 		return -EBADFD;
 	result = do_hw_free(substream);
 	snd_pcm_set_state(substream, SNDRV_PCM_STATE_OPEN);
-	pm_qos_remove_request(&substream->latency_pm_qos_req);
+	cpu_latency_qos_remove_request(&substream->latency_pm_qos_req);
 	return result;
 }
 
@@ -2599,8 +2599,8 @@ void snd_pcm_release_substream(struct snd_pcm_substream *substream)
 		substream->ops->close(substream);
 		substream->hw_opened = 0;
 	}
-	if (pm_qos_request_active(&substream->latency_pm_qos_req))
-		pm_qos_remove_request(&substream->latency_pm_qos_req);
+	if (cpu_latency_qos_request_active(&substream->latency_pm_qos_req))
+		cpu_latency_qos_remove_request(&substream->latency_pm_qos_req);
 	if (substream->pcm_release) {
 		substream->pcm_release(substream);
 		substream->pcm_release = NULL;
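In the hw_params hunk the request value comes from period_to_usecs(), so the latency budget tracks the configured period: CPUs may only use idle states they can leave well within one period of audio. The in-tree helper, defined earlier in the same file, budgets roughly 75% of the period time; a simplified sketch of the arithmetic (at 48 kHz with a 256-frame period it yields 4000 usecs):

	#include <linux/math64.h>

	/* Simplified sketch of period_to_usecs(): ~75% of one period, in usecs. */
	static int period_usecs(unsigned int period_size, unsigned int rate)
	{
		if (!rate)
			return -1;	/* invalid configuration */

		return div_u64((u64)period_size * 750000, rate);
	}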
@@ -325,8 +325,7 @@ int sst_context_init(struct intel_sst_drv *ctx)
 		ret = -ENOMEM;
 		goto do_free_mem;
 	}
-	pm_qos_add_request(ctx->qos, PM_QOS_CPU_DMA_LATENCY,
-			   PM_QOS_DEFAULT_VALUE);
+	cpu_latency_qos_add_request(ctx->qos, PM_QOS_DEFAULT_VALUE);
 
 	dev_dbg(ctx->dev, "Requesting FW %s now...\n", ctx->firmware_name);
 	ret = request_firmware_nowait(THIS_MODULE, true, ctx->firmware_name,
@@ -364,7 +363,7 @@ void sst_context_cleanup(struct intel_sst_drv *ctx)
 	sysfs_remove_group(&ctx->dev->kobj, &sst_fw_version_attr_group);
 	flush_scheduled_work();
 	destroy_workqueue(ctx->post_msg_wq);
-	pm_qos_remove_request(ctx->qos);
+	cpu_latency_qos_remove_request(ctx->qos);
 	kfree(ctx->fw_sg_list.src);
 	kfree(ctx->fw_sg_list.dst);
 	ctx->fw_sg_list.list_len = 0;
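Note the value used at init time: adding the request with PM_QOS_DEFAULT_VALUE registers the handle without constraining anything, since for a PM_QOS_MIN list the default resolves to the no-constraint target (see the cpu_latency_constraints initializer earlier in this merge). That leaves later updates from the firmware-load path as cheap list operations. The same pairing, reduced to a hypothetical context structure:

	#include <linux/pm_qos.h>
	#include <linux/slab.h>

	struct foo_ctx {	/* hypothetical driver context */
		struct pm_qos_request *qos;
	};

	static int foo_init(struct foo_ctx *ctx)
	{
		ctx->qos = kzalloc(sizeof(*ctx->qos), GFP_KERNEL);
		if (!ctx->qos)
			return -ENOMEM;

		/* Register now, constrain later: the default value is inert. */
		cpu_latency_qos_add_request(ctx->qos, PM_QOS_DEFAULT_VALUE);
		return 0;
	}

	static void foo_cleanup(struct foo_ctx *ctx)
	{
		cpu_latency_qos_remove_request(ctx->qos);
		kfree(ctx->qos);
	}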
@@ -412,7 +412,7 @@ int sst_load_fw(struct intel_sst_drv *sst_drv_ctx)
 		return -ENOMEM;
 
 	/* Prevent C-states beyond C6 */
-	pm_qos_update_request(sst_drv_ctx->qos, 0);
+	cpu_latency_qos_update_request(sst_drv_ctx->qos, 0);
 
 	sst_drv_ctx->sst_state = SST_FW_LOADING;
 
@@ -442,7 +442,7 @@ int sst_load_fw(struct intel_sst_drv *sst_drv_ctx)
 
 restore:
 	/* Re-enable Deeper C-states beyond C6 */
-	pm_qos_update_request(sst_drv_ctx->qos, PM_QOS_DEFAULT_VALUE);
+	cpu_latency_qos_update_request(sst_drv_ctx->qos, PM_QOS_DEFAULT_VALUE);
 	sst_free_block(sst_drv_ctx, block);
 	dev_dbg(sst_drv_ctx->dev, "fw load successful!!!\n");
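A value of 0 is the strongest possible request: no wakeup latency is tolerated, which in practice keeps the CPUs out of the deeper C-states for the duration of the firmware download, as the surrounding comments say. Restoring PM_QOS_DEFAULT_VALUE afterwards releases the clamp without dropping the request itself. The bracketing pattern, reduced to a sketch:

	/* Bound a latency-critical section with a pre-registered request. */
	cpu_latency_qos_update_request(ctx->qos, 0);	/* forbid deep C-states */

	ret = foo_do_critical_work(ctx);	/* hypothetical critical section */

	cpu_latency_qos_update_request(ctx->qos, PM_QOS_DEFAULT_VALUE);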
@@ -112,7 +112,7 @@ static void omap_dmic_dai_shutdown(struct snd_pcm_substream *substream,
 
 	mutex_lock(&dmic->mutex);
 
-	pm_qos_remove_request(&dmic->pm_qos_req);
+	cpu_latency_qos_remove_request(&dmic->pm_qos_req);
 
 	if (!dai->active)
 		dmic->active = 0;
@@ -230,8 +230,9 @@ static int omap_dmic_dai_prepare(struct snd_pcm_substream *substream,
 	struct omap_dmic *dmic = snd_soc_dai_get_drvdata(dai);
 	u32 ctrl;
 
-	if (pm_qos_request_active(&dmic->pm_qos_req))
-		pm_qos_update_request(&dmic->pm_qos_req, dmic->latency);
+	if (cpu_latency_qos_request_active(&dmic->pm_qos_req))
+		cpu_latency_qos_update_request(&dmic->pm_qos_req,
+					       dmic->latency);
 
 	/* Configure uplink threshold */
 	omap_dmic_write(dmic, OMAP_DMIC_FIFO_CTRL_REG, dmic->threshold);
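The OMAP audio drivers create their latency request lazily, so the prepare path has to distinguish a live request from a missing one; the dmic hunk above shows the update side, and the mcbsp and mcpdm hunks below show the matching add side. The guard in isolation, assuming a request handle and a latency value in usecs:

	if (cpu_latency_qos_request_active(&req))
		cpu_latency_qos_update_request(&req, latency);
	else if (latency)
		cpu_latency_qos_add_request(&req, latency);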
@@ -836,10 +836,10 @@ static void omap_mcbsp_dai_shutdown(struct snd_pcm_substream *substream,
 	int stream2 = tx ? SNDRV_PCM_STREAM_CAPTURE : SNDRV_PCM_STREAM_PLAYBACK;
 
 	if (mcbsp->latency[stream2])
-		pm_qos_update_request(&mcbsp->pm_qos_req,
-				      mcbsp->latency[stream2]);
+		cpu_latency_qos_update_request(&mcbsp->pm_qos_req,
+					       mcbsp->latency[stream2]);
 	else if (mcbsp->latency[stream1])
-		pm_qos_remove_request(&mcbsp->pm_qos_req);
+		cpu_latency_qos_remove_request(&mcbsp->pm_qos_req);
 
 	mcbsp->latency[stream1] = 0;
 
@@ -863,10 +863,10 @@ static int omap_mcbsp_dai_prepare(struct snd_pcm_substream *substream,
 	if (!latency || mcbsp->latency[stream1] < latency)
 		latency = mcbsp->latency[stream1];
 
-	if (pm_qos_request_active(pm_qos_req))
-		pm_qos_update_request(pm_qos_req, latency);
+	if (cpu_latency_qos_request_active(pm_qos_req))
+		cpu_latency_qos_update_request(pm_qos_req, latency);
 	else if (latency)
-		pm_qos_add_request(pm_qos_req, PM_QOS_CPU_DMA_LATENCY, latency);
+		cpu_latency_qos_add_request(pm_qos_req, latency);
 
 	return 0;
 }
@@ -1434,8 +1434,8 @@ static int asoc_mcbsp_remove(struct platform_device *pdev)
 	if (mcbsp->pdata->ops && mcbsp->pdata->ops->free)
 		mcbsp->pdata->ops->free(mcbsp->id);
 
-	if (pm_qos_request_active(&mcbsp->pm_qos_req))
-		pm_qos_remove_request(&mcbsp->pm_qos_req);
+	if (cpu_latency_qos_request_active(&mcbsp->pm_qos_req))
+		cpu_latency_qos_remove_request(&mcbsp->pm_qos_req);
 
 	if (mcbsp->pdata->buffer_size)
 		sysfs_remove_group(&mcbsp->dev->kobj, &additional_attr_group);
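McBSP serves both a playback and a capture stream but holds a single request, so each path recomputes the tighter of the two per-direction requirements: prepare picks the smaller nonzero latency, and shutdown either falls back to the surviving stream's value or drops the request entirely. One way to express that bookkeeping as a single helper (a sketch, not the in-tree code):

	/* Sketch: fold two per-direction requirements into one QoS request. */
	static void foo_refresh_qos(struct pm_qos_request *req,
				    unsigned int lat_play, unsigned int lat_cap)
	{
		unsigned int latency = lat_play;

		if (!latency || (lat_cap && lat_cap < latency))
			latency = lat_cap;	/* smaller nonzero value wins */

		if (cpu_latency_qos_request_active(req)) {
			if (latency)
				cpu_latency_qos_update_request(req, latency);
			else
				cpu_latency_qos_remove_request(req);
		} else if (latency) {
			cpu_latency_qos_add_request(req, latency);
		}
	}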
@@ -281,10 +281,10 @@ static void omap_mcpdm_dai_shutdown(struct snd_pcm_substream *substream,
 	}
 
 	if (mcpdm->latency[stream2])
-		pm_qos_update_request(&mcpdm->pm_qos_req,
-				      mcpdm->latency[stream2]);
+		cpu_latency_qos_update_request(&mcpdm->pm_qos_req,
+					       mcpdm->latency[stream2]);
 	else if (mcpdm->latency[stream1])
-		pm_qos_remove_request(&mcpdm->pm_qos_req);
+		cpu_latency_qos_remove_request(&mcpdm->pm_qos_req);
 
 	mcpdm->latency[stream1] = 0;
 
@@ -386,10 +386,10 @@ static int omap_mcpdm_prepare(struct snd_pcm_substream *substream,
 	if (!latency || mcpdm->latency[stream1] < latency)
 		latency = mcpdm->latency[stream1];
 
-	if (pm_qos_request_active(pm_qos_req))
-		pm_qos_update_request(pm_qos_req, latency);
+	if (cpu_latency_qos_request_active(pm_qos_req))
+		cpu_latency_qos_update_request(pm_qos_req, latency);
 	else if (latency)
-		pm_qos_add_request(pm_qos_req, PM_QOS_CPU_DMA_LATENCY, latency);
+		cpu_latency_qos_add_request(pm_qos_req, latency);
 
 	if (!omap_mcpdm_active(mcpdm)) {
 		omap_mcpdm_start(mcpdm);
@@ -451,8 +451,8 @@ static int omap_mcpdm_remove(struct snd_soc_dai *dai)
 	free_irq(mcpdm->irq, (void *)mcpdm);
 	pm_runtime_disable(mcpdm->dev);
 
-	if (pm_qos_request_active(&mcpdm->pm_qos_req))
-		pm_qos_remove_request(&mcpdm->pm_qos_req);
+	if (cpu_latency_qos_request_active(&mcpdm->pm_qos_req))
+		cpu_latency_qos_remove_request(&mcpdm->pm_qos_req);
 
 	return 0;
 }