ACPI and power management updates for 3.19-rc1

This time we have some more new material than we used to have during
 the last couple of development cycles.
 
 The most important part of it to me is the introduction of a unified
 interface for accessing device properties provided by platform
 firmware.  It works with Device Trees and ACPI in a uniform way and
 drivers using it need not worry about where the properties come
 from as long as the platform firmware (either DT or ACPI) makes
 them available.  It covers both devices and "bare" device node
 objects without struct device representation as that turns out to
 be necessary in some cases.  This has been in the works for quite
 a few months (and development cycles) and has been approved by
 all of the relevant maintainers.
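
 To give a rough idea of what this looks like from a driver (the
 driver, device and property names below are invented for the example,
 they are not from the series itself):

    #include <linux/platform_device.h>
    #include <linux/property.h>

    static int foo_probe(struct platform_device *pdev)
    {
            struct device *dev = &pdev->dev;
            const char *label;
            u32 poll_interval;

            /* The same calls work whether the data comes from DT or ACPI _DSD. */
            if (device_property_read_u32(dev, "poll-interval", &poll_interval))
                    poll_interval = 100;    /* default when the property is absent */

            if (!device_property_read_string(dev, "label", &label))
                    dev_info(dev, "label: %s\n", label);

            return 0;
    }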
 
 On top of that, some drivers are switched over to the new interface
 (at25, leds-gpio, gpio_keys_polled) and some additional changes are
 made to the core GPIO subsystem to allow device drivers to manipulate
 GPIOs in the "canonical" way on platforms that provide GPIO information
 in their ACPI tables, but don't assign names to GPIO lines (in which
 case the driver needs to do that on the basis of what it knows about
 the device in question).  That also has been approved by the GPIO
 core maintainers and the rfkill driver is now going to use it.
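
 As an illustration only (the driver and connection ID names here are
 made up; the real user is the rfkill driver mentioned above), such a
 mapping could be registered from .probe() like this:

    #include <linux/acpi.h>
    #include <linux/platform_device.h>

    static const struct acpi_gpio_params reset_gpio = { 0, 0, false };

    static const struct acpi_gpio_mapping foo_acpi_gpios[] = {
            { "reset-gpios", &reset_gpio, 1 },
            { }
    };

    static int foo_probe(struct platform_device *pdev)
    {
            /*
             * Give the GpioIo() resource from _CRS a connection ID so that
             * the GPIO can be looked up by name later on.
             */
            return acpi_dev_add_driver_gpios(ACPI_COMPANION(&pdev->dev),
                                             foo_acpi_gpios);
    }

 The mapping is dropped again with acpi_dev_remove_driver_gpios() when
 the driver is unbound.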
 
 Second is support for hardware P-states in the intel_pstate driver.
 It uses CPUID to detect whether or not the feature is supported by
 the processor, in which case it will be enabled by default.  However,
 it can be disabled entirely from the kernel command line if necessary.
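
 For reference, the capability bit checked here is bit 7 of EAX in
 CPUID leaf 6.  Purely as an illustration (this is not code from the
 driver), a stand-alone user-space check of the same bit could be:

    #include <stdio.h>
    #include <cpuid.h>

    int main(void)
    {
            unsigned int eax, ebx, ecx, edx;

            /* CPUID.06H:EAX bit 7 is the HWP base capability bit. */
            if (!__get_cpuid(6, &eax, &ebx, &ecx, &edx))
                    return 1;

            printf("HWP %ssupported\n", (eax & (1 << 7)) ? "" : "not ");
            return 0;
    }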
 
 Next is support for a platform firmware interface based on ACPI
 operation regions used by the PMIC (Power Management Integrated
 Circuit) chips on the Intel Baytrail-T and Baytrail-T-CR platforms.
 That interface is used for manipulating power resources and for
 thermal management: sensor temperature reporting, trip point setting
 and so on.
 
 Also the ACPI core is now going to support the _DEP configuration
 information in a limited way.  Basically, _DEP is supposed to reflect
 off-the-hierarchy dependencies between devices which may be very
 indirect, like when AML for one device accesses locations in an
 operation region handled by another device's driver (usually, the
 device depended on in this way is a serial bus or GPIO controller).
 The support added this time is sufficient to make the ACPI battery
 driver work on Asus T100A, but it is general enough to be able to
 cover some other use cases in the future.
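
 Roughly speaking (this is a simplified sketch, not the exact battery
 driver code, so take the field and function names with a grain of
 salt), the deferred probing part boils down to:

    #include <linux/acpi.h>
    #include <linux/errno.h>

    static int foo_acpi_add(struct acpi_device *device)
    {
            /*
             * If _DEP lists suppliers (e.g. the operation region handler)
             * that are not ready yet, ask to be probed again later.
             */
            if (device->dep_unmet)
                    return -EPROBE_DEFER;

            /* normal initialization goes here */
            return 0;
    }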
 
 Finally, we have a new cpufreq driver for the Loongson1B processor.
 
 In addition to the above, there are fixes and cleanups all over the
 place as usual and a traditional ACPICA update to a recent upstream
 release.
 
 As far as the fixes go, the ACPI LPSS (Low-power Subsystem) driver
 for Intel platforms should be able to handle power management of
 the DMA engine correctly, the cpufreq-dt driver should interact
 with the thermal subsystem in a better way and the ACPI backlight
 driver should handle some more corner cases, among other things.
 
 On top of the ACPICA update there are fixes for race conditions
 in ACPICA's interrupt handling code, which might lead to some
 random and strange-looking failures on some systems.
 
 In the cleanups department the most visible part is the series
 of commits targeted at getting rid of the CONFIG_PM_RUNTIME
 configuration option.  That was triggered by a discussion
 regarding the generic power domains code during which we realized
 that trying to support certain combinations of PM config options
 was painful and not really worth it, because nobody would use them
 in production anyway.  For this reason, we decided to make
 CONFIG_PM_SLEEP select CONFIG_PM_RUNTIME, which led to the
 conclusion that the latter had become redundant and CONFIG_PM could
 be used instead of it.  The material here makes that replacement
 in a major part of the tree, but there will be at least one more
 batch of that in the second part of the merge window.
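
 The replacement itself is mostly mechanical; schematically (with an
 invented driver), code that used to be built under CONFIG_PM_RUNTIME
 now looks like this:

    #include <linux/pm.h>

    /* This block used to be guarded by #ifdef CONFIG_PM_RUNTIME. */
    #ifdef CONFIG_PM
    static int foo_runtime_suspend(struct device *dev)
    {
            return 0;
    }

    static int foo_runtime_resume(struct device *dev)
    {
            return 0;
    }
    #endif

    static const struct dev_pm_ops foo_pm_ops = {
            SET_RUNTIME_PM_OPS(foo_runtime_suspend, foo_runtime_resume, NULL)
    };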
 
 Specifics:
 
  - Support for retrieving device properties information from ACPI
    _DSD device configuration objects and a unified device properties
    interface for device drivers (and subsystems) on top of that.
    As stated above, this works with Device Trees and ACPI and allows
    device drivers to be written in a platform firmware (DT or ACPI)
    agnostic way.  The at25, leds-gpio and gpio_keys_polled drivers
    are now going to use this new interface and the GPIO subsystem
    is additionally modified to allow device drivers to assign names
    to GPIO resources returned by ACPI _CRS objects (in case _DSD is
    not present or does not provide the expected data).  The changes
    in this set are mostly from Mika Westerberg, Rafael J Wysocki,
    Aaron Lu, and Darren Hart with some fixes from others (Fabio Estevam,
    Geert Uytterhoeven).
 
  - Support for Hardware Managed Performance States (HWP) as described
    in Volume 3, section 14.4, of the Intel SDM in the intel_pstate
    driver.  CPUID is used to detect whether or not the feature is
    supported by the processor.  If supported, it will be enabled
    automatically unless the intel_pstate=no_hwp switch is present in
    the kernel command line.  From Dirk Brandewie.
 
  - New Intel Broadwell-H ID for intel_pstate (Dirk Brandewie).
 
  - Support for a firmware interface based on ACPI operation regions
    used by the PMIC chips on the Intel Baytrail-T and Baytrail-T-CR
    platforms for power resource control and thermal management
    (Aaron Lu).
 
  - Limited support for retrieving off-the-hierarchy dependencies
    between devices from ACPI _DEP device configuration objects
    and deferred probing support for the ACPI battery driver based
    on the _DEP information to make that driver work on Asus T100A
    (Lan Tianyu).
 
  - New cpufreq driver for the Loongson1B processor (Kelvin Cheung).
 
  - ACPICA update to upstream revision 20141107 which only affects
    tools (Bob Moore).
 
  - Fixes for race conditions in ACPICA's interrupt handling
    code and in the ACPI code related to system suspend and resume
    (Lv Zheng and Rafael J Wysocki).
 
  - ACPI core fix for an RCU-related issue in the ioremap() regions
    management code that slowed down significantly after CPUs had
    been allowed to enter idle states even if they'd had RCU callbacks
    queued and triggered some problems in a certain proprietary graphics
    driver (and elsewhere).  The fix replaces synchronize_rcu() in
    that code with synchronize_rcu_expedited() which makes the issue
    go away.  From Konstantin Khlebnikov.
 
  - ACPI LPSS (Low-Power Subsystem) driver fix to handle power
    management of the DMA engine included in the LPSS correctly.
    The problem is that the DMA engine doesn't have ACPI PM support
    of its own and it simply is turned off when the last LPSS device
    having ACPI PM support goes into D3cold.  To work around that,
    the PM domain used by the ACPI LPSS driver is redesigned so that at
    least one device with ACPI PM support will be on as long as the
    DMA engine is in use.  From Andy Shevchenko.
 
  - ACPI backlight driver fix to avoid using it on "Win8-compatible"
    systems where it doesn't work and where it was used by default by
    mistake (Aaron Lu).
 
  - Assorted minor ACPI core fixes and cleanups from Tomasz Nowicki,
    Sudeep Holla, Huang Rui, Hanjun Guo, Fabian Frederick, and
    Ashwin Chaugule (mostly related to the upcoming ARM64 support).
 
  - Intel RAPL (Running Average Power Limit) power capping driver
    fixes and improvements including new processor IDs (Jacob Pan).
 
  - Generic power domains modification to power up domains after
    attaching devices to them to meet the expectations of device
    drivers and bus types assuming devices to be accessible at
    probe time (Ulf Hansson).
 
  - Preliminary support for controlling device clocks from the
    generic power domains core code and modifications of the
    ARM/shmobile platform to use that feature (Ulf Hansson).
 
  - Assorted minor fixes and cleanups of the generic power
    domains core code (Ulf Hansson, Geert Uytterhoeven).
 
  - Assorted minor fixes and cleanups of the device clocks control
    code in the PM core (Geert Uytterhoeven, Grygorii Strashko).
 
  - Consolidation of device power management Kconfig options by making
    CONFIG_PM_SLEEP select CONFIG_PM_RUNTIME and removing the latter
    which is now redundant (Rafael J Wysocki and Kevin Hilman).  That
    is the first batch of the changes needed for this purpose.
 
  - Core device runtime power management support code cleanup related
    to the execution of callbacks (Andrzej Hajda).
 
  - cpuidle ARM support improvements (Lorenzo Pieralisi).
 
  - cpuidle cleanup related to the CPUIDLE_FLAG_TIME_VALID flag and
    a new MAINTAINERS entry for ARM Exynos cpuidle (Daniel Lezcano and
    Bartlomiej Zolnierkiewicz).
 
  - New cpufreq driver callback (->ready) to be executed when the
    cpufreq core is ready to use a given policy object and cpufreq-dt
    driver modification to use that callback for cooling device
    registration (Viresh Kumar).  A minimal sketch of such a callback
    is included after this list.
 
  - cpufreq core fixes and cleanups (Viresh Kumar, Vince Hsu,
    James Geboski, Tomeu Vizoso).
 
  - Assorted fixes and cleanups in the cpufreq-pcc, intel_pstate,
    cpufreq-dt, pxa2xx cpufreq drivers (Lenny Szubowicz, Ethan Zhao,
    Stefan Wahren, Petr Cvek).
 
  - OPP (Operating Performance Points) framework modification to
    allow OPPs to be removed too and update of a few cpufreq drivers
    (cpufreq-dt, exynos5440, imx6q, cpufreq) to remove OPPs (added
    during initialization) on driver removal (Viresh Kumar).
 
  - Hibernation core fixes and cleanups (Tina Ruchandani and
    Markus Elfring).
 
  - PM Kconfig fix related to CPU power management (Pankaj Dubey).
 
  - cpupower tool fix (Prarit Bhargava).
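
 As mentioned in the cpufreq ->ready item above, here is a minimal
 sketch of that callback (the driver name is invented and a real
 driver would fill in the remaining callbacks too):

    #include <linux/cpufreq.h>

    static int foo_cpufreq_init(struct cpufreq_policy *policy)
    {
            return 0;
    }

    /* Called once the core has finished setting the policy up. */
    static void foo_cpufreq_ready(struct cpufreq_policy *policy)
    {
            /* safe to register a cooling device here, as cpufreq-dt now does */
    }

    static struct cpufreq_driver foo_cpufreq_driver = {
            .name  = "foo-cpufreq",
            .init  = foo_cpufreq_init,
            .ready = foo_cpufreq_ready,
    };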
 
 /
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v2.0.22 (GNU/Linux)
 
 iQIcBAABCAAGBQJUhj6JAAoJEILEb/54YlRxTM4P/j5g5SfqvY0QKsn7sR7MGZ6v
 nsgCBhJAqTw3ocNC7EAs8z9h2GWy1KbKpakKYWAh9Fs1yZoey7tFSlcv/Rgjlp70
 uU5sDQHtpE9mHKiymdsowiQuWgpl962L4k+k8hUslhlvgk1PvVbpajR6OqG8G+pD
 asuIW9eh1APNkLyXmRJ3ZPomzs0VmRdZJ0NEs0lKX9mJskqEvxPIwdaxq3iaJq9B
 Fo0J345zUDcJnxWblDRdHlOigCimglElfN5qJwaC4KpwUKuBvLRKbp4f69+wfT0c
 kYFiR29X5KjJ2kLfP/wKsLyuDCYYXRq3tCia5M1tAqOjZ+UA89H/GDftx/5lntmv
 qUlBa35VfdS1SX4HyApZitOHiLgo+It/hl8Z9bJnhyVw66NxmMQ8JYN2imb8Lhqh
 XCLR7BxLTah82AapLJuQ0ZDHPzZqMPG2veC2vAzRMYzVijict/p4Y2+qBqONltER
 4rs9uRVn+hamX33lCLg8BEN8zqlnT3rJFIgGaKjq/wXHAU/zpE9CjOrKMQcAg9+s
 t51XMNPwypHMAYyGVhEL89ImjXnXxBkLRuquhlmEpvQchIhR+mR3dLsarGn7da44
 WPIQJXzcsojXczcwwfqsJCR4I1FTFyQIW+UNh02GkDRgRovQqo+Jk762U7vQwqH+
 LBdhvVaS1VW4v+FWXEoZ
 =5dox
 -----END PGP SIGNATURE-----

Merge tag 'pm+acpi-3.19-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm

Pull ACPI and power management updates from Rafael Wysocki:
 "This time we have some more new material than we used to have during
  the last couple of development cycles.

  The most important part of it to me is the introduction of a unified
  interface for accessing device properties provided by platform
  firmware.  It works with Device Trees and ACPI in a uniform way and
  drivers using it need not worry about where the properties come from
  as long as the platform firmware (either DT or ACPI) makes them
  available.  It covers both devices and "bare" device node objects
  without struct device representation as that turns out to be necessary
  in some cases.  This has been in the works for quite a few months (and
  development cycles) and has been approved by all of the relevant
  maintainers.

  On top of that, some drivers are switched over to the new interface
  (at25, leds-gpio, gpio_keys_polled) and some additional changes are
  made to the core GPIO subsystem to allow device drivers to manipulate
  GPIOs in the "canonical" way on platforms that provide GPIO
  information in their ACPI tables, but don't assign names to GPIO lines
  (in which case the driver needs to do that on the basis of what it
  knows about the device in question).  That also has been approved by
  the GPIO core maintainers and the rfkill driver is now going to use
  it.

  Second is support for hardware P-states in the intel_pstate driver.
  It uses CPUID to detect whether or not the feature is supported by the
  processor, in which case it will be enabled by default.  However, it
  can be disabled entirely from the kernel command line if necessary.

  Next is support for a platform firmware interface based on ACPI
  operation regions used by the PMIC (Power Management Integrated
  Circuit) chips on the Intel Baytrail-T and Baytrail-T-CR platforms.
  That interface is used for manipulating power resources and for
  thermal management: sensor temperature reporting, trip point setting
  and so on.

  Also the ACPI core is now going to support the _DEP configuration
  information in a limited way.  Basically, _DEP is supposed to reflect
  off-the-hierarchy dependencies between devices which may be very
  indirect, like when AML for one device accesses locations in an
  operation region handled by another device's driver (usually, the
  device depended on in this way is a serial bus or GPIO controller).  The
  support added this time is sufficient to make the ACPI battery driver
  work on Asus T100A, but it is general enough to be able to cover some
  other use cases in the future.

  Finally, we have a new cpufreq driver for the Loongson1B processor.

  In addition to the above, there are fixes and cleanups all over the
  place as usual and a traditional ACPICA update to a recent upstream
  release.

  As far as the fixes go, the ACPI LPSS (Low-power Subsystem) driver for
  Intel platforms should be able to handle power management of the DMA
  engine correctly, the cpufreq-dt driver should interact with the
  thermal subsystem in a better way and the ACPI backlight driver should
  handle some more corner cases, among other things.

  On top of the ACPICA update there are fixes for race conditions in
  ACPICA's interrupt handling code, which might lead to some random and
  strange-looking failures on some systems.

  In the cleanups department the most visible part is the series of
  commits targeted at getting rid of the CONFIG_PM_RUNTIME configuration
  option.  That was triggered by a discussion regarding the generic
  power domains code during which we realized that trying to support
  certain combinations of PM config options was painful and not really
  worth it, because nobody would use them in production anyway.  For
  this reason, we decided to make CONFIG_PM_SLEEP select
  CONFIG_PM_RUNTIME, which led to the conclusion that the latter had
  become redundant and CONFIG_PM could be used instead of it.  The
  material here makes that replacement in a major part of the tree, but
  there will be at least one more batch of that in the second part of
  the merge window.

  Specifics:

   - Support for retrieving device properties information from ACPI _DSD
     device configuration objects and a unified device properties
     interface for device drivers (and subsystems) on top of that.  As
     stated above, this works with Device Trees and ACPI and allows
     device drivers to be written in a platform firmware (DT or ACPI)
     agnostic way.  The at25, leds-gpio and gpio_keys_polled drivers are
     now going to use this new interface and the GPIO subsystem is
     additionally modified to allow device drivers to assign names to
     GPIO resources returned by ACPI _CRS objects (in case _DSD is not
     present or does not provide the expected data).  The changes in
     this set are mostly from Mika Westerberg, Rafael J Wysocki, Aaron
     Lu, and Darren Hart with some fixes from others (Fabio Estevam,
     Geert Uytterhoeven).

   - Support for Hardware Managed Performance States (HWP) as described
     in Volume 3, section 14.4, of the Intel SDM in the intel_pstate
     driver.  CPUID is used to detect whether or not the feature is
     supported by the processor.  If supported, it will be enabled
     automatically unless the intel_pstate=no_hwp switch is present in
     the kernel command line.  From Dirk Brandewie.

   - New Intel Broadwell-H ID for intel_pstate (Dirk Brandewie).

   - Support for a firmware interface based on ACPI operation regions used
     by the PMIC chips on the Intel Baytrail-T and Baytrail-T-CR
     platforms for power resource control and thermal management (Aaron
     Lu).

   - Limited support for retrieving off-the-hierarchy dependencies
     between devices from ACPI _DEP device configuration objects and
     deferred probing support for the ACPI battery driver based on the
     _DEP information to make that driver work on Asus T100A (Lan
     Tianyu).

   - New cpufreq driver for the Loongson1B processor (Kelvin Cheung).

   - ACPICA update to upstream revision 20141107 which only affects
     tools (Bob Moore).

   - Fixes for race conditions in ACPICA's interrupt handling code
     and in the ACPI code related to system suspend and resume (Lv Zheng
     and Rafael J Wysocki).

   - ACPI core fix for an RCU-related issue in the ioremap() regions
     management code that slowed down significantly after CPUs had been
     allowed to enter idle states even if they'd had RCU callbacks
     queued and triggered some problems in a certain proprietary graphics
     driver (and elsewhere).  The fix replaces synchronize_rcu() in that
     code with synchronize_rcu_expedited() which makes the issue go
     away.  From Konstantin Khlebnikov.

   - ACPI LPSS (Low-Power Subsystem) driver fix to handle power
     management of the DMA engine included in the LPSS correctly.  The
     problem is that the DMA engine doesn't have ACPI PM support of its
     own and it simply is turned off when the last LPSS device having
     ACPI PM support goes into D3cold.  To work around that, the PM
     domain used by the ACPI LPSS driver is redesigned so that at least one
     device with ACPI PM support will be on as long as the DMA engine is
     in use.  From Andy Shevchenko.

   - ACPI backlight driver fix to avoid using it on "Win8-compatible"
     systems where it doesn't work and where it was used by default by
     mistake (Aaron Lu).

   - Assorted minor ACPI core fixes and cleanups from Tomasz Nowicki,
     Sudeep Holla, Huang Rui, Hanjun Guo, Fabian Frederick, and Ashwin
     Chaugule (mostly related to the upcoming ARM64 support).

   - Intel RAPL (Running Average Power Limit) power capping driver fixes
     and improvements including new processor IDs (Jacob Pan).

   - Generic power domains modification to power up domains after
     attaching devices to them to meet the expectations of device
     drivers and bus types assuming devices to be accessible at probe
     time (Ulf Hansson).

   - Preliminary support for controlling device clocks from the generic
     power domains core code and modifications of the ARM/shmobile
     platform to use that feature (Ulf Hansson).

   - Assorted minor fixes and cleanups of the generic power domains core
     code (Ulf Hansson, Geert Uytterhoeven).

   - Assorted minor fixes and cleanups of the device clocks control code
     in the PM core (Geert Uytterhoeven, Grygorii Strashko).

   - Consolidation of device power management Kconfig options by making
     CONFIG_PM_SLEEP select CONFIG_PM_RUNTIME and removing the latter
     which is now redundant (Rafael J Wysocki and Kevin Hilman).  That
     is the first batch of the changes needed for this purpose.

   - Core device runtime power management support code cleanup related
     to the execution of callbacks (Andrzej Hajda).

   - cpuidle ARM support improvements (Lorenzo Pieralisi).

   - cpuidle cleanup related to the CPUIDLE_FLAG_TIME_VALID flag and a
     new MAINTAINERS entry for ARM Exynos cpuidle (Daniel Lezcano and
     Bartlomiej Zolnierkiewicz).

   - New cpufreq driver callback (->ready) to be executed when the
     cpufreq core is ready to use a given policy object and cpufreq-dt
     driver modification to use that callback for cooling device
     registration (Viresh Kumar).

   - cpufreq core fixes and cleanups (Viresh Kumar, Vince Hsu, James
     Geboski, Tomeu Vizoso).

   - Assorted fixes and cleanups in the cpufreq-pcc, intel_pstate,
     cpufreq-dt, pxa2xx cpufreq drivers (Lenny Szubowicz, Ethan Zhao,
     Stefan Wahren, Petr Cvek).

   - OPP (Operating Performance Points) framework modification to allow
     OPPs to be removed too and update of a few cpufreq drivers
     (cpufreq-dt, exynos5440, imx6q, cpufreq) to remove OPPs (added
     during initialization) on driver removal (Viresh Kumar).

   - Hibernation core fixes and cleanups (Tina Ruchandani and Markus
     Elfring).

   - PM Kconfig fix related to CPU power management (Pankaj Dubey).

   - cpupower tool fix (Prarit Bhargava)"

* tag 'pm+acpi-3.19-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (120 commits)
  i2c-omap / PM: Drop CONFIG_PM_RUNTIME from i2c-omap.c
  dmaengine / PM: Replace CONFIG_PM_RUNTIME with CONFIG_PM
  tools: cpupower: fix return checks for sysfs_get_idlestate_count()
  drivers: sh / PM: Replace CONFIG_PM_RUNTIME with CONFIG_PM
  e1000e / igb / PM: Eliminate CONFIG_PM_RUNTIME
  MMC / PM: Replace CONFIG_PM_RUNTIME with CONFIG_PM
  MFD / PM: Replace CONFIG_PM_RUNTIME with CONFIG_PM
  misc / PM: Replace CONFIG_PM_RUNTIME with CONFIG_PM
  media / PM: Replace CONFIG_PM_RUNTIME with CONFIG_PM
  input / PM: Replace CONFIG_PM_RUNTIME with CONFIG_PM
  leds: leds-gpio: Fix multiple instances registration without 'label' property
  iio / PM: Replace CONFIG_PM_RUNTIME with CONFIG_PM
  hsi / OMAP / PM: Replace CONFIG_PM_RUNTIME with CONFIG_PM
  i2c-hid / PM: Replace CONFIG_PM_RUNTIME with CONFIG_PM
  drm / exynos / PM: Replace CONFIG_PM_RUNTIME with CONFIG_PM
  gpio / PM: Replace CONFIG_PM_RUNTIME with CONFIG_PM
  hwrandom / exynos / PM: Use CONFIG_PM in #ifdef
  block / PM: Replace CONFIG_PM_RUNTIME with CONFIG_PM
  USB / PM: Drop CONFIG_PM_RUNTIME from the USB core
  PM: Merge the SET*_RUNTIME_PM_OPS() macros
  ...
Linus Torvalds 2014-12-10 21:17:00 -08:00
commit 92a578b064
220 changed files with 4878 additions and 1527 deletions


@ -32,10 +32,9 @@ Date: January 2008
KernelVersion: 2.6.25
Contact: Sarah Sharp <sarah.a.sharp@intel.com>
Description:
If CONFIG_PM_RUNTIME is enabled then this file
is present. When read, it returns the total time (in msec)
that the USB device has been connected to the machine. This
file is read-only.
If CONFIG_PM is enabled, then this file is present. When read,
it returns the total time (in msec) that the USB device has been
connected to the machine. This file is read-only.
Users:
PowerTOP <powertop@lists.01.org>
https://01.org/powertop/
@ -45,10 +44,9 @@ Date: January 2008
KernelVersion: 2.6.25
Contact: Sarah Sharp <sarah.a.sharp@intel.com>
Description:
If CONFIG_PM_RUNTIME is enabled then this file
is present. When read, it returns the total time (in msec)
that the USB device has been active, i.e. not in a suspended
state. This file is read-only.
If CONFIG_PM is enabled, then this file is present. When read,
it returns the total time (in msec) that the USB device has been
active, i.e. not in a suspended state. This file is read-only.
Tools can use this file and the connected_duration file to
compute the percentage of time that a device has been active.


@ -104,16 +104,15 @@ What: /sys/bus/usb/devices/.../power/usb2_hardware_lpm
Date: September 2011
Contact: Andiry Xu <andiry.xu@amd.com>
Description:
If CONFIG_PM_RUNTIME is set and a USB 2.0 lpm-capable device
is plugged in to a xHCI host which support link PM, it will
perform a LPM test; if the test is passed and host supports
USB2 hardware LPM (xHCI 1.0 feature), USB2 hardware LPM will
be enabled for the device and the USB device directory will
contain a file named power/usb2_hardware_lpm. The file holds
a string value (enable or disable) indicating whether or not
USB2 hardware LPM is enabled for the device. Developer can
write y/Y/1 or n/N/0 to the file to enable/disable the
feature.
If CONFIG_PM is set and a USB 2.0 lpm-capable device is plugged
in to a xHCI host which support link PM, it will perform a LPM
test; if the test is passed and host supports USB2 hardware LPM
(xHCI 1.0 feature), USB2 hardware LPM will be enabled for the
device and the USB device directory will contain a file named
power/usb2_hardware_lpm. The file holds a string value (enable
or disable) indicating whether or not USB2 hardware LPM is
enabled for the device. Developer can write y/Y/1 or n/N/0 to
the file to enable/disable the feature.
What: /sys/bus/usb/devices/.../removable
Date: February 2012


@ -0,0 +1,96 @@
_DSD Device Properties Related to GPIO
--------------------------------------
With the release of ACPI 5.1 and the _DSD configuration object, names
can finally be given to GPIOs (and other things as well) returned by
_CRS. Previously, we were only able to use an integer index to find
the corresponding GPIO, which is pretty error prone (it depends on
the _CRS output ordering, for example).
With _DSD we can now query GPIOs using a name instead of an integer
index, like the ASL example below shows:
// Bluetooth device with reset and shutdown GPIOs
Device (BTH)
{
Name (_HID, ...)
Name (_CRS, ResourceTemplate ()
{
GpioIo (Exclusive, PullUp, 0, 0, IoRestrictionInputOnly,
"\\_SB.GPO0", 0, ResourceConsumer) {15}
GpioIo (Exclusive, PullUp, 0, 0, IoRestrictionInputOnly,
"\\_SB.GPO0", 0, ResourceConsumer) {27, 31}
})
Name (_DSD, Package ()
{
ToUUID("daffd814-6eba-4d8c-8a91-bc9bbf4aa301"),
Package ()
{
Package () {"reset-gpio", Package() {^BTH, 1, 1, 0 }},
Package () {"shutdown-gpio", Package() {^BTH, 0, 0, 0 }},
}
})
}
The format of the supported GPIO property is:
Package () { "name", Package () { ref, index, pin, active_low }}
ref - The device that has _CRS containing GpioIo()/GpioInt() resources,
typically this is the device itself (BTH in our case).
index - Index of the GpioIo()/GpioInt() resource in _CRS starting from zero.
pin - Pin in the GpioIo()/GpioInt() resource. Typically this is zero.
active_low - If 1 the GPIO is marked as active_low.
Since ACPI GpioIo() resource does not have a field saying whether it is
active low or high, the "active_low" argument can be used here. Setting
it to 1 marks the GPIO as active low.
In our Bluetooth example the "reset-gpio" refers to the second GpioIo()
resource, second pin in that resource with the GPIO number of 31.
ACPI GPIO Mappings Provided by Drivers
--------------------------------------
There are systems in which the ACPI tables do not contain _DSD but provide _CRS
with GpioIo()/GpioInt() resources and device drivers still need to work with
them.
In those cases ACPI device identification objects, _HID, _CID, _CLS, _SUB, _HRV,
available to the driver can be used to identify the device and that is supposed
to be sufficient to determine the meaning and purpose of all of the GPIO lines
listed by the GpioIo()/GpioInt() resources returned by _CRS. In other words,
the driver is supposed to know what to use the GpioIo()/GpioInt() resources for
once it has identified the device. Having done that, it can simply assign names
to the GPIO lines it is going to use and provide the GPIO subsystem with a
mapping between those names and the ACPI GPIO resources corresponding to them.
To do that, the driver needs to define a mapping table as a NULL-terminated
array of struct acpi_gpio_mapping objects that each contain a name, a pointer
to an array of line data (struct acpi_gpio_params) objects and the size of that
array. Each struct acpi_gpio_params object consists of three fields,
crs_entry_index, line_index, active_low, representing the index of the target
GpioIo()/GpioInt() resource in _CRS starting from zero, the index of the target
line in that resource starting from zero, and the active-low flag for that line,
respectively, in analogy with the _DSD GPIO property format specified above.
For the example Bluetooth device discussed previously the data structures in
question would look like this:
static const struct acpi_gpio_params reset_gpio = { 1, 1, false };
static const struct acpi_gpio_params shutdown_gpio = { 0, 0, false };
static const struct acpi_gpio_mapping bluetooth_acpi_gpios[] = {
{ "reset-gpio", &reset_gpio, 1 },
{ "shutdown-gpio", &shutdown_gpio, 1 },
{ },
};
Next, the mapping table needs to be passed as the second argument to
acpi_dev_add_driver_gpios() that will register it with the ACPI device object
pointed to by its first argument. That should be done in the driver's .probe()
routine. On removal, the driver should unregister its GPIO mapping table by
calling acpi_dev_remove_driver_gpios() on the ACPI device object where that
table was previously registered.


@ -1,17 +1,28 @@
Intel P-state driver
--------------------
This driver implements a scaling driver with an internal governor for
Intel Core processors. The driver follows the same model as the
Transmeta scaling driver (longrun.c) and implements the setpolicy()
instead of target(). Scaling drivers that implement setpolicy() are
assumed to implement internal governors by the cpufreq core. All the
logic for selecting the current P state is contained within the
driver; no external governor is used by the cpufreq core.
This driver provides an interface to control the P state selection for
SandyBridge+ Intel processors. The driver can operate in two different
modes, based on the processor model: legacy mode and Hardware P state (HWP)
mode.
Intel SandyBridge+ processors are supported.
In legacy mode the driver implements a scaling driver with an internal
governor for Intel Core processors. The driver follows the same model
as the Transmeta scaling driver (longrun.c) and implements the
setpolicy() instead of target(). Scaling drivers that implement
setpolicy() are assumed to implement internal governors by the cpufreq
core. All the logic for selecting the current P state is contained
within the driver; no external governor is used by the cpufreq core.
New sysfs files for controlling P state selection have been added to
In HWP mode P state selection is implemented in the processor
itself. The driver provides the interfaces between the cpufreq core and
the processor to control P state selection based on user preferences
and reporting frequency to the cpufreq core. In this mode the
internal governor code is disabled.
In addition to the interfaces provided by the cpufreq core for
controlling frequency the driver provides sysfs files for
controlling P state selection. These files have been added to
/sys/devices/system/cpu/intel_pstate/
max_perf_pct: limits the maximum P state that will be requested by
@ -33,7 +44,9 @@ frequency is fiction for Intel Core processors. Even if the scaling
driver selects a single P state the actual frequency the processor
will run at is selected by the processor itself.
New debugfs files have also been added to /sys/kernel/debug/pstate_snb/
For legacy mode, debugfs files have also been added to allow tuning of
the internal governor algorithm. These files are located at
/sys/kernel/debug/pstate_snb/. These files are NOT present in HWP mode.
deadband
d_gain_pct


@ -317,6 +317,26 @@ follows:
In such systems entry-latency-us + exit-latency-us
will exceed wakeup-latency-us by this duration.
- status:
Usage: Optional
Value type: <string>
Definition: A standard device tree property [5] that indicates
the operational status of an idle-state.
If present, it shall be:
"okay": to indicate that the idle state is
operational.
"disabled": to indicate that the idle state has
been disabled in firmware so it is not
operational.
If the property is not present the idle-state must
be considered operational.
- idle-state-name:
Usage: Optional
Value type: <string>
Definition: A string used as a descriptive name for the idle
state.
In addition to the properties listed above, a state node may require
additional properties specifics to the entry-method defined in the
idle-states node, please refer to the entry-method bindings


@ -219,6 +219,24 @@ part of the IRQ interface, e.g. IRQF_TRIGGER_FALLING, as are system wakeup
capabilities.
GPIOs and ACPI
==============
On ACPI systems, GPIOs are described by GpioIo()/GpioInt() resources listed by
the _CRS configuration objects of devices. Those resources do not provide
connection IDs (names) for GPIOs, so it is necessary to use an additional
mechanism for this purpose.
Systems compliant with ACPI 5.1 or newer may provide a _DSD configuration object
which, among other things, may be used to provide connection IDs for specific
GPIOs described by the GpioIo()/GpioInt() resources in _CRS. If that is the
case, it will be handled by the GPIO subsystem automatically. However, if the
_DSD is not present, the mappings between GpioIo()/GpioInt() resources and GPIO
connection IDs need to be provided by device drivers.
For details refer to Documentation/acpi/gpio-properties.txt
Interacting With the Legacy GPIO Subsystem
==========================================
Many kernel subsystems still handle GPIOs using the legacy integer-based


@ -1446,6 +1446,9 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
disable
Do not enable intel_pstate as the default
scaling driver for the supported processors
no_hwp
Do not enable hardware P state control (HWP)
if available.
intremap= [X86-64, Intel-IOMMU]
on enable Interrupt Remapping (default)


@ -47,14 +47,15 @@ dynamic PM is implemented in the USB subsystem, although system PM is
covered to some extent (see Documentation/power/*.txt for more
information about system PM).
Note: Dynamic PM support for USB is present only if the kernel was
built with CONFIG_USB_SUSPEND enabled (which depends on
CONFIG_PM_RUNTIME). System PM support is present only if the kernel
was built with CONFIG_SUSPEND or CONFIG_HIBERNATION enabled.
System PM support is present only if the kernel was built with CONFIG_SUSPEND
or CONFIG_HIBERNATION enabled. Dynamic PM support for USB is present whenever
the kernel was built with CONFIG_PM enabled.
(Starting with the 3.10 kernel release, dynamic PM support for USB is
present whenever the kernel was built with CONFIG_PM_RUNTIME enabled.
The CONFIG_USB_SUSPEND option has been eliminated.)
[Historically, dynamic PM support for USB was present only if the
kernel had been built with CONFIG_USB_SUSPEND enabled (which depended on
CONFIG_PM_RUNTIME). Starting with the 3.10 kernel release, dynamic PM support
for USB was present whenever the kernel was built with CONFIG_PM_RUNTIME
enabled. The CONFIG_USB_SUSPEND option had been eliminated.]
What is Remote Wakeup?


@ -2655,6 +2655,16 @@ T: git git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm.git
S: Maintained
F: drivers/cpuidle/cpuidle-big_little.c
CPUIDLE DRIVER - ARM EXYNOS
M: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
M: Daniel Lezcano <daniel.lezcano@linaro.org>
M: Kukjin Kim <kgene@kernel.org>
L: linux-pm@vger.kernel.org
L: linux-samsung-soc@vger.kernel.org
S: Supported
F: drivers/cpuidle/cpuidle-exynos.c
F: arch/arm/mach-exynos/pm.c
CPUIDLE DRIVERS
M: Rafael J. Wysocki <rjw@rjwysocki.net>
M: Daniel Lezcano <daniel.lezcano@linaro.org>


@ -15,7 +15,6 @@ static inline int arm_cpuidle_simple_enter(struct cpuidle_device *dev,
.exit_latency = 1,\
.target_residency = 1,\
.power_usage = p,\
.flags = CPUIDLE_FLAG_TIME_VALID,\
.name = "WFI",\
.desc = "ARM WFI",\
}


@ -66,7 +66,6 @@ static struct cpuidle_driver davinci_idle_driver = {
.enter = davinci_enter_idle,
.exit_latency = 10,
.target_residency = 10000,
.flags = CPUIDLE_FLAG_TIME_VALID,
.name = "DDR SR",
.desc = "WFI and DDR Self Refresh",
},


@ -24,7 +24,6 @@ static struct cpuidle_driver imx5_cpuidle_driver = {
.enter = imx5_cpuidle_enter,
.exit_latency = 2,
.target_residency = 1,
.flags = CPUIDLE_FLAG_TIME_VALID,
.name = "IMX5 SRPG",
.desc = "CPU state retained,powered off",
},


@ -53,8 +53,7 @@ static struct cpuidle_driver imx6q_cpuidle_driver = {
{
.exit_latency = 50,
.target_residency = 75,
.flags = CPUIDLE_FLAG_TIME_VALID |
CPUIDLE_FLAG_TIMER_STOP,
.flags = CPUIDLE_FLAG_TIMER_STOP,
.enter = imx6q_enter_wait,
.name = "WAIT",
.desc = "Clock off",


@ -40,8 +40,7 @@ static struct cpuidle_driver imx6sl_cpuidle_driver = {
{
.exit_latency = 50,
.target_residency = 75,
.flags = CPUIDLE_FLAG_TIME_VALID |
CPUIDLE_FLAG_TIMER_STOP,
.flags = CPUIDLE_FLAG_TIMER_STOP,
.enter = imx6sl_enter_wait,
.name = "WAIT",
.desc = "Clock off",


@ -265,7 +265,6 @@ static struct cpuidle_driver omap3_idle_driver = {
.enter = omap3_enter_idle_bm,
.exit_latency = 2 + 2,
.target_residency = 5,
.flags = CPUIDLE_FLAG_TIME_VALID,
.name = "C1",
.desc = "MPU ON + CORE ON",
},
@ -273,7 +272,6 @@ static struct cpuidle_driver omap3_idle_driver = {
.enter = omap3_enter_idle_bm,
.exit_latency = 10 + 10,
.target_residency = 30,
.flags = CPUIDLE_FLAG_TIME_VALID,
.name = "C2",
.desc = "MPU ON + CORE ON",
},
@ -281,7 +279,6 @@ static struct cpuidle_driver omap3_idle_driver = {
.enter = omap3_enter_idle_bm,
.exit_latency = 50 + 50,
.target_residency = 300,
.flags = CPUIDLE_FLAG_TIME_VALID,
.name = "C3",
.desc = "MPU RET + CORE ON",
},
@ -289,7 +286,6 @@ static struct cpuidle_driver omap3_idle_driver = {
.enter = omap3_enter_idle_bm,
.exit_latency = 1500 + 1800,
.target_residency = 4000,
.flags = CPUIDLE_FLAG_TIME_VALID,
.name = "C4",
.desc = "MPU OFF + CORE ON",
},
@ -297,7 +293,6 @@ static struct cpuidle_driver omap3_idle_driver = {
.enter = omap3_enter_idle_bm,
.exit_latency = 2500 + 7500,
.target_residency = 12000,
.flags = CPUIDLE_FLAG_TIME_VALID,
.name = "C5",
.desc = "MPU RET + CORE RET",
},
@ -305,7 +300,6 @@ static struct cpuidle_driver omap3_idle_driver = {
.enter = omap3_enter_idle_bm,
.exit_latency = 3000 + 8500,
.target_residency = 15000,
.flags = CPUIDLE_FLAG_TIME_VALID,
.name = "C6",
.desc = "MPU OFF + CORE RET",
},
@ -313,7 +307,6 @@ static struct cpuidle_driver omap3_idle_driver = {
.enter = omap3_enter_idle_bm,
.exit_latency = 10000 + 30000,
.target_residency = 30000,
.flags = CPUIDLE_FLAG_TIME_VALID,
.name = "C7",
.desc = "MPU OFF + CORE OFF",
},


@ -196,7 +196,6 @@ static struct cpuidle_driver omap4_idle_driver = {
/* C1 - CPU0 ON + CPU1 ON + MPU ON */
.exit_latency = 2 + 2,
.target_residency = 5,
.flags = CPUIDLE_FLAG_TIME_VALID,
.enter = omap_enter_idle_simple,
.name = "C1",
.desc = "CPUx ON, MPUSS ON"
@ -205,7 +204,7 @@ static struct cpuidle_driver omap4_idle_driver = {
/* C2 - CPU0 OFF + CPU1 OFF + MPU CSWR */
.exit_latency = 328 + 440,
.target_residency = 960,
.flags = CPUIDLE_FLAG_TIME_VALID | CPUIDLE_FLAG_COUPLED,
.flags = CPUIDLE_FLAG_COUPLED,
.enter = omap_enter_idle_coupled,
.name = "C2",
.desc = "CPUx OFF, MPUSS CSWR",
@ -214,7 +213,7 @@ static struct cpuidle_driver omap4_idle_driver = {
/* C3 - CPU0 OFF + CPU1 OFF + MPU OSWR */
.exit_latency = 460 + 518,
.target_residency = 1100,
.flags = CPUIDLE_FLAG_TIME_VALID | CPUIDLE_FLAG_COUPLED,
.flags = CPUIDLE_FLAG_COUPLED,
.enter = omap_enter_idle_coupled,
.name = "C3",
.desc = "CPUx OFF, MPUSS OSWR",


@ -41,7 +41,7 @@ static void h1940bt_enable(int on)
mdelay(10);
gpio_set_value(S3C2410_GPH(1), 0);
h1940_led_blink_set(-EINVAL, GPIO_LED_BLINK, NULL, NULL);
h1940_led_blink_set(NULL, GPIO_LED_BLINK, NULL, NULL);
}
else {
gpio_set_value(S3C2410_GPH(1), 1);
@ -50,7 +50,7 @@ static void h1940bt_enable(int on)
mdelay(10);
gpio_set_value(H1940_LATCH_BLUETOOTH_POWER, 0);
h1940_led_blink_set(-EINVAL, GPIO_LED_NO_BLINK_LOW, NULL, NULL);
h1940_led_blink_set(NULL, GPIO_LED_NO_BLINK_LOW, NULL, NULL);
}
}


@ -19,8 +19,10 @@
#define H1940_SUSPEND_RESUMEAT (0x30081000)
#define H1940_SUSPEND_CHECK (0x30080000)
struct gpio_desc;
extern void h1940_pm_return(void);
extern int h1940_led_blink_set(unsigned gpio, int state,
extern int h1940_led_blink_set(struct gpio_desc *desc, int state,
unsigned long *delay_on,
unsigned long *delay_off);


@ -359,10 +359,11 @@ static struct platform_device h1940_battery = {
static DEFINE_SPINLOCK(h1940_blink_spin);
int h1940_led_blink_set(unsigned gpio, int state,
int h1940_led_blink_set(struct gpio_desc *desc, int state,
unsigned long *delay_on, unsigned long *delay_off)
{
int blink_gpio, check_gpio1, check_gpio2;
int gpio = desc ? desc_to_gpio(desc) : -EINVAL;
switch (gpio) {
case H1940_LATCH_LED_GREEN:


@ -250,9 +250,10 @@ static void rx1950_disable_charger(void)
static DEFINE_SPINLOCK(rx1950_blink_spin);
static int rx1950_led_blink_set(unsigned gpio, int state,
static int rx1950_led_blink_set(struct gpio_desc *desc, int state,
unsigned long *delay_on, unsigned long *delay_off)
{
int gpio = desc_to_gpio(desc);
int blink_gpio, check_gpio;
switch (gpio) {


@ -48,7 +48,6 @@ static struct cpuidle_driver s3c64xx_cpuidle_driver = {
.enter = s3c64xx_enter_idle,
.exit_latency = 1,
.target_residency = 1,
.flags = CPUIDLE_FLAG_TIME_VALID,
.name = "IDLE",
.desc = "System active, ARM gated",
},


@ -83,9 +83,8 @@ static void r8a7779_init_pm_domain(struct r8a7779_pm_domain *r8a7779_pd)
{
struct generic_pm_domain *genpd = &r8a7779_pd->genpd;
genpd->flags = GENPD_FLAG_PM_CLK;
pm_genpd_init(genpd, NULL, false);
genpd->dev_ops.stop = pm_clk_suspend;
genpd->dev_ops.start = pm_clk_resume;
genpd->dev_ops.active_wakeup = pd_active_wakeup;
genpd->power_off = pd_power_down;
genpd->power_on = pd_power_up;


@ -106,9 +106,8 @@ static void rmobile_init_pm_domain(struct rmobile_pm_domain *rmobile_pd)
struct generic_pm_domain *genpd = &rmobile_pd->genpd;
struct dev_power_governor *gov = rmobile_pd->gov;
genpd->flags = GENPD_FLAG_PM_CLK;
pm_genpd_init(genpd, gov ? : &simple_qos_governor, false);
genpd->dev_ops.stop = pm_clk_suspend;
genpd->dev_ops.start = pm_clk_resume;
genpd->dev_ops.active_wakeup = rmobile_pd_active_wakeup;
genpd->power_off = rmobile_pd_power_down;
genpd->power_on = rmobile_pd_power_up;


@ -423,7 +423,6 @@ static struct cpuidle_driver sh7372_cpuidle_driver = {
.desc = "Core Standby Mode",
.exit_latency = 10,
.target_residency = 20 + 10,
.flags = CPUIDLE_FLAG_TIME_VALID,
.enter = sh7372_enter_core_standby,
},
.states[2] = {
@ -431,7 +430,6 @@ static struct cpuidle_driver sh7372_cpuidle_driver = {
.desc = "A3SM PLL ON",
.exit_latency = 20,
.target_residency = 30 + 20,
.flags = CPUIDLE_FLAG_TIME_VALID,
.enter = sh7372_enter_a3sm_pll_on,
},
.states[3] = {
@ -439,7 +437,6 @@ static struct cpuidle_driver sh7372_cpuidle_driver = {
.desc = "A3SM PLL OFF",
.exit_latency = 120,
.target_residency = 30 + 120,
.flags = CPUIDLE_FLAG_TIME_VALID,
.enter = sh7372_enter_a3sm_pll_off,
},
.states[4] = {
@ -447,7 +444,6 @@ static struct cpuidle_driver sh7372_cpuidle_driver = {
.desc = "A4S PLL OFF",
.exit_latency = 240,
.target_residency = 30 + 240,
.flags = CPUIDLE_FLAG_TIME_VALID,
.enter = sh7372_enter_a4s,
.disabled = true,
},


@ -75,7 +75,6 @@ static struct cpuidle_driver tegra_idle_driver = {
.exit_latency = 500,
.target_residency = 1000,
.power_usage = 0,
.flags = CPUIDLE_FLAG_TIME_VALID,
.name = "powered-down",
.desc = "CPU power gated",
},


@ -59,8 +59,7 @@ static struct cpuidle_driver tegra_idle_driver = {
.exit_latency = 5000,
.target_residency = 10000,
.power_usage = 0,
.flags = CPUIDLE_FLAG_TIME_VALID |
CPUIDLE_FLAG_COUPLED,
.flags = CPUIDLE_FLAG_COUPLED,
.name = "powered-down",
.desc = "CPU power gated",
},


@ -56,7 +56,6 @@ static struct cpuidle_driver tegra_idle_driver = {
.exit_latency = 2000,
.target_residency = 2200,
.power_usage = 0,
.flags = CPUIDLE_FLAG_TIME_VALID,
.name = "powered-down",
.desc = "CPU power gated",
},


@ -306,9 +306,10 @@ EXPORT_SYMBOL(orion_gpio_set_blink);
#define ORION_BLINK_HALF_PERIOD 100 /* ms */
int orion_gpio_led_blink_set(unsigned gpio, int state,
int orion_gpio_led_blink_set(struct gpio_desc *desc, int state,
unsigned long *delay_on, unsigned long *delay_off)
{
unsigned gpio = desc_to_gpio(desc);
if (delay_on && delay_off && !*delay_on && !*delay_off)
*delay_on = *delay_off = ORION_BLINK_HALF_PERIOD;


@ -14,12 +14,15 @@
#include <linux/init.h>
#include <linux/types.h>
#include <linux/irqdomain.h>
struct gpio_desc;
/*
* Orion-specific GPIO API extensions.
*/
void orion_gpio_set_unused(unsigned pin);
void orion_gpio_set_blink(unsigned pin, int blink);
int orion_gpio_led_blink_set(unsigned gpio, int state,
int orion_gpio_led_blink_set(struct gpio_desc *desc, int state,
unsigned long *delay_on, unsigned long *delay_off);
#define GPIO_INPUT_OK (1 << 0)


@ -11,7 +11,6 @@ config IA64
select PCI if (!IA64_HP_SIM)
select ACPI if (!IA64_HP_SIM)
select ARCH_MIGHT_HAVE_ACPI_PDC if ACPI
select PM if (!IA64_HP_SIM)
select HAVE_UNSTABLE_SCHED_CLOCK
select HAVE_IDE
select HAVE_OPROFILE
@ -233,6 +232,7 @@ config IA64_SGI_UV
config IA64_HP_SIM
bool "Ski-simulator"
select SWIOTLB
depends on !PM_RUNTIME
endchoice


@ -22,7 +22,6 @@ extern int mips_cpuidle_wait_enter(struct cpuidle_device *dev,
.exit_latency = 1,\
.target_residency = 1,\
.power_usage = UINT_MAX,\
.flags = CPUIDLE_FLAG_TIME_VALID,\
.name = "wait",\
.desc = "MIPS wait",\
}


@ -222,7 +222,6 @@ config CPU_SHX3
config ARCH_SHMOBILE
bool
select ARCH_SUSPEND_POSSIBLE
select PM
select PM_RUNTIME
config CPU_HAS_PMU


@ -59,7 +59,6 @@ static struct cpuidle_driver cpuidle_driver = {
.exit_latency = 1,
.target_residency = 1 * 2,
.power_usage = 3,
.flags = CPUIDLE_FLAG_TIME_VALID,
.enter = cpuidle_sleep_enter,
.name = "C1",
.desc = "SuperH Sleep Mode",
@ -68,7 +67,6 @@ static struct cpuidle_driver cpuidle_driver = {
.exit_latency = 100,
.target_residency = 1 * 2,
.power_usage = 1,
.flags = CPUIDLE_FLAG_TIME_VALID,
.enter = cpuidle_sleep_enter,
.name = "C2",
.desc = "SuperH Sleep Mode [SF]",
@ -78,7 +76,6 @@ static struct cpuidle_driver cpuidle_driver = {
.exit_latency = 2300,
.target_residency = 1 * 2,
.power_usage = 1,
.flags = CPUIDLE_FLAG_TIME_VALID,
.enter = cpuidle_sleep_enter,
.name = "C3",
.desc = "SuperH Mobile Standby Mode [SF]",


@ -189,6 +189,11 @@
#define X86_FEATURE_DTHERM ( 7*32+ 7) /* Digital Thermal Sensor */
#define X86_FEATURE_HW_PSTATE ( 7*32+ 8) /* AMD HW-PState */
#define X86_FEATURE_PROC_FEEDBACK ( 7*32+ 9) /* AMD ProcFeedbackInterface */
#define X86_FEATURE_HWP ( 7*32+ 10) /* "hwp" Intel HWP */
#define X86_FEATURE_HWP_NOITFY ( 7*32+ 11) /* Intel HWP_NOTIFY */
#define X86_FEATURE_HWP_ACT_WINDOW ( 7*32+ 12) /* Intel HWP_ACT_WINDOW */
#define X86_FEATURE_HWP_EPP ( 7*32+13) /* Intel HWP_EPP */
#define X86_FEATURE_HWP_PKG_REQ ( 7*32+14) /* Intel HWP_PKG_REQ */
/* Virtualization flags: Linux defined, word 8 */
#define X86_FEATURE_TPR_SHADOW ( 8*32+ 0) /* Intel TPR Shadow */


@ -152,6 +152,45 @@
#define MSR_CC6_DEMOTION_POLICY_CONFIG 0x00000668
#define MSR_MC6_DEMOTION_POLICY_CONFIG 0x00000669
/* Hardware P state interface */
#define MSR_PPERF 0x0000064e
#define MSR_PERF_LIMIT_REASONS 0x0000064f
#define MSR_PM_ENABLE 0x00000770
#define MSR_HWP_CAPABILITIES 0x00000771
#define MSR_HWP_REQUEST_PKG 0x00000772
#define MSR_HWP_INTERRUPT 0x00000773
#define MSR_HWP_REQUEST 0x00000774
#define MSR_HWP_STATUS 0x00000777
/* CPUID.6.EAX */
#define HWP_BASE_BIT (1<<7)
#define HWP_NOTIFICATIONS_BIT (1<<8)
#define HWP_ACTIVITY_WINDOW_BIT (1<<9)
#define HWP_ENERGY_PERF_PREFERENCE_BIT (1<<10)
#define HWP_PACKAGE_LEVEL_REQUEST_BIT (1<<11)
/* IA32_HWP_CAPABILITIES */
#define HWP_HIGHEST_PERF(x) (x & 0xff)
#define HWP_GUARANTEED_PERF(x) ((x & (0xff << 8)) >>8)
#define HWP_MOSTEFFICIENT_PERF(x) ((x & (0xff << 16)) >>16)
#define HWP_LOWEST_PERF(x) ((x & (0xff << 24)) >>24)
/* IA32_HWP_REQUEST */
#define HWP_MIN_PERF(x) (x & 0xff)
#define HWP_MAX_PERF(x) ((x & 0xff) << 8)
#define HWP_DESIRED_PERF(x) ((x & 0xff) << 16)
#define HWP_ENERGY_PERF_PREFERENCE(x) ((x & 0xff) << 24)
#define HWP_ACTIVITY_WINDOW(x) ((x & 0xff3) << 32)
#define HWP_PACKAGE_CONTROL(x) ((x & 0x1) << 42)
/* IA32_HWP_STATUS */
#define HWP_GUARANTEED_CHANGE(x) (x & 0x1)
#define HWP_EXCURSION_TO_MINIMUM(x) (x & 0x4)
/* IA32_HWP_INTERRUPT */
#define HWP_CHANGE_TO_GUARANTEED_INT(x) (x & 0x1)
#define HWP_EXCURSION_TO_MINIMUM_INT(x) (x & 0x2)
#define MSR_AMD64_MC0_MASK 0xc0010044
#define MSR_IA32_MCx_CTL(x) (MSR_IA32_MC0_CTL + 4*(x))
@ -346,6 +385,8 @@
#define MSR_IA32_TEMPERATURE_TARGET 0x000001a2
#define MSR_MISC_PWR_MGMT 0x000001aa
#define MSR_IA32_ENERGY_PERF_BIAS 0x000001b0
#define ENERGY_PERF_BIAS_PERFORMANCE 0
#define ENERGY_PERF_BIAS_NORMAL 6


@ -378,7 +378,6 @@ static struct cpuidle_driver apm_idle_driver = {
{ /* entry 1 is for APM idle */
.name = "APM",
.desc = "APM idle",
.flags = CPUIDLE_FLAG_TIME_VALID,
.exit_latency = 250, /* WAG */
.target_residency = 500, /* WAG */
.enter = &apm_cpu_idle


@ -36,6 +36,11 @@ void init_scattered_cpuid_features(struct cpuinfo_x86 *c)
{ X86_FEATURE_ARAT, CR_EAX, 2, 0x00000006, 0 },
{ X86_FEATURE_PLN, CR_EAX, 4, 0x00000006, 0 },
{ X86_FEATURE_PTS, CR_EAX, 6, 0x00000006, 0 },
{ X86_FEATURE_HWP, CR_EAX, 7, 0x00000006, 0 },
{ X86_FEATURE_HWP_NOITFY, CR_EAX, 8, 0x00000006, 0 },
{ X86_FEATURE_HWP_ACT_WINDOW, CR_EAX, 9, 0x00000006, 0 },
{ X86_FEATURE_HWP_EPP, CR_EAX,10, 0x00000006, 0 },
{ X86_FEATURE_HWP_PKG_REQ, CR_EAX,11, 0x00000006, 0 },
{ X86_FEATURE_APERFMPERF, CR_ECX, 0, 0x00000006, 0 },
{ X86_FEATURE_EPB, CR_ECX, 3, 0x00000006, 0 },
{ X86_FEATURE_HW_PSTATE, CR_EDX, 7, 0x80000007, 0 },


@ -1325,7 +1325,7 @@ void part_round_stats(int cpu, struct hd_struct *part)
}
EXPORT_SYMBOL_GPL(part_round_stats);
#ifdef CONFIG_PM_RUNTIME
#ifdef CONFIG_PM
static void blk_pm_put_request(struct request *rq)
{
if (rq->q->dev && !(rq->cmd_flags & REQ_PM) && !--rq->q->nr_pending)
@ -2134,7 +2134,7 @@ void blk_account_io_done(struct request *req)
}
}
#ifdef CONFIG_PM_RUNTIME
#ifdef CONFIG_PM
/*
* Don't process normal requests when queue is suspended
* or in the process of suspending/resuming
@ -3159,7 +3159,7 @@ void blk_finish_plug(struct blk_plug *plug)
}
EXPORT_SYMBOL(blk_finish_plug);
#ifdef CONFIG_PM_RUNTIME
#ifdef CONFIG_PM
/**
* blk_pm_runtime_init - Block layer runtime PM initialization routine
* @q: the queue of the device


@ -539,7 +539,7 @@ void elv_bio_merged(struct request_queue *q, struct request *rq,
e->type->ops.elevator_bio_merged_fn(q, rq, bio);
}
#ifdef CONFIG_PM_RUNTIME
#ifdef CONFIG_PM
static void blk_pm_requeue_request(struct request *rq)
{
if (rq->q->dev && !(rq->cmd_flags & REQ_PM))


@ -360,15 +360,14 @@ config ACPI_BGRT
config ACPI_REDUCED_HARDWARE_ONLY
bool "Hardware-reduced ACPI support only" if EXPERT
def_bool n
depends on ACPI
help
This config item changes the way the ACPI code is built. When this
option is selected, the kernel will use a specialized version of
ACPICA that ONLY supports the ACPI "reduced hardware" mode. The
resulting kernel will be smaller but it will also be restricted to
running in ACPI reduced hardware mode ONLY.
This config item changes the way the ACPI code is built. When this
option is selected, the kernel will use a specialized version of
ACPICA that ONLY supports the ACPI "reduced hardware" mode. The
resulting kernel will be smaller but it will also be restricted to
running in ACPI reduced hardware mode ONLY.
If you are unsure what to do, do not enable this option.
If you are unsure what to do, do not enable this option.
source "drivers/acpi/apei/Kconfig"
@ -394,4 +393,27 @@ config ACPI_EXTLOG
driver adds support for that functionality with corresponding
tracepoint which carries that information to userspace.
menuconfig PMIC_OPREGION
bool "PMIC (Power Management Integrated Circuit) operation region support"
help
Select this option to enable support for ACPI operation
region of the PMIC chip. The operation region can be used
to control power rails and sensor reading/writing on the
PMIC chip.
if PMIC_OPREGION
config CRC_PMIC_OPREGION
bool "ACPI operation region support for CrystalCove PMIC"
depends on INTEL_SOC_PMIC
help
This config adds ACPI operation region support for CrystalCove PMIC.
config XPOWER_PMIC_OPREGION
bool "ACPI operation region support for XPower AXP288 PMIC"
depends on AXP288_ADC = y
help
This config adds ACPI operation region support for XPower AXP288 PMIC.
endif
endif # ACPI


@ -47,6 +47,7 @@ acpi-y += int340x_thermal.o
acpi-y += power.o
acpi-y += event.o
acpi-y += sysfs.o
acpi-y += property.o
acpi-$(CONFIG_X86) += acpi_cmos_rtc.o
acpi-$(CONFIG_DEBUG_FS) += debugfs.o
acpi-$(CONFIG_ACPI_NUMA) += numa.o
@ -87,3 +88,7 @@ obj-$(CONFIG_ACPI_PROCESSOR_AGGREGATOR) += acpi_pad.o
obj-$(CONFIG_ACPI_APEI) += apei/
obj-$(CONFIG_ACPI_EXTLOG) += acpi_extlog.o
obj-$(CONFIG_PMIC_OPREGION) += pmic/intel_pmic.o
obj-$(CONFIG_CRC_PMIC_OPREGION) += pmic/intel_pmic_crc.o
obj-$(CONFIG_XPOWER_PMIC_OPREGION) += pmic/intel_pmic_xpower.o


@ -1,7 +1,7 @@
/*
* ACPI support for Intel Lynxpoint LPSS.
*
* Copyright (C) 2013, Intel Corporation
* Copyright (C) 2013, 2014, Intel Corporation
* Authors: Mika Westerberg <mika.westerberg@linux.intel.com>
* Rafael J. Wysocki <rafael.j.wysocki@intel.com>
*
@ -60,6 +60,8 @@ ACPI_MODULE_NAME("acpi_lpss");
#define LPSS_CLK_DIVIDER BIT(2)
#define LPSS_LTR BIT(3)
#define LPSS_SAVE_CTX BIT(4)
#define LPSS_DEV_PROXY BIT(5)
#define LPSS_PROXY_REQ BIT(6)
struct lpss_private_data;
@ -70,8 +72,10 @@ struct lpss_device_desc {
void (*setup)(struct lpss_private_data *pdata);
};
static struct device *proxy_device;
static struct lpss_device_desc lpss_dma_desc = {
.flags = LPSS_CLK,
.flags = LPSS_CLK | LPSS_PROXY_REQ,
};
struct lpss_private_data {
@ -146,22 +150,24 @@ static struct lpss_device_desc byt_pwm_dev_desc = {
};
static struct lpss_device_desc byt_uart_dev_desc = {
.flags = LPSS_CLK | LPSS_CLK_GATE | LPSS_CLK_DIVIDER | LPSS_SAVE_CTX,
.flags = LPSS_CLK | LPSS_CLK_GATE | LPSS_CLK_DIVIDER | LPSS_SAVE_CTX |
LPSS_DEV_PROXY,
.prv_offset = 0x800,
.setup = lpss_uart_setup,
};
static struct lpss_device_desc byt_spi_dev_desc = {
.flags = LPSS_CLK | LPSS_CLK_GATE | LPSS_CLK_DIVIDER | LPSS_SAVE_CTX,
.flags = LPSS_CLK | LPSS_CLK_GATE | LPSS_CLK_DIVIDER | LPSS_SAVE_CTX |
LPSS_DEV_PROXY,
.prv_offset = 0x400,
};
static struct lpss_device_desc byt_sdio_dev_desc = {
.flags = LPSS_CLK,
.flags = LPSS_CLK | LPSS_DEV_PROXY,
};
static struct lpss_device_desc byt_i2c_dev_desc = {
.flags = LPSS_CLK | LPSS_SAVE_CTX,
.flags = LPSS_CLK | LPSS_SAVE_CTX | LPSS_DEV_PROXY,
.prv_offset = 0x800,
.setup = byt_i2c_setup,
};
@ -368,6 +374,8 @@ static int acpi_lpss_create_device(struct acpi_device *adev,
adev->driver_data = pdata;
pdev = acpi_create_platform_device(adev);
if (!IS_ERR_OR_NULL(pdev)) {
if (!proxy_device && dev_desc->flags & LPSS_DEV_PROXY)
proxy_device = &pdev->dev;
return 1;
}
@ -499,14 +507,15 @@ static void acpi_lpss_set_ltr(struct device *dev, s32 val)
/**
* acpi_lpss_save_ctx() - Save the private registers of LPSS device
* @dev: LPSS device
* @pdata: pointer to the private data of the LPSS device
*
* Most LPSS devices have private registers which may loose their context when
* the device is powered down. acpi_lpss_save_ctx() saves those registers into
* prv_reg_ctx array.
*/
static void acpi_lpss_save_ctx(struct device *dev)
static void acpi_lpss_save_ctx(struct device *dev,
struct lpss_private_data *pdata)
{
struct lpss_private_data *pdata = acpi_driver_data(ACPI_COMPANION(dev));
unsigned int i;
for (i = 0; i < LPSS_PRV_REG_COUNT; i++) {
@ -521,12 +530,13 @@ static void acpi_lpss_save_ctx(struct device *dev)
/**
* acpi_lpss_restore_ctx() - Restore the private registers of LPSS device
* @dev: LPSS device
* @pdata: pointer to the private data of the LPSS device
*
* Restores the registers that were previously stored with acpi_lpss_save_ctx().
*/
static void acpi_lpss_restore_ctx(struct device *dev)
static void acpi_lpss_restore_ctx(struct device *dev,
struct lpss_private_data *pdata)
{
struct lpss_private_data *pdata = acpi_driver_data(ACPI_COMPANION(dev));
unsigned int i;
/*
@ -549,54 +559,82 @@ static void acpi_lpss_restore_ctx(struct device *dev)
#ifdef CONFIG_PM_SLEEP
static int acpi_lpss_suspend_late(struct device *dev)
{
int ret = pm_generic_suspend_late(dev);
struct lpss_private_data *pdata = acpi_driver_data(ACPI_COMPANION(dev));
int ret;
ret = pm_generic_suspend_late(dev);
if (ret)
return ret;
acpi_lpss_save_ctx(dev);
if (pdata->dev_desc->flags & LPSS_SAVE_CTX)
acpi_lpss_save_ctx(dev, pdata);
return acpi_dev_suspend_late(dev);
}
static int acpi_lpss_resume_early(struct device *dev)
{
int ret = acpi_dev_resume_early(dev);
struct lpss_private_data *pdata = acpi_driver_data(ACPI_COMPANION(dev));
int ret;
ret = acpi_dev_resume_early(dev);
if (ret)
return ret;
acpi_lpss_restore_ctx(dev);
if (pdata->dev_desc->flags & LPSS_SAVE_CTX)
acpi_lpss_restore_ctx(dev, pdata);
return pm_generic_resume_early(dev);
}
#endif /* CONFIG_PM_SLEEP */
#ifdef CONFIG_PM_RUNTIME
static int acpi_lpss_runtime_suspend(struct device *dev)
{
int ret = pm_generic_runtime_suspend(dev);
struct lpss_private_data *pdata = acpi_driver_data(ACPI_COMPANION(dev));
int ret;
ret = pm_generic_runtime_suspend(dev);
if (ret)
return ret;
acpi_lpss_save_ctx(dev);
return acpi_dev_runtime_suspend(dev);
if (pdata->dev_desc->flags & LPSS_SAVE_CTX)
acpi_lpss_save_ctx(dev, pdata);
ret = acpi_dev_runtime_suspend(dev);
if (ret)
return ret;
if (pdata->dev_desc->flags & LPSS_PROXY_REQ && proxy_device)
return pm_runtime_put_sync_suspend(proxy_device);
return 0;
}
static int acpi_lpss_runtime_resume(struct device *dev)
{
int ret = acpi_dev_runtime_resume(dev);
struct lpss_private_data *pdata = acpi_driver_data(ACPI_COMPANION(dev));
int ret;
if (pdata->dev_desc->flags & LPSS_PROXY_REQ && proxy_device) {
ret = pm_runtime_get_sync(proxy_device);
if (ret)
return ret;
}
ret = acpi_dev_runtime_resume(dev);
if (ret)
return ret;
acpi_lpss_restore_ctx(dev);
if (pdata->dev_desc->flags & LPSS_SAVE_CTX)
acpi_lpss_restore_ctx(dev, pdata);
return pm_generic_runtime_resume(dev);
}
#endif /* CONFIG_PM_RUNTIME */
#endif /* CONFIG_PM */
static struct dev_pm_domain acpi_lpss_pm_domain = {
.ops = {
#ifdef CONFIG_PM
#ifdef CONFIG_PM_SLEEP
.prepare = acpi_subsys_prepare,
.complete = acpi_subsys_complete,
@ -608,7 +646,6 @@ static struct dev_pm_domain acpi_lpss_pm_domain = {
.poweroff_late = acpi_lpss_suspend_late,
.restore_early = acpi_lpss_resume_early,
#endif
#ifdef CONFIG_PM_RUNTIME
.runtime_suspend = acpi_lpss_runtime_suspend,
.runtime_resume = acpi_lpss_runtime_resume,
#endif
@ -631,30 +668,27 @@ static int acpi_lpss_platform_notify(struct notifier_block *nb,
return 0;
pdata = acpi_driver_data(adev);
if (!pdata || !pdata->mmio_base)
if (!pdata)
return 0;
if (pdata->mmio_size < pdata->dev_desc->prv_offset + LPSS_LTR_SIZE) {
if (pdata->mmio_base &&
pdata->mmio_size < pdata->dev_desc->prv_offset + LPSS_LTR_SIZE) {
dev_err(&pdev->dev, "MMIO size insufficient to access LTR\n");
return 0;
}
switch (action) {
case BUS_NOTIFY_BOUND_DRIVER:
if (pdata->dev_desc->flags & LPSS_SAVE_CTX)
pdev->dev.pm_domain = &acpi_lpss_pm_domain;
break;
case BUS_NOTIFY_UNBOUND_DRIVER:
if (pdata->dev_desc->flags & LPSS_SAVE_CTX)
pdev->dev.pm_domain = NULL;
break;
case BUS_NOTIFY_ADD_DEVICE:
pdev->dev.pm_domain = &acpi_lpss_pm_domain;
if (pdata->dev_desc->flags & LPSS_LTR)
return sysfs_create_group(&pdev->dev.kobj,
&lpss_attr_group);
break;
case BUS_NOTIFY_DEL_DEVICE:
if (pdata->dev_desc->flags & LPSS_LTR)
sysfs_remove_group(&pdev->dev.kobj, &lpss_attr_group);
pdev->dev.pm_domain = NULL;
break;
default:
break;
}


@ -305,6 +305,7 @@ ACPI_INIT_GLOBAL(u8, acpi_gbl_db_output_flags, ACPI_DB_CONSOLE_OUTPUT);
ACPI_INIT_GLOBAL(u8, acpi_gbl_no_resource_disassembly, FALSE);
ACPI_INIT_GLOBAL(u8, acpi_gbl_ignore_noop_operator, FALSE);
ACPI_INIT_GLOBAL(u8, acpi_gbl_cstyle_disassembly, TRUE);
ACPI_GLOBAL(u8, acpi_gbl_db_opt_disasm);
ACPI_GLOBAL(u8, acpi_gbl_db_opt_verbose);


@ -454,6 +454,7 @@ struct acpi_gpe_register_info {
u16 base_gpe_number; /* Base GPE number for this register */
u8 enable_for_wake; /* GPEs to keep enabled when sleeping */
u8 enable_for_run; /* GPEs to keep enabled when running */
u8 enable_mask; /* Current mask of enabled GPEs */
};
/*
@ -722,6 +723,7 @@ union acpi_parse_value {
ACPI_DISASM_ONLY_MEMBERS (\
u8 disasm_flags; /* Used during AML disassembly */\
u8 disasm_opcode; /* Subtype used for disassembly */\
char *operator_symbol;/* Used for C-style operator name strings */\
char aml_op_name[16]) /* Op name (debug only) */
/* Flags for disasm_flags field above */
@ -827,6 +829,8 @@ struct acpi_parse_state {
#define ACPI_PARSEOP_EMPTY_TERMLIST 0x04
#define ACPI_PARSEOP_PREDEF_CHECKED 0x08
#define ACPI_PARSEOP_SPECIAL 0x10
#define ACPI_PARSEOP_COMPOUND 0x20
#define ACPI_PARSEOP_ASSIGNMENT 0x40
/*****************************************************************************
*


@ -134,7 +134,7 @@ acpi_status acpi_ev_enable_gpe(struct acpi_gpe_event_info *gpe_event_info)
/* Enable the requested GPE */
status = acpi_hw_low_set_gpe(gpe_event_info, ACPI_GPE_ENABLE);
status = acpi_hw_low_set_gpe(gpe_event_info, ACPI_GPE_ENABLE_SAVE);
return_ACPI_STATUS(status);
}
@ -213,7 +213,7 @@ acpi_ev_remove_gpe_reference(struct acpi_gpe_event_info *gpe_event_info)
if (ACPI_SUCCESS(status)) {
status =
acpi_hw_low_set_gpe(gpe_event_info,
ACPI_GPE_DISABLE);
ACPI_GPE_DISABLE_SAVE);
}
if (ACPI_FAILURE(status)) {
@ -616,8 +616,11 @@ static void ACPI_SYSTEM_XFACE acpi_ev_asynch_execute_gpe_method(void *context)
static void ACPI_SYSTEM_XFACE acpi_ev_asynch_enable_gpe(void *context)
{
struct acpi_gpe_event_info *gpe_event_info = context;
acpi_cpu_flags flags;
flags = acpi_os_acquire_lock(acpi_gbl_gpe_lock);
(void)acpi_ev_finish_gpe(gpe_event_info);
acpi_os_release_lock(acpi_gbl_gpe_lock, flags);
ACPI_FREE(gpe_event_info);
return;
@ -655,7 +658,7 @@ acpi_status acpi_ev_finish_gpe(struct acpi_gpe_event_info * gpe_event_info)
/*
* Enable this GPE, conditionally. This means that the GPE will
* only be physically enabled if the enable_for_run bit is set
* only be physically enabled if the enable_mask bit is set
* in the event_info.
*/
(void)acpi_hw_low_set_gpe(gpe_event_info, ACPI_GPE_CONDITIONAL_ENABLE);


@ -115,12 +115,12 @@ acpi_hw_low_set_gpe(struct acpi_gpe_event_info *gpe_event_info, u32 action)
/* Set or clear just the bit that corresponds to this GPE */
register_bit = acpi_hw_get_gpe_register_bit(gpe_event_info);
switch (action) {
switch (action & ~ACPI_GPE_SAVE_MASK) {
case ACPI_GPE_CONDITIONAL_ENABLE:
/* Only enable if the enable_for_run bit is set */
/* Only enable if the corresponding enable_mask bit is set */
if (!(register_bit & gpe_register_info->enable_for_run)) {
if (!(register_bit & gpe_register_info->enable_mask)) {
return (AE_BAD_PARAMETER);
}
@ -145,6 +145,9 @@ acpi_hw_low_set_gpe(struct acpi_gpe_event_info *gpe_event_info, u32 action)
/* Write the updated enable mask */
status = acpi_hw_write(enable_mask, &gpe_register_info->enable_address);
if (ACPI_SUCCESS(status) && (action & ACPI_GPE_SAVE_MASK)) {
gpe_register_info->enable_mask = enable_mask;
}
return (status);
}
@ -260,6 +263,32 @@ acpi_hw_get_gpe_status(struct acpi_gpe_event_info * gpe_event_info,
return (AE_OK);
}
/******************************************************************************
*
* FUNCTION: acpi_hw_gpe_enable_write
*
* PARAMETERS: enable_mask - Bit mask to write to the GPE register
* gpe_register_info - Gpe Register info
*
* RETURN: Status
*
* DESCRIPTION: Write the enable mask byte to the given GPE register.
*
******************************************************************************/
static acpi_status
acpi_hw_gpe_enable_write(u8 enable_mask,
struct acpi_gpe_register_info *gpe_register_info)
{
acpi_status status;
status = acpi_hw_write(enable_mask, &gpe_register_info->enable_address);
if (ACPI_SUCCESS(status)) {
gpe_register_info->enable_mask = enable_mask;
}
return (status);
}
/******************************************************************************
*
* FUNCTION: acpi_hw_disable_gpe_block
@ -287,8 +316,8 @@ acpi_hw_disable_gpe_block(struct acpi_gpe_xrupt_info *gpe_xrupt_info,
/* Disable all GPEs in this register */
status =
acpi_hw_write(0x00,
&gpe_block->register_info[i].enable_address);
acpi_hw_gpe_enable_write(0x00,
&gpe_block->register_info[i]);
if (ACPI_FAILURE(status)) {
return (status);
}
@ -355,21 +384,23 @@ acpi_hw_enable_runtime_gpe_block(struct acpi_gpe_xrupt_info *gpe_xrupt_info,
{
u32 i;
acpi_status status;
struct acpi_gpe_register_info *gpe_register_info;
/* NOTE: assumes that all GPEs are currently disabled */
/* Examine each GPE Register within the block */
for (i = 0; i < gpe_block->register_count; i++) {
if (!gpe_block->register_info[i].enable_for_run) {
gpe_register_info = &gpe_block->register_info[i];
if (!gpe_register_info->enable_for_run) {
continue;
}
/* Enable all "runtime" GPEs in this register */
status =
acpi_hw_write(gpe_block->register_info[i].enable_for_run,
&gpe_block->register_info[i].enable_address);
acpi_hw_gpe_enable_write(gpe_register_info->enable_for_run,
gpe_register_info);
if (ACPI_FAILURE(status)) {
return (status);
}
@ -399,10 +430,12 @@ acpi_hw_enable_wakeup_gpe_block(struct acpi_gpe_xrupt_info *gpe_xrupt_info,
{
u32 i;
acpi_status status;
struct acpi_gpe_register_info *gpe_register_info;
/* Examine each GPE Register within the block */
for (i = 0; i < gpe_block->register_count; i++) {
gpe_register_info = &gpe_block->register_info[i];
/*
* Enable all "wake" GPEs in this register and disable the
@ -410,8 +443,8 @@ acpi_hw_enable_wakeup_gpe_block(struct acpi_gpe_xrupt_info *gpe_xrupt_info,
*/
status =
acpi_hw_write(gpe_block->register_info[i].enable_for_wake,
&gpe_block->register_info[i].enable_address);
acpi_hw_gpe_enable_write(gpe_register_info->enable_for_wake,
gpe_register_info);
if (ACPI_FAILURE(status)) {
return (status);
}


@ -263,7 +263,7 @@ const char *acpi_gbl_bpb_decode[] = {
/* UART serial bus stop bits */
const char *acpi_gbl_sb_decode[] = {
"StopBitsNone",
"StopBitsZero",
"StopBitsOne",
"StopBitsOnePlusHalf",
"StopBitsTwo"


@ -531,7 +531,9 @@ acpi_decode_pld_buffer(u8 *in_buffer,
ACPI_MOVE_32_TO_32(&dword, &buffer[0]);
pld_info->revision = ACPI_PLD_GET_REVISION(&dword);
pld_info->ignore_color = ACPI_PLD_GET_IGNORE_COLOR(&dword);
pld_info->color = ACPI_PLD_GET_COLOR(&dword);
pld_info->red = ACPI_PLD_GET_RED(&dword);
pld_info->green = ACPI_PLD_GET_GREEN(&dword);
pld_info->blue = ACPI_PLD_GET_BLUE(&dword);
/* Second 32-bit DWord */


@ -53,6 +53,9 @@
#define _COMPONENT ACPI_UTILITIES
ACPI_MODULE_NAME("utxfinit")
/* For acpi_exec only */
void ae_do_object_overrides(void);
/*******************************************************************************
*
* FUNCTION: acpi_initialize_subsystem
@ -65,6 +68,7 @@ ACPI_MODULE_NAME("utxfinit")
* called, so any early initialization belongs here.
*
******************************************************************************/
acpi_status __init acpi_initialize_subsystem(void)
{
acpi_status status;
@ -275,6 +279,13 @@ acpi_status __init acpi_initialize_objects(u32 flags)
return_ACPI_STATUS(status);
}
}
#ifdef ACPI_EXEC_APP
/*
* This call implements the "initialization file" option for acpi_exec.
* This is the precise point that we want to perform the overrides.
*/
ae_do_object_overrides();
#endif
/*
* Execute any module-level code that was detected during the table load


@ -1180,6 +1180,10 @@ static int acpi_battery_add(struct acpi_device *device)
if (!device)
return -EINVAL;
if (device->dep_unmet)
return -EPROBE_DEFER;
battery = kzalloc(sizeof(struct acpi_battery), GFP_KERNEL);
if (!battery)
return -ENOMEM;


@ -201,7 +201,7 @@ int acpi_device_set_power(struct acpi_device *device, int state)
* Transition Power
* ----------------
* In accordance with the ACPI specification first apply power (via
* power resources) and then evalute _PSx.
* power resources) and then evaluate _PSx.
*/
if (device->power.flags.power_resources) {
result = acpi_power_transition(device, state);
@ -692,7 +692,6 @@ static int acpi_device_wakeup(struct acpi_device *adev, u32 target_state,
return 0;
}
#ifdef CONFIG_PM_RUNTIME
/**
* acpi_pm_device_run_wake - Enable/disable remote wakeup for given device.
* @dev: Device to enable/disable the platform to wake up.
@ -714,7 +713,6 @@ int acpi_pm_device_run_wake(struct device *phys_dev, bool enable)
return acpi_device_wakeup(adev, ACPI_STATE_S0, enable);
}
EXPORT_SYMBOL(acpi_pm_device_run_wake);
#endif /* CONFIG_PM_RUNTIME */
#ifdef CONFIG_PM_SLEEP
/**
@ -773,7 +771,6 @@ static int acpi_dev_pm_full_power(struct acpi_device *adev)
acpi_device_set_power(adev, ACPI_STATE_D0) : 0;
}
#ifdef CONFIG_PM_RUNTIME
/**
* acpi_dev_runtime_suspend - Put device into a low-power state using ACPI.
* @dev: Device to put into a low-power state.
@ -855,7 +852,6 @@ int acpi_subsys_runtime_resume(struct device *dev)
return ret ? ret : pm_generic_runtime_resume(dev);
}
EXPORT_SYMBOL_GPL(acpi_subsys_runtime_resume);
#endif /* CONFIG_PM_RUNTIME */
#ifdef CONFIG_PM_SLEEP
/**
@ -1023,10 +1019,9 @@ EXPORT_SYMBOL_GPL(acpi_subsys_freeze);
static struct dev_pm_domain acpi_general_pm_domain = {
.ops = {
#ifdef CONFIG_PM_RUNTIME
#ifdef CONFIG_PM
.runtime_suspend = acpi_subsys_runtime_suspend,
.runtime_resume = acpi_subsys_runtime_resume,
#endif
#ifdef CONFIG_PM_SLEEP
.prepare = acpi_subsys_prepare,
.complete = acpi_subsys_complete,
@ -1037,6 +1032,7 @@ static struct dev_pm_domain acpi_general_pm_domain = {
.poweroff = acpi_subsys_suspend,
.poweroff_late = acpi_subsys_suspend_late,
.restore_early = acpi_subsys_resume_early,
#endif
#endif
},
};


@ -173,4 +173,10 @@ static inline void suspend_nvs_restore(void) {}
bool acpi_osi_is_win8(void);
#endif
/*--------------------------------------------------------------------------
Device properties
-------------------------------------------------------------------------- */
void acpi_init_properties(struct acpi_device *adev);
void acpi_free_properties(struct acpi_device *adev);
#endif /* _ACPI_INTERNAL_H_ */


@ -436,7 +436,7 @@ static void acpi_os_drop_map_ref(struct acpi_ioremap *map)
static void acpi_os_map_cleanup(struct acpi_ioremap *map)
{
if (!map->refcount) {
synchronize_rcu();
synchronize_rcu_expedited();
acpi_unmap(map->phys, map->virt);
kfree(map);
}
@ -1188,6 +1188,12 @@ EXPORT_SYMBOL(acpi_os_execute);
void acpi_os_wait_events_complete(void)
{
/*
* Make sure the GPE handler or the fixed event handler is not used
* on another CPU after removal.
*/
if (acpi_irq_handler)
synchronize_hardirq(acpi_gbl_FADT.sci_interrupt);
flush_workqueue(kacpid_wq);
flush_workqueue(kacpi_notify_wq);
}


@ -484,7 +484,7 @@ void acpi_pci_irq_disable(struct pci_dev *dev)
/* Keep IOAPIC pin configuration when suspending */
if (dev->dev.power.is_prepared)
return;
#ifdef CONFIG_PM_RUNTIME
#ifdef CONFIG_PM
if (dev->dev.power.runtime_status == RPM_SUSPENDING)
return;
#endif


@ -0,0 +1,354 @@
/*
* intel_pmic.c - Intel PMIC operation region driver
*
* Copyright (C) 2014 Intel Corporation. All rights reserved.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License version
* 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <linux/module.h>
#include <linux/acpi.h>
#include <linux/regmap.h>
#include "intel_pmic.h"
#define PMIC_POWER_OPREGION_ID 0x8d
#define PMIC_THERMAL_OPREGION_ID 0x8c
struct acpi_lpat {
int temp;
int raw;
};
struct intel_pmic_opregion {
struct mutex lock;
struct acpi_lpat *lpat;
int lpat_count;
struct regmap *regmap;
struct intel_pmic_opregion_data *data;
};
static int pmic_get_reg_bit(int address, struct pmic_table *table,
int count, int *reg, int *bit)
{
int i;
for (i = 0; i < count; i++) {
if (table[i].address == address) {
*reg = table[i].reg;
if (bit)
*bit = table[i].bit;
return 0;
}
}
return -ENOENT;
}
/**
* raw_to_temp(): Return temperature from raw value through LPAT table
*
* @lpat: the temperature_raw mapping table
* @count: the count of the above mapping table
* @raw: the raw value, used as a key to get the temperature from the
* above mapping table
*
* A positive value will be returned on success, a negative errno will
* be returned in error cases.
*/
static int raw_to_temp(struct acpi_lpat *lpat, int count, int raw)
{
int i, delta_temp, delta_raw, temp;
for (i = 0; i < count - 1; i++) {
if ((raw >= lpat[i].raw && raw <= lpat[i+1].raw) ||
(raw <= lpat[i].raw && raw >= lpat[i+1].raw))
break;
}
if (i == count - 1)
return -ENOENT;
delta_temp = lpat[i+1].temp - lpat[i].temp;
delta_raw = lpat[i+1].raw - lpat[i].raw;
temp = lpat[i].temp + (raw - lpat[i].raw) * delta_temp / delta_raw;
return temp;
}
/**
* temp_to_raw(): Return raw value from temperature through LPAT table
*
* @lpat: the temperature_raw mapping table
* @count: the count of the above mapping table
* @temp: the temperature, used as a key to get the raw value from the
* above mapping table
*
* A positive value will be returned on success, a negative errno will
* be returned in error cases.
*/
static int temp_to_raw(struct acpi_lpat *lpat, int count, int temp)
{
int i, delta_temp, delta_raw, raw;
for (i = 0; i < count - 1; i++) {
if (temp >= lpat[i].temp && temp <= lpat[i+1].temp)
break;
}
if (i == count - 1)
return -ENOENT;
delta_temp = lpat[i+1].temp - lpat[i].temp;
delta_raw = lpat[i+1].raw - lpat[i].raw;
raw = lpat[i].raw + (temp - lpat[i].temp) * delta_raw / delta_temp;
return raw;
}
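/*
 * Hedged worked example (the LPAT values below are made up, not taken
 * from any real PMIC): with { temp, raw } pairs
 *   { 2000, 0x350 }, { 3000, 0x2d0 }, { 4000, 0x250 }
 * a reading of raw = 0x310 falls in the first segment, so raw_to_temp()
 * interpolates
 *   temp = 2000 + (0x310 - 0x350) * (3000 - 2000) / (0x2d0 - 0x350)
 *        = 2000 + (-64) * 1000 / (-128) = 2500
 * in whatever unit the firmware uses for the temp column; temp_to_raw()
 * is simply the inverse mapping.
 */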
static void pmic_thermal_lpat(struct intel_pmic_opregion *opregion,
acpi_handle handle, struct device *dev)
{
struct acpi_buffer buffer = { ACPI_ALLOCATE_BUFFER, NULL };
union acpi_object *obj_p, *obj_e;
int *lpat, i;
acpi_status status;
status = acpi_evaluate_object(handle, "LPAT", NULL, &buffer);
if (ACPI_FAILURE(status))
return;
obj_p = (union acpi_object *)buffer.pointer;
if (!obj_p || (obj_p->type != ACPI_TYPE_PACKAGE) ||
(obj_p->package.count % 2) || (obj_p->package.count < 4))
goto out;
lpat = devm_kmalloc(dev, sizeof(int) * obj_p->package.count,
GFP_KERNEL);
if (!lpat)
goto out;
for (i = 0; i < obj_p->package.count; i++) {
obj_e = &obj_p->package.elements[i];
if (obj_e->type != ACPI_TYPE_INTEGER) {
devm_kfree(dev, lpat);
goto out;
}
lpat[i] = (s64)obj_e->integer.value;
}
opregion->lpat = (struct acpi_lpat *)lpat;
opregion->lpat_count = obj_p->package.count / 2;
out:
kfree(buffer.pointer);
}
static acpi_status intel_pmic_power_handler(u32 function,
acpi_physical_address address, u32 bits, u64 *value64,
void *handler_context, void *region_context)
{
struct intel_pmic_opregion *opregion = region_context;
struct regmap *regmap = opregion->regmap;
struct intel_pmic_opregion_data *d = opregion->data;
int reg, bit, result;
if (bits != 32 || !value64)
return AE_BAD_PARAMETER;
if (function == ACPI_WRITE && !(*value64 == 0 || *value64 == 1))
return AE_BAD_PARAMETER;
result = pmic_get_reg_bit(address, d->power_table,
d->power_table_count, &reg, &bit);
if (result == -ENOENT)
return AE_BAD_PARAMETER;
mutex_lock(&opregion->lock);
result = function == ACPI_READ ?
d->get_power(regmap, reg, bit, value64) :
d->update_power(regmap, reg, bit, *value64 == 1);
mutex_unlock(&opregion->lock);
return result ? AE_ERROR : AE_OK;
}
static int pmic_read_temp(struct intel_pmic_opregion *opregion,
int reg, u64 *value)
{
int raw_temp, temp;
if (!opregion->data->get_raw_temp)
return -ENXIO;
raw_temp = opregion->data->get_raw_temp(opregion->regmap, reg);
if (raw_temp < 0)
return raw_temp;
if (!opregion->lpat) {
*value = raw_temp;
return 0;
}
temp = raw_to_temp(opregion->lpat, opregion->lpat_count, raw_temp);
if (temp < 0)
return temp;
*value = temp;
return 0;
}
static int pmic_thermal_temp(struct intel_pmic_opregion *opregion, int reg,
u32 function, u64 *value)
{
return function == ACPI_READ ?
pmic_read_temp(opregion, reg, value) : -EINVAL;
}
static int pmic_thermal_aux(struct intel_pmic_opregion *opregion, int reg,
u32 function, u64 *value)
{
int raw_temp;
if (function == ACPI_READ)
return pmic_read_temp(opregion, reg, value);
if (!opregion->data->update_aux)
return -ENXIO;
if (opregion->lpat) {
raw_temp = temp_to_raw(opregion->lpat, opregion->lpat_count,
*value);
if (raw_temp < 0)
return raw_temp;
} else {
raw_temp = *value;
}
return opregion->data->update_aux(opregion->regmap, reg, raw_temp);
}
static int pmic_thermal_pen(struct intel_pmic_opregion *opregion, int reg,
u32 function, u64 *value)
{
struct intel_pmic_opregion_data *d = opregion->data;
struct regmap *regmap = opregion->regmap;
if (!d->get_policy || !d->update_policy)
return -ENXIO;
if (function == ACPI_READ)
return d->get_policy(regmap, reg, value);
if (*value != 0 && *value != 1)
return -EINVAL;
return d->update_policy(regmap, reg, *value);
}
static bool pmic_thermal_is_temp(int address)
{
return (address <= 0x3c) && !(address % 12);
}
static bool pmic_thermal_is_aux(int address)
{
return (address >= 4 && address <= 0x40 && !((address - 4) % 12)) ||
(address >= 8 && address <= 0x44 && !((address - 8) % 12));
}
static bool pmic_thermal_is_pen(int address)
{
return address >= 0x48 && address <= 0x5c;
}
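/*
 * Sketch of the operation region layout assumed by the three helpers
 * above: each thermal sensor occupies a 12-byte slot
 *   0x00 TMP0, 0x04 AX00, 0x08 AX01,
 *   0x0c TMP1, 0x10 AX10, 0x14 AX11, ...
 * while the policy-enable (PEN) fields live in a separate block starting
 * at 0x48.  For instance, address 0x18 is a temperature field
 * (0x18 % 12 == 0) and 0x1c is an AUX0 field ((0x1c - 4) % 12 == 0).
 */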
static acpi_status intel_pmic_thermal_handler(u32 function,
acpi_physical_address address, u32 bits, u64 *value64,
void *handler_context, void *region_context)
{
struct intel_pmic_opregion *opregion = region_context;
struct intel_pmic_opregion_data *d = opregion->data;
int reg, result;
if (bits != 32 || !value64)
return AE_BAD_PARAMETER;
result = pmic_get_reg_bit(address, d->thermal_table,
d->thermal_table_count, &reg, NULL);
if (result == -ENOENT)
return AE_BAD_PARAMETER;
mutex_lock(&opregion->lock);
if (pmic_thermal_is_temp(address))
result = pmic_thermal_temp(opregion, reg, function, value64);
else if (pmic_thermal_is_aux(address))
result = pmic_thermal_aux(opregion, reg, function, value64);
else if (pmic_thermal_is_pen(address))
result = pmic_thermal_pen(opregion, reg, function, value64);
else
result = -EINVAL;
mutex_unlock(&opregion->lock);
if (result < 0) {
if (result == -EINVAL)
return AE_BAD_PARAMETER;
else
return AE_ERROR;
}
return AE_OK;
}
int intel_pmic_install_opregion_handler(struct device *dev, acpi_handle handle,
struct regmap *regmap,
struct intel_pmic_opregion_data *d)
{
acpi_status status;
struct intel_pmic_opregion *opregion;
if (!dev || !regmap || !d)
return -EINVAL;
if (!handle)
return -ENODEV;
opregion = devm_kzalloc(dev, sizeof(*opregion), GFP_KERNEL);
if (!opregion)
return -ENOMEM;
mutex_init(&opregion->lock);
opregion->regmap = regmap;
pmic_thermal_lpat(opregion, handle, dev);
status = acpi_install_address_space_handler(handle,
PMIC_POWER_OPREGION_ID,
intel_pmic_power_handler,
NULL, opregion);
if (ACPI_FAILURE(status))
return -ENODEV;
status = acpi_install_address_space_handler(handle,
PMIC_THERMAL_OPREGION_ID,
intel_pmic_thermal_handler,
NULL, opregion);
if (ACPI_FAILURE(status)) {
acpi_remove_address_space_handler(handle, PMIC_POWER_OPREGION_ID,
intel_pmic_power_handler);
return -ENODEV;
}
opregion->data = d;
return 0;
}
EXPORT_SYMBOL_GPL(intel_pmic_install_opregion_handler);
MODULE_LICENSE("GPL");


@ -0,0 +1,25 @@
#ifndef __INTEL_PMIC_H
#define __INTEL_PMIC_H
struct pmic_table {
int address; /* operation region address */
int reg; /* corresponding thermal register */
int bit; /* control bit for power */
};
struct intel_pmic_opregion_data {
int (*get_power)(struct regmap *r, int reg, int bit, u64 *value);
int (*update_power)(struct regmap *r, int reg, int bit, bool on);
int (*get_raw_temp)(struct regmap *r, int reg);
int (*update_aux)(struct regmap *r, int reg, int raw_temp);
int (*get_policy)(struct regmap *r, int reg, u64 *value);
int (*update_policy)(struct regmap *r, int reg, int enable);
struct pmic_table *power_table;
int power_table_count;
struct pmic_table *thermal_table;
int thermal_table_count;
};
int intel_pmic_install_opregion_handler(struct device *dev, acpi_handle handle, struct regmap *regmap, struct intel_pmic_opregion_data *d);
#endif


@ -0,0 +1,211 @@
/*
* intel_pmic_crc.c - Intel CrystalCove PMIC operation region driver
*
* Copyright (C) 2014 Intel Corporation. All rights reserved.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License version
* 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <linux/module.h>
#include <linux/acpi.h>
#include <linux/mfd/intel_soc_pmic.h>
#include <linux/regmap.h>
#include <linux/platform_device.h>
#include "intel_pmic.h"
#define PWR_SOURCE_SELECT BIT(1)
#define PMIC_A0LOCK_REG 0xc5
static struct pmic_table power_table[] = {
{
.address = 0x24,
.reg = 0x66,
.bit = 0x00,
},
{
.address = 0x48,
.reg = 0x5d,
.bit = 0x00,
},
};
static struct pmic_table thermal_table[] = {
{
.address = 0x00,
.reg = 0x75
},
{
.address = 0x04,
.reg = 0x95
},
{
.address = 0x08,
.reg = 0x97
},
{
.address = 0x0c,
.reg = 0x77
},
{
.address = 0x10,
.reg = 0x9a
},
{
.address = 0x14,
.reg = 0x9c
},
{
.address = 0x18,
.reg = 0x79
},
{
.address = 0x1c,
.reg = 0x9f
},
{
.address = 0x20,
.reg = 0xa1
},
{
.address = 0x48,
.reg = 0x94
},
{
.address = 0x4c,
.reg = 0x99
},
{
.address = 0x50,
.reg = 0x9e
},
};
static int intel_crc_pmic_get_power(struct regmap *regmap, int reg,
int bit, u64 *value)
{
int data;
if (regmap_read(regmap, reg, &data))
return -EIO;
*value = (data & PWR_SOURCE_SELECT) && (data & BIT(bit)) ? 1 : 0;
return 0;
}
static int intel_crc_pmic_update_power(struct regmap *regmap, int reg,
int bit, bool on)
{
int data;
if (regmap_read(regmap, reg, &data))
return -EIO;
if (on) {
data |= PWR_SOURCE_SELECT | BIT(bit);
} else {
data &= ~BIT(bit);
data |= PWR_SOURCE_SELECT;
}
if (regmap_write(regmap, reg, data))
return -EIO;
return 0;
}
static int intel_crc_pmic_get_raw_temp(struct regmap *regmap, int reg)
{
int temp_l, temp_h;
/*
* Raw temperature value is 10bits: 8bits in reg
* and 2bits in reg-1: bit0,1
*/
if (regmap_read(regmap, reg, &temp_l) ||
regmap_read(regmap, reg - 1, &temp_h))
return -EIO;
return temp_l | (temp_h & 0x3) << 8;
}
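/*
 * Illustrative only (hypothetical register contents): temp_l = 0x9a and
 * temp_h = 0x02 combine to 0x9a | ((0x02 & 0x3) << 8) = 0x29a, i.e. a
 * raw reading of 666 that the opregion core then feeds through the
 * LPAT table.
 */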
static int intel_crc_pmic_update_aux(struct regmap *regmap, int reg, int raw)
{
return regmap_write(regmap, reg, raw) ||
regmap_update_bits(regmap, reg - 1, 0x3, raw >> 8) ? -EIO : 0;
}
static int intel_crc_pmic_get_policy(struct regmap *regmap, int reg, u64 *value)
{
int pen;
if (regmap_read(regmap, reg, &pen))
return -EIO;
*value = pen >> 7;
return 0;
}
static int intel_crc_pmic_update_policy(struct regmap *regmap,
int reg, int enable)
{
int alert0;
/* Update to policy enable bit requires unlocking a0lock */
if (regmap_read(regmap, PMIC_A0LOCK_REG, &alert0))
return -EIO;
if (regmap_update_bits(regmap, PMIC_A0LOCK_REG, 0x01, 0))
return -EIO;
if (regmap_update_bits(regmap, reg, 0x80, enable << 7))
return -EIO;
/* restore alert0 */
if (regmap_write(regmap, PMIC_A0LOCK_REG, alert0))
return -EIO;
return 0;
}
static struct intel_pmic_opregion_data intel_crc_pmic_opregion_data = {
.get_power = intel_crc_pmic_get_power,
.update_power = intel_crc_pmic_update_power,
.get_raw_temp = intel_crc_pmic_get_raw_temp,
.update_aux = intel_crc_pmic_update_aux,
.get_policy = intel_crc_pmic_get_policy,
.update_policy = intel_crc_pmic_update_policy,
.power_table = power_table,
.power_table_count = ARRAY_SIZE(power_table),
.thermal_table = thermal_table,
.thermal_table_count = ARRAY_SIZE(thermal_table),
};
static int intel_crc_pmic_opregion_probe(struct platform_device *pdev)
{
struct intel_soc_pmic *pmic = dev_get_drvdata(pdev->dev.parent);
return intel_pmic_install_opregion_handler(&pdev->dev,
ACPI_HANDLE(pdev->dev.parent), pmic->regmap,
&intel_crc_pmic_opregion_data);
}
static struct platform_driver intel_crc_pmic_opregion_driver = {
.probe = intel_crc_pmic_opregion_probe,
.driver = {
.name = "crystal_cove_pmic",
},
};
static int __init intel_crc_pmic_opregion_driver_init(void)
{
return platform_driver_register(&intel_crc_pmic_opregion_driver);
}
module_init(intel_crc_pmic_opregion_driver_init);
MODULE_DESCRIPTION("CrystalCove ACPI opration region driver");
MODULE_LICENSE("GPL");


@ -0,0 +1,268 @@
/*
* intel_pmic_xpower.c - XPower AXP288 PMIC operation region driver
*
* Copyright (C) 2014 Intel Corporation. All rights reserved.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License version
* 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <linux/module.h>
#include <linux/acpi.h>
#include <linux/mfd/axp20x.h>
#include <linux/regmap.h>
#include <linux/platform_device.h>
#include <linux/iio/consumer.h>
#include "intel_pmic.h"
#define XPOWER_GPADC_LOW 0x5b
static struct pmic_table power_table[] = {
{
.address = 0x00,
.reg = 0x13,
.bit = 0x05,
},
{
.address = 0x04,
.reg = 0x13,
.bit = 0x06,
},
{
.address = 0x08,
.reg = 0x13,
.bit = 0x07,
},
{
.address = 0x0c,
.reg = 0x12,
.bit = 0x03,
},
{
.address = 0x10,
.reg = 0x12,
.bit = 0x04,
},
{
.address = 0x14,
.reg = 0x12,
.bit = 0x05,
},
{
.address = 0x18,
.reg = 0x12,
.bit = 0x06,
},
{
.address = 0x1c,
.reg = 0x12,
.bit = 0x00,
},
{
.address = 0x20,
.reg = 0x12,
.bit = 0x01,
},
{
.address = 0x24,
.reg = 0x12,
.bit = 0x02,
},
{
.address = 0x28,
.reg = 0x13,
.bit = 0x02,
},
{
.address = 0x2c,
.reg = 0x13,
.bit = 0x03,
},
{
.address = 0x30,
.reg = 0x13,
.bit = 0x04,
},
{
.address = 0x38,
.reg = 0x10,
.bit = 0x03,
},
{
.address = 0x3c,
.reg = 0x10,
.bit = 0x06,
},
{
.address = 0x40,
.reg = 0x10,
.bit = 0x05,
},
{
.address = 0x44,
.reg = 0x10,
.bit = 0x04,
},
{
.address = 0x48,
.reg = 0x10,
.bit = 0x01,
},
{
.address = 0x4c,
.reg = 0x10,
.bit = 0x00
},
};
/* TMP0 - TMP5 are the same, all from GPADC */
static struct pmic_table thermal_table[] = {
{
.address = 0x00,
.reg = XPOWER_GPADC_LOW
},
{
.address = 0x0c,
.reg = XPOWER_GPADC_LOW
},
{
.address = 0x18,
.reg = XPOWER_GPADC_LOW
},
{
.address = 0x24,
.reg = XPOWER_GPADC_LOW
},
{
.address = 0x30,
.reg = XPOWER_GPADC_LOW
},
{
.address = 0x3c,
.reg = XPOWER_GPADC_LOW
},
};
static int intel_xpower_pmic_get_power(struct regmap *regmap, int reg,
int bit, u64 *value)
{
int data;
if (regmap_read(regmap, reg, &data))
return -EIO;
*value = (data & BIT(bit)) ? 1 : 0;
return 0;
}
static int intel_xpower_pmic_update_power(struct regmap *regmap, int reg,
int bit, bool on)
{
int data;
if (regmap_read(regmap, reg, &data))
return -EIO;
if (on)
data |= BIT(bit);
else
data &= ~BIT(bit);
if (regmap_write(regmap, reg, data))
return -EIO;
return 0;
}
/**
* intel_xpower_pmic_get_raw_temp(): Get raw temperature reading from the PMIC
*
* @regmap: regmap of the PMIC device
* @reg: register to get the reading
*
* We could get the sensor value by manipulating the HW regs here, but since
* the axp288 IIO driver may also access the same regs at the same time, the
* APIs provided by IIO subsystem are used here instead to avoid problems. As
* a result, the two passed in params are of no actual use.
*
* Return a positive value on success, errno on failure.
*/
static int intel_xpower_pmic_get_raw_temp(struct regmap *regmap, int reg)
{
struct iio_channel *gpadc_chan;
int ret, val;
gpadc_chan = iio_channel_get(NULL, "axp288-system-temp");
if (IS_ERR_OR_NULL(gpadc_chan))
return -EACCES;
ret = iio_read_channel_raw(gpadc_chan, &val);
if (ret < 0)
val = ret;
iio_channel_release(gpadc_chan);
return val;
}
static struct intel_pmic_opregion_data intel_xpower_pmic_opregion_data = {
.get_power = intel_xpower_pmic_get_power,
.update_power = intel_xpower_pmic_update_power,
.get_raw_temp = intel_xpower_pmic_get_raw_temp,
.power_table = power_table,
.power_table_count = ARRAY_SIZE(power_table),
.thermal_table = thermal_table,
.thermal_table_count = ARRAY_SIZE(thermal_table),
};
static acpi_status intel_xpower_pmic_gpio_handler(u32 function,
acpi_physical_address address, u32 bit_width, u64 *value,
void *handler_context, void *region_context)
{
return AE_OK;
}
static int intel_xpower_pmic_opregion_probe(struct platform_device *pdev)
{
struct device *parent = pdev->dev.parent;
struct axp20x_dev *axp20x = dev_get_drvdata(parent);
acpi_status status;
int result;
status = acpi_install_address_space_handler(ACPI_HANDLE(parent),
ACPI_ADR_SPACE_GPIO, intel_xpower_pmic_gpio_handler,
NULL, NULL);
if (ACPI_FAILURE(status))
return -ENODEV;
result = intel_pmic_install_opregion_handler(&pdev->dev,
ACPI_HANDLE(parent), axp20x->regmap,
&intel_xpower_pmic_opregion_data);
if (result)
acpi_remove_address_space_handler(ACPI_HANDLE(parent),
ACPI_ADR_SPACE_GPIO,
intel_xpower_pmic_gpio_handler);
return result;
}
static struct platform_driver intel_xpower_pmic_opregion_driver = {
.probe = intel_xpower_pmic_opregion_probe,
.driver = {
.name = "axp288_pmic_acpi",
},
};
static int __init intel_xpower_pmic_opregion_driver_init(void)
{
return platform_driver_register(&intel_xpower_pmic_opregion_driver);
}
module_init(intel_xpower_pmic_opregion_driver_init);
MODULE_DESCRIPTION("XPower AXP288 ACPI operation region driver");
MODULE_LICENSE("GPL");


@ -334,10 +334,10 @@ static int acpi_processor_get_power_info_default(struct acpi_processor *pr)
static int acpi_processor_get_power_info_cst(struct acpi_processor *pr)
{
acpi_status status = 0;
acpi_status status;
u64 count;
int current_count;
int i;
int i, ret = 0;
struct acpi_buffer buffer = { ACPI_ALLOCATE_BUFFER, NULL };
union acpi_object *cst;
@ -358,7 +358,7 @@ static int acpi_processor_get_power_info_cst(struct acpi_processor *pr)
/* There must be at least 2 elements */
if (!cst || (cst->type != ACPI_TYPE_PACKAGE) || cst->package.count < 2) {
printk(KERN_ERR PREFIX "not enough elements in _CST\n");
status = -EFAULT;
ret = -EFAULT;
goto end;
}
@ -367,7 +367,7 @@ static int acpi_processor_get_power_info_cst(struct acpi_processor *pr)
/* Validate number of power states. */
if (count < 1 || count != cst->package.count - 1) {
printk(KERN_ERR PREFIX "count given by _CST is not valid\n");
status = -EFAULT;
ret = -EFAULT;
goto end;
}
@ -489,12 +489,12 @@ static int acpi_processor_get_power_info_cst(struct acpi_processor *pr)
/* Validate number of power states discovered */
if (current_count < 2)
status = -EFAULT;
ret = -EFAULT;
end:
kfree(buffer.pointer);
return status;
return ret;
}
static void acpi_processor_power_verify_c3(struct acpi_processor *pr,
@ -985,8 +985,8 @@ static int acpi_processor_setup_cpuidle_states(struct acpi_processor *pr)
state->flags = 0;
switch (cx->type) {
case ACPI_STATE_C1:
if (cx->entry_method == ACPI_CSTATE_FFH)
state->flags |= CPUIDLE_FLAG_TIME_VALID;
if (cx->entry_method != ACPI_CSTATE_FFH)
state->flags |= CPUIDLE_FLAG_TIME_INVALID;
state->enter = acpi_idle_enter_c1;
state->enter_dead = acpi_idle_play_dead;
@ -994,14 +994,12 @@ static int acpi_processor_setup_cpuidle_states(struct acpi_processor *pr)
break;
case ACPI_STATE_C2:
state->flags |= CPUIDLE_FLAG_TIME_VALID;
state->enter = acpi_idle_enter_simple;
state->enter_dead = acpi_idle_play_dead;
drv->safe_state_index = count;
break;
case ACPI_STATE_C3:
state->flags |= CPUIDLE_FLAG_TIME_VALID;
state->enter = pr->flags.bm_check ?
acpi_idle_enter_bm :
acpi_idle_enter_simple;
@ -1111,7 +1109,7 @@ static int acpi_processor_registered;
int acpi_processor_power_init(struct acpi_processor *pr)
{
acpi_status status = 0;
acpi_status status;
int retval;
struct cpuidle_device *dev;
static int first_run;

drivers/acpi/property.c (new file)

@ -0,0 +1,551 @@
/*
* ACPI device specific properties support.
*
* Copyright (C) 2014, Intel Corporation
* All rights reserved.
*
* Authors: Mika Westerberg <mika.westerberg@linux.intel.com>
* Darren Hart <dvhart@linux.intel.com>
* Rafael J. Wysocki <rafael.j.wysocki@intel.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/acpi.h>
#include <linux/device.h>
#include <linux/export.h>
#include "internal.h"
/* ACPI _DSD device properties UUID: daffd814-6eba-4d8c-8a91-bc9bbf4aa301 */
static const u8 prp_uuid[16] = {
0x14, 0xd8, 0xff, 0xda, 0xba, 0x6e, 0x8c, 0x4d,
0x8a, 0x91, 0xbc, 0x9b, 0xbf, 0x4a, 0xa3, 0x01
};
static bool acpi_property_value_ok(const union acpi_object *value)
{
int j;
/*
* The value must be an integer, a string, a reference, or a package
* whose every element must be an integer, a string, or a reference.
*/
switch (value->type) {
case ACPI_TYPE_INTEGER:
case ACPI_TYPE_STRING:
case ACPI_TYPE_LOCAL_REFERENCE:
return true;
case ACPI_TYPE_PACKAGE:
for (j = 0; j < value->package.count; j++)
switch (value->package.elements[j].type) {
case ACPI_TYPE_INTEGER:
case ACPI_TYPE_STRING:
case ACPI_TYPE_LOCAL_REFERENCE:
continue;
default:
return false;
}
return true;
}
return false;
}
static bool acpi_properties_format_valid(const union acpi_object *properties)
{
int i;
for (i = 0; i < properties->package.count; i++) {
const union acpi_object *property;
property = &properties->package.elements[i];
/*
* Only two elements allowed, the first one must be a string and
* the second one has to satisfy certain conditions.
*/
if (property->package.count != 2
|| property->package.elements[0].type != ACPI_TYPE_STRING
|| !acpi_property_value_ok(&property->package.elements[1]))
return false;
}
return true;
}
static void acpi_init_of_compatible(struct acpi_device *adev)
{
const union acpi_object *of_compatible;
struct acpi_hardware_id *hwid;
bool acpi_of = false;
int ret;
/*
* Check if the special PRP0001 ACPI ID is present and in that
* case we fill in Device Tree compatible properties for this
* device.
*/
list_for_each_entry(hwid, &adev->pnp.ids, list) {
if (!strcmp(hwid->id, "PRP0001")) {
acpi_of = true;
break;
}
}
if (!acpi_of)
return;
ret = acpi_dev_get_property_array(adev, "compatible", ACPI_TYPE_STRING,
&of_compatible);
if (ret) {
ret = acpi_dev_get_property(adev, "compatible",
ACPI_TYPE_STRING, &of_compatible);
if (ret) {
acpi_handle_warn(adev->handle,
"PRP0001 requires compatible property\n");
return;
}
}
adev->data.of_compatible = of_compatible;
}
void acpi_init_properties(struct acpi_device *adev)
{
struct acpi_buffer buf = { ACPI_ALLOCATE_BUFFER };
const union acpi_object *desc;
acpi_status status;
int i;
status = acpi_evaluate_object_typed(adev->handle, "_DSD", NULL, &buf,
ACPI_TYPE_PACKAGE);
if (ACPI_FAILURE(status))
return;
desc = buf.pointer;
if (desc->package.count % 2)
goto fail;
/* Look for the device properties UUID. */
for (i = 0; i < desc->package.count; i += 2) {
const union acpi_object *uuid, *properties;
uuid = &desc->package.elements[i];
properties = &desc->package.elements[i + 1];
/*
* The first element must be a UUID and the second one must be
* a package.
*/
if (uuid->type != ACPI_TYPE_BUFFER || uuid->buffer.length != 16
|| properties->type != ACPI_TYPE_PACKAGE)
break;
if (memcmp(uuid->buffer.pointer, prp_uuid, sizeof(prp_uuid)))
continue;
/*
* We found the matching UUID. Now validate the format of the
* package immediately following it.
*/
if (!acpi_properties_format_valid(properties))
break;
adev->data.pointer = buf.pointer;
adev->data.properties = properties;
acpi_init_of_compatible(adev);
return;
}
fail:
dev_warn(&adev->dev, "Returned _DSD data is not valid, skipping\n");
ACPI_FREE(buf.pointer);
}
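/*
 * For reference, a sketch of the _DSD layout this parser accepts; the
 * ASL below is illustrative and the property names are made up:
 *
 *	Name (_DSD, Package () {
 *		ToUUID("daffd814-6eba-4d8c-8a91-bc9bbf4aa301"),
 *		Package () {
 *			Package () { "compatible", "gpio-leds" },
 *			Package () { "linux,default-trigger", "heartbeat" },
 *		}
 *	})
 */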
void acpi_free_properties(struct acpi_device *adev)
{
ACPI_FREE((void *)adev->data.pointer);
adev->data.of_compatible = NULL;
adev->data.pointer = NULL;
adev->data.properties = NULL;
}
/**
* acpi_dev_get_property - return an ACPI property with given name
* @adev: ACPI device to get property
* @name: Name of the property
* @type: Expected property type
* @obj: Location to store the property value (if not %NULL)
*
* Look up a property with @name and store a pointer to the resulting ACPI
* object at the location pointed to by @obj if found.
*
* Callers must not attempt to free the returned objects. These objects will be
* freed by the ACPI core automatically during the removal of @adev.
*
* Return: %0 if property with @name has been found (success),
* %-EINVAL if the arguments are invalid,
* %-ENODATA if the property doesn't exist,
* %-EPROTO if the property value type doesn't match @type.
*/
int acpi_dev_get_property(struct acpi_device *adev, const char *name,
acpi_object_type type, const union acpi_object **obj)
{
const union acpi_object *properties;
int i;
if (!adev || !name)
return -EINVAL;
if (!adev->data.pointer || !adev->data.properties)
return -ENODATA;
properties = adev->data.properties;
for (i = 0; i < properties->package.count; i++) {
const union acpi_object *propname, *propvalue;
const union acpi_object *property;
property = &properties->package.elements[i];
propname = &property->package.elements[0];
propvalue = &property->package.elements[1];
if (!strcmp(name, propname->string.pointer)) {
if (type != ACPI_TYPE_ANY && propvalue->type != type)
return -EPROTO;
else if (obj)
*obj = propvalue;
return 0;
}
}
return -ENODATA;
}
EXPORT_SYMBOL_GPL(acpi_dev_get_property);
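/*
 * Hedged usage sketch (illustrative, not part of the file above): a
 * driver could read a simple _DSD string property from its ACPI
 * companion like this; the property name "mode" is made up.
 */
static const char *example_get_mode(struct acpi_device *adev)
{
	const union acpi_object *obj;

	if (acpi_dev_get_property(adev, "mode", ACPI_TYPE_STRING, &obj))
		return NULL; /* missing property or wrong value type */

	return obj->string.pointer; /* owned by the ACPI core, do not free */
}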
/**
* acpi_dev_get_property_array - return an ACPI array property with given name
* @adev: ACPI device to get property
* @name: Name of the property
* @type: Expected type of array elements
* @obj: Location to store a pointer to the property value (if not NULL)
*
* Look up an array property with @name and store a pointer to the resulting
* ACPI object at the location pointed to by @obj if found.
*
* Callers must not attempt to free the returned objects. Those objects will be
* freed by the ACPI core automatically during the removal of @adev.
*
* Return: %0 if array property (package) with @name has been found (success),
* %-EINVAL if the arguments are invalid,
* %-ENODATA if the property doesn't exist,
* %-EPROTO if the property is not a package or the type of its elements
* doesn't match @type.
*/
int acpi_dev_get_property_array(struct acpi_device *adev, const char *name,
acpi_object_type type,
const union acpi_object **obj)
{
const union acpi_object *prop;
int ret, i;
ret = acpi_dev_get_property(adev, name, ACPI_TYPE_PACKAGE, &prop);
if (ret)
return ret;
if (type != ACPI_TYPE_ANY) {
/* Check that all elements are of correct type. */
for (i = 0; i < prop->package.count; i++)
if (prop->package.elements[i].type != type)
return -EPROTO;
}
if (obj)
*obj = prop;
return 0;
}
EXPORT_SYMBOL_GPL(acpi_dev_get_property_array);
/**
* acpi_dev_get_property_reference - returns handle to the referenced object
* @adev: ACPI device to get property
* @name: Name of the property
* @index: Index of the reference to return
* @args: Location to store the returned reference with optional arguments
*
* Find property with @name, verify that it is a package containing at least
* one object reference and if so, store the ACPI device object pointer to the
* target object in @args->adev. If the reference includes arguments, store
* them in the @args->args[] array.
*
* If there's more than one reference in the property value package, @index is
* used to select the one to return.
*
* Return: %0 on success, negative error code on failure.
*/
int acpi_dev_get_property_reference(struct acpi_device *adev,
const char *name, size_t index,
struct acpi_reference_args *args)
{
const union acpi_object *element, *end;
const union acpi_object *obj;
struct acpi_device *device;
int ret, idx = 0;
ret = acpi_dev_get_property(adev, name, ACPI_TYPE_ANY, &obj);
if (ret)
return ret;
/*
* The simplest case is when the value is a single reference. Just
* return that reference then.
*/
if (obj->type == ACPI_TYPE_LOCAL_REFERENCE) {
if (index)
return -EINVAL;
ret = acpi_bus_get_device(obj->reference.handle, &device);
if (ret)
return ret;
args->adev = device;
args->nargs = 0;
return 0;
}
/*
* If it is not a single reference, then it is a package of
* references followed by number of ints as follows:
*
* Package () { REF, INT, REF, INT, INT }
*
* The index argument is then used to determine which reference
* the caller wants (along with the arguments).
*/
if (obj->type != ACPI_TYPE_PACKAGE || index >= obj->package.count)
return -EPROTO;
element = obj->package.elements;
end = element + obj->package.count;
while (element < end) {
u32 nargs, i;
if (element->type != ACPI_TYPE_LOCAL_REFERENCE)
return -EPROTO;
ret = acpi_bus_get_device(element->reference.handle, &device);
if (ret)
return -ENODEV;
element++;
nargs = 0;
/* assume following integer elements are all args */
for (i = 0; element + i < end; i++) {
int type = element[i].type;
if (type == ACPI_TYPE_INTEGER)
nargs++;
else if (type == ACPI_TYPE_LOCAL_REFERENCE)
break;
else
return -EPROTO;
}
if (idx++ == index) {
args->adev = device;
args->nargs = nargs;
for (i = 0; i < nargs; i++)
args->args[i] = element[i].integer.value;
return 0;
}
element += nargs;
}
return -EPROTO;
}
EXPORT_SYMBOL_GPL(acpi_dev_get_property_reference);
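/*
 * Hedged usage sketch (illustrative): assume a _DSD property named
 * "pwms" whose value is Package () { REF, INT, INT, REF, INT }.  Asking
 * for index 1 returns the second referenced device together with its
 * single integer argument.
 */
static int example_get_second_pwm(struct acpi_device *adev)
{
	struct acpi_reference_args args;
	int ret;

	ret = acpi_dev_get_property_reference(adev, "pwms", 1, &args);
	if (ret)
		return ret;

	/* args.adev is the target device, args.args[0] its argument */
	pr_info("provider %s, arg %llu\n",
		dev_name(&args.adev->dev), args.args[0]);
	return 0;
}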
int acpi_dev_prop_get(struct acpi_device *adev, const char *propname,
void **valptr)
{
return acpi_dev_get_property(adev, propname, ACPI_TYPE_ANY,
(const union acpi_object **)valptr);
}
int acpi_dev_prop_read_single(struct acpi_device *adev, const char *propname,
enum dev_prop_type proptype, void *val)
{
const union acpi_object *obj;
int ret;
if (!val)
return -EINVAL;
if (proptype >= DEV_PROP_U8 && proptype <= DEV_PROP_U64) {
ret = acpi_dev_get_property(adev, propname, ACPI_TYPE_INTEGER, &obj);
if (ret)
return ret;
switch (proptype) {
case DEV_PROP_U8:
if (obj->integer.value > U8_MAX)
return -EOVERFLOW;
*(u8 *)val = obj->integer.value;
break;
case DEV_PROP_U16:
if (obj->integer.value > U16_MAX)
return -EOVERFLOW;
*(u16 *)val = obj->integer.value;
break;
case DEV_PROP_U32:
if (obj->integer.value > U32_MAX)
return -EOVERFLOW;
*(u32 *)val = obj->integer.value;
break;
default:
*(u64 *)val = obj->integer.value;
break;
}
} else if (proptype == DEV_PROP_STRING) {
ret = acpi_dev_get_property(adev, propname, ACPI_TYPE_STRING, &obj);
if (ret)
return ret;
*(char **)val = obj->string.pointer;
} else {
ret = -EINVAL;
}
return ret;
}
static int acpi_copy_property_array_u8(const union acpi_object *items, u8 *val,
size_t nval)
{
int i;
for (i = 0; i < nval; i++) {
if (items[i].type != ACPI_TYPE_INTEGER)
return -EPROTO;
if (items[i].integer.value > U8_MAX)
return -EOVERFLOW;
val[i] = items[i].integer.value;
}
return 0;
}
static int acpi_copy_property_array_u16(const union acpi_object *items,
u16 *val, size_t nval)
{
int i;
for (i = 0; i < nval; i++) {
if (items[i].type != ACPI_TYPE_INTEGER)
return -EPROTO;
if (items[i].integer.value > U16_MAX)
return -EOVERFLOW;
val[i] = items[i].integer.value;
}
return 0;
}
static int acpi_copy_property_array_u32(const union acpi_object *items,
u32 *val, size_t nval)
{
int i;
for (i = 0; i < nval; i++) {
if (items[i].type != ACPI_TYPE_INTEGER)
return -EPROTO;
if (items[i].integer.value > U32_MAX)
return -EOVERFLOW;
val[i] = items[i].integer.value;
}
return 0;
}
static int acpi_copy_property_array_u64(const union acpi_object *items,
u64 *val, size_t nval)
{
int i;
for (i = 0; i < nval; i++) {
if (items[i].type != ACPI_TYPE_INTEGER)
return -EPROTO;
val[i] = items[i].integer.value;
}
return 0;
}
static int acpi_copy_property_array_string(const union acpi_object *items,
char **val, size_t nval)
{
int i;
for (i = 0; i < nval; i++) {
if (items[i].type != ACPI_TYPE_STRING)
return -EPROTO;
val[i] = items[i].string.pointer;
}
return 0;
}
int acpi_dev_prop_read(struct acpi_device *adev, const char *propname,
enum dev_prop_type proptype, void *val, size_t nval)
{
const union acpi_object *obj;
const union acpi_object *items;
int ret;
if (val && nval == 1) {
ret = acpi_dev_prop_read_single(adev, propname, proptype, val);
if (!ret)
return ret;
}
ret = acpi_dev_get_property_array(adev, propname, ACPI_TYPE_ANY, &obj);
if (ret)
return ret;
if (!val)
return obj->package.count;
else if (nval <= 0)
return -EINVAL;
if (nval > obj->package.count)
return -EOVERFLOW;
items = obj->package.elements;
switch (proptype) {
case DEV_PROP_U8:
ret = acpi_copy_property_array_u8(items, (u8 *)val, nval);
break;
case DEV_PROP_U16:
ret = acpi_copy_property_array_u16(items, (u16 *)val, nval);
break;
case DEV_PROP_U32:
ret = acpi_copy_property_array_u32(items, (u32 *)val, nval);
break;
case DEV_PROP_U64:
ret = acpi_copy_property_array_u64(items, (u64 *)val, nval);
break;
case DEV_PROP_STRING:
ret = acpi_copy_property_array_string(items, (char **)val, nval);
break;
default:
ret = -EINVAL;
break;
}
return ret;
}
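/*
 * Hedged usage sketch: reading a hypothetical "channels" u32 array
 * property in two steps - query the element count with a NULL buffer
 * first, then read the values.  All names here are illustrative.
 */
static int example_read_channels(struct acpi_device *adev)
{
	u32 channels[8];
	int count;

	count = acpi_dev_prop_read(adev, "channels", DEV_PROP_U32, NULL, 0);
	if (count < 0)
		return count;
	if (count > ARRAY_SIZE(channels))
		count = ARRAY_SIZE(channels);

	return acpi_dev_prop_read(adev, "channels", DEV_PROP_U32,
				  channels, count);
}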


@ -36,6 +36,8 @@ bool acpi_force_hot_remove;
static const char *dummy_hid = "device";
static LIST_HEAD(acpi_dep_list);
static DEFINE_MUTEX(acpi_dep_list_lock);
static LIST_HEAD(acpi_bus_id_list);
static DEFINE_MUTEX(acpi_scan_lock);
static LIST_HEAD(acpi_scan_handlers_list);
@ -43,6 +45,12 @@ DEFINE_MUTEX(acpi_device_lock);
LIST_HEAD(acpi_wakeup_device_list);
static DEFINE_MUTEX(acpi_hp_context_lock);
struct acpi_dep_data {
struct list_head node;
acpi_handle master;
acpi_handle slave;
};
struct acpi_device_bus_id{
char bus_id[15];
unsigned int instance_no;
@ -124,17 +132,56 @@ static int create_modalias(struct acpi_device *acpi_dev, char *modalias,
if (list_empty(&acpi_dev->pnp.ids))
return 0;
len = snprintf(modalias, size, "acpi:");
size -= len;
/*
* If the device has PRP0001 we expose DT compatible modalias
* instead in form of of:NnameTCcompatible.
*/
if (acpi_dev->data.of_compatible) {
struct acpi_buffer buf = { ACPI_ALLOCATE_BUFFER };
const union acpi_object *of_compatible, *obj;
int i, nval;
char *c;
list_for_each_entry(id, &acpi_dev->pnp.ids, list) {
count = snprintf(&modalias[len], size, "%s:", id->id);
if (count < 0)
return -EINVAL;
if (count >= size)
return -ENOMEM;
len += count;
size -= count;
acpi_get_name(acpi_dev->handle, ACPI_SINGLE_NAME, &buf);
/* DT strings are all in lower case */
for (c = buf.pointer; *c != '\0'; c++)
*c = tolower(*c);
len = snprintf(modalias, size, "of:N%sT", (char *)buf.pointer);
ACPI_FREE(buf.pointer);
of_compatible = acpi_dev->data.of_compatible;
if (of_compatible->type == ACPI_TYPE_PACKAGE) {
nval = of_compatible->package.count;
obj = of_compatible->package.elements;
} else { /* Must be ACPI_TYPE_STRING. */
nval = 1;
obj = of_compatible;
}
for (i = 0; i < nval; i++, obj++) {
count = snprintf(&modalias[len], size, "C%s",
obj->string.pointer);
if (count < 0)
return -EINVAL;
if (count >= size)
return -ENOMEM;
len += count;
size -= count;
}
} else {
len = snprintf(modalias, size, "acpi:");
size -= len;
list_for_each_entry(id, &acpi_dev->pnp.ids, list) {
count = snprintf(&modalias[len], size, "%s:", id->id);
if (count < 0)
return -EINVAL;
if (count >= size)
return -ENOMEM;
len += count;
size -= count;
}
}
modalias[len] = '\0';
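/*
 * Illustrative result for a hypothetical PRP0001 device at \_SB.LEDS
 * with a "compatible" value of "gpio-leds": the emitted modalias is
 * "of:NledsTCgpio-leds" rather than an "acpi:..." string, so existing
 * DT-based module aliases keep matching.
 */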
@ -902,6 +949,51 @@ int acpi_match_device_ids(struct acpi_device *device,
}
EXPORT_SYMBOL(acpi_match_device_ids);
/* Performs match against special "PRP0001" shoehorn ACPI ID */
static bool acpi_of_driver_match_device(struct device *dev,
const struct device_driver *drv)
{
const union acpi_object *of_compatible, *obj;
struct acpi_device *adev;
int i, nval;
adev = ACPI_COMPANION(dev);
if (!adev)
return false;
of_compatible = adev->data.of_compatible;
if (!drv->of_match_table || !of_compatible)
return false;
if (of_compatible->type == ACPI_TYPE_PACKAGE) {
nval = of_compatible->package.count;
obj = of_compatible->package.elements;
} else { /* Must be ACPI_TYPE_STRING. */
nval = 1;
obj = of_compatible;
}
/* Now we can look for the driver DT compatible strings */
for (i = 0; i < nval; i++, obj++) {
const struct of_device_id *id;
for (id = drv->of_match_table; id->compatible[0]; id++)
if (!strcasecmp(obj->string.pointer, id->compatible))
return true;
}
return false;
}
bool acpi_driver_match_device(struct device *dev,
const struct device_driver *drv)
{
if (!drv->acpi_match_table)
return acpi_of_driver_match_device(dev, drv);
return !!acpi_match_device(drv->acpi_match_table, dev);
}
EXPORT_SYMBOL_GPL(acpi_driver_match_device);
static void acpi_free_power_resources_lists(struct acpi_device *device)
{
int i;
@ -922,6 +1014,7 @@ static void acpi_device_release(struct device *dev)
{
struct acpi_device *acpi_dev = to_acpi_device(dev);
acpi_free_properties(acpi_dev);
acpi_free_pnp_ids(&acpi_dev->pnp);
acpi_free_power_resources_lists(acpi_dev);
kfree(acpi_dev);
@ -1304,6 +1397,26 @@ int acpi_device_add(struct acpi_device *device,
return result;
}
struct acpi_device *acpi_get_next_child(struct device *dev,
struct acpi_device *child)
{
struct acpi_device *adev = ACPI_COMPANION(dev);
struct list_head *head, *next;
if (!adev)
return NULL;
head = &adev->children;
if (list_empty(head))
return NULL;
if (!child)
return list_first_entry(head, struct acpi_device, node);
next = child->node.next;
return next == head ? NULL : list_entry(next, struct acpi_device, node);
}
/* --------------------------------------------------------------------------
Driver Management
-------------------------------------------------------------------------- */
@ -1923,9 +2036,11 @@ void acpi_init_device_object(struct acpi_device *device, acpi_handle handle,
device->device_type = type;
device->handle = handle;
device->parent = acpi_bus_get_parent(handle);
device->fwnode.type = FWNODE_ACPI;
acpi_set_device_status(device, sta);
acpi_device_get_busid(device);
acpi_set_pnp_ids(handle, &device->pnp, type);
acpi_init_properties(device);
acpi_bus_get_flags(device);
device->flags.match_driver = false;
device->flags.initialized = true;
@ -2086,6 +2201,59 @@ static void acpi_scan_init_hotplug(struct acpi_device *adev)
}
}
static void acpi_device_dep_initialize(struct acpi_device *adev)
{
struct acpi_dep_data *dep;
struct acpi_handle_list dep_devices;
acpi_status status;
int i;
if (!acpi_has_method(adev->handle, "_DEP"))
return;
status = acpi_evaluate_reference(adev->handle, "_DEP", NULL,
&dep_devices);
if (ACPI_FAILURE(status)) {
dev_err(&adev->dev, "Failed to evaluate _DEP.\n");
return;
}
for (i = 0; i < dep_devices.count; i++) {
struct acpi_device_info *info;
int skip;
status = acpi_get_object_info(dep_devices.handles[i], &info);
if (ACPI_FAILURE(status)) {
dev_err(&adev->dev, "Error reading device info\n");
continue;
}
/*
* Skip the dependency of Windows System Power
* Management Controller
*/
skip = info->valid & ACPI_VALID_HID &&
!strcmp(info->hardware_id.string, "INT3396");
kfree(info);
if (skip)
continue;
dep = kzalloc(sizeof(struct acpi_dep_data), GFP_KERNEL);
if (!dep)
return;
dep->master = dep_devices.handles[i];
dep->slave = adev->handle;
adev->dep_unmet++;
mutex_lock(&acpi_dep_list_lock);
list_add_tail(&dep->node, &acpi_dep_list);
mutex_unlock(&acpi_dep_list_lock);
}
}
static acpi_status acpi_bus_check_add(acpi_handle handle, u32 lvl_not_used,
void *not_used, void **return_value)
{
@ -2112,6 +2280,7 @@ static acpi_status acpi_bus_check_add(acpi_handle handle, u32 lvl_not_used,
return AE_CTRL_DEPTH;
acpi_scan_init_hotplug(device);
acpi_device_dep_initialize(device);
out:
if (!*return_value)
@ -2232,6 +2401,29 @@ static void acpi_bus_attach(struct acpi_device *device)
device->handler->hotplug.notify_online(device);
}
void acpi_walk_dep_device_list(acpi_handle handle)
{
struct acpi_dep_data *dep, *tmp;
struct acpi_device *adev;
mutex_lock(&acpi_dep_list_lock);
list_for_each_entry_safe(dep, tmp, &acpi_dep_list, node) {
if (dep->master == handle) {
acpi_bus_get_device(dep->slave, &adev);
if (!adev)
continue;
adev->dep_unmet--;
if (!adev->dep_unmet)
acpi_bus_attach(adev);
list_del(&dep->node);
kfree(dep);
}
}
mutex_unlock(&acpi_dep_list_lock);
}
EXPORT_SYMBOL_GPL(acpi_walk_dep_device_list);
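/*
 * Hedged usage sketch (not part of the file above): the driver of a
 * "master" device that other AML depends on (a serial-bus controller,
 * for example) is expected to call acpi_walk_dep_device_list() once it
 * can service requests, so that devices deferred because of _DEP (such
 * as the battery driver earlier in this series, which returns
 * -EPROBE_DEFER while dep_unmet is set) finally get enumerated.
 */
static int example_master_probe(struct platform_device *pdev)
{
	/* ...bring the controller up first, then release dependents... */
	acpi_walk_dep_device_list(ACPI_HANDLE(&pdev->dev));
	return 0;
}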
/**
* acpi_bus_scan - Add ACPI device node objects in a given namespace scope.
* @handle: Root of the namespace scope to scan.


@ -630,6 +630,7 @@ static int acpi_freeze_begin(void)
static int acpi_freeze_prepare(void)
{
acpi_enable_all_wakeup_gpes();
acpi_os_wait_events_complete();
enable_irq_wake(acpi_gbl_FADT.sci_interrupt);
return 0;
}
@ -825,6 +826,7 @@ static void acpi_power_off_prepare(void)
/* Prepare to power off the system */
acpi_sleep_prepare(ACPI_STATE_S5);
acpi_disable_all_gpes();
acpi_os_wait_events_complete();
}
static void acpi_power_off(void)


@ -190,30 +190,24 @@ void acpi_table_print_madt_entry(struct acpi_subtable_header *header)
}
}
int __init
acpi_table_parse_entries(char *id,
unsigned long table_size,
int entry_id,
acpi_tbl_entry_handler handler,
unsigned int max_entries)
acpi_parse_entries(char *id, unsigned long table_size,
acpi_tbl_entry_handler handler,
struct acpi_table_header *table_header,
int entry_id, unsigned int max_entries)
{
struct acpi_table_header *table_header = NULL;
struct acpi_subtable_header *entry;
unsigned int count = 0;
int count = 0;
unsigned long table_end;
acpi_size tbl_size;
if (acpi_disabled)
return -ENODEV;
if (!handler)
if (!id || !handler)
return -EINVAL;
if (strncmp(id, ACPI_SIG_MADT, 4) == 0)
acpi_get_table_with_size(id, acpi_apic_instance, &table_header, &tbl_size);
else
acpi_get_table_with_size(id, 0, &table_header, &tbl_size);
if (!table_size)
return -EINVAL;
if (!table_header) {
pr_warn("%4.4s not present\n", id);
@ -230,9 +224,12 @@ acpi_table_parse_entries(char *id,
while (((unsigned long)entry) + sizeof(struct acpi_subtable_header) <
table_end) {
if (entry->type == entry_id
&& (!max_entries || count++ < max_entries))
&& (!max_entries || count < max_entries)) {
if (handler(entry, table_end))
goto err;
return -EINVAL;
count++;
}
/*
* If entry->length is 0, break from this loop to avoid
@ -240,22 +237,53 @@ acpi_table_parse_entries(char *id,
*/
if (entry->length == 0) {
pr_err("[%4.4s:0x%02x] Invalid zero length\n", id, entry_id);
goto err;
return -EINVAL;
}
entry = (struct acpi_subtable_header *)
((unsigned long)entry + entry->length);
}
if (max_entries && count > max_entries) {
pr_warn("[%4.4s:0x%02x] ignored %i entries of %i found\n",
id, entry_id, count - max_entries, count);
}
return count;
}
int __init
acpi_table_parse_entries(char *id,
unsigned long table_size,
int entry_id,
acpi_tbl_entry_handler handler,
unsigned int max_entries)
{
struct acpi_table_header *table_header = NULL;
acpi_size tbl_size;
int count;
u32 instance = 0;
if (acpi_disabled)
return -ENODEV;
if (!id || !handler)
return -EINVAL;
if (!strncmp(id, ACPI_SIG_MADT, 4))
instance = acpi_apic_instance;
acpi_get_table_with_size(id, instance, &table_header, &tbl_size);
if (!table_header) {
pr_warn("%4.4s not present\n", id);
return -ENODEV;
}
count = acpi_parse_entries(id, table_size, handler, table_header,
entry_id, max_entries);
early_acpi_os_unmap_memory((char *)table_header, tbl_size);
return count;
err:
early_acpi_os_unmap_memory((char *)table_header, tbl_size);
return -EINVAL;
}
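For reference, a sketch of how the unchanged external interface is typically invoked; the handler and wrapper below are hypothetical:
/* Hypothetical MADT walker built on the wrapper above. */
static int __init example_parse_lapic(struct acpi_subtable_header *header,
                                      const unsigned long end)
{
        /* inspect one local APIC subtable entry here */
        return 0;
}

static void __init example_scan_madt(void)
{
        acpi_table_parse_entries(ACPI_SIG_MADT,
                                 sizeof(struct acpi_table_madt),
                                 ACPI_MADT_TYPE_LOCAL_APIC,
                                 example_parse_lapic, 0);
}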
int __init


@ -136,8 +136,7 @@ acpi_extract_package(union acpi_object *package,
break;
case 'B':
size_required +=
sizeof(u8 *) +
(element->buffer.length * sizeof(u8));
sizeof(u8 *) + element->buffer.length;
tail_offset += sizeof(u8 *);
break;
default:
@ -255,7 +254,7 @@ acpi_extract_package(union acpi_object *package,
memcpy(tail, element->buffer.pointer,
element->buffer.length);
head += sizeof(u8 *);
tail += element->buffer.length * sizeof(u8);
tail += element->buffer.length;
break;
default:
/* Should never get here */


@ -1681,6 +1681,19 @@ static void acpi_video_dev_register_backlight(struct acpi_video_device *device)
printk(KERN_ERR PREFIX "Create sysfs link\n");
}
static void acpi_video_run_bcl_for_osi(struct acpi_video_bus *video)
{
struct acpi_video_device *dev;
union acpi_object *levels;
mutex_lock(&video->device_list_lock);
list_for_each_entry(dev, &video->video_device_list, entry) {
if (!acpi_video_device_lcd_query_levels(dev, &levels))
kfree(levels);
}
mutex_unlock(&video->device_list_lock);
}
static int acpi_video_bus_register_backlight(struct acpi_video_bus *video)
{
struct acpi_video_device *dev;
@ -1688,6 +1701,8 @@ static int acpi_video_bus_register_backlight(struct acpi_video_bus *video)
if (video->backlight_registered)
return 0;
acpi_video_run_bcl_for_osi(video);
if (!acpi_video_verify_backlight_support())
return 0;


@ -124,7 +124,7 @@ static const struct dev_pm_ops amba_pm = {
.thaw = pm_generic_thaw,
.poweroff = pm_generic_poweroff,
.restore = pm_generic_restore,
SET_PM_RUNTIME_PM_OPS(
SET_RUNTIME_PM_OPS(
amba_pm_runtime_suspend,
amba_pm_runtime_resume,
NULL


@ -4,7 +4,7 @@ obj-y := component.o core.o bus.o dd.o syscore.o \
driver.o class.o platform.o \
cpu.o firmware.o init.o map.o devres.o \
attribute_container.o transport_class.o \
topology.o container.o
topology.o container.o property.o
obj-$(CONFIG_DEVTMPFS) += devtmpfs.o
obj-$(CONFIG_DMA_CMA) += dma-contiguous.o
obj-y += power/


@ -12,6 +12,7 @@
#include <linux/pm.h>
#include <linux/pm_clock.h>
#include <linux/clk.h>
#include <linux/clkdev.h>
#include <linux/slab.h>
#include <linux/err.h>
@ -34,14 +35,20 @@ struct pm_clock_entry {
/**
* pm_clk_enable - Enable a clock, reporting any errors
* @dev: The device for the given clock
* @clk: The clock being enabled.
* @ce: PM clock entry corresponding to the clock.
*/
static inline int __pm_clk_enable(struct device *dev, struct clk *clk)
static inline int __pm_clk_enable(struct device *dev, struct pm_clock_entry *ce)
{
int ret = clk_enable(clk);
if (ret)
dev_err(dev, "%s: failed to enable clk %p, error %d\n",
__func__, clk, ret);
int ret;
if (ce->status < PCE_STATUS_ERROR) {
ret = clk_enable(ce->clk);
if (!ret)
ce->status = PCE_STATUS_ENABLED;
else
dev_err(dev, "%s: failed to enable clk %p, error %d\n",
__func__, ce->clk, ret);
}
return ret;
}
@ -53,7 +60,8 @@ static inline int __pm_clk_enable(struct device *dev, struct clk *clk)
*/
static void pm_clk_acquire(struct device *dev, struct pm_clock_entry *ce)
{
ce->clk = clk_get(dev, ce->con_id);
if (!ce->clk)
ce->clk = clk_get(dev, ce->con_id);
if (IS_ERR(ce->clk)) {
ce->status = PCE_STATUS_ERROR;
} else {
@ -63,15 +71,8 @@ static void pm_clk_acquire(struct device *dev, struct pm_clock_entry *ce)
}
}
/**
* pm_clk_add - Start using a device clock for power management.
* @dev: Device whose clock is going to be used for power management.
* @con_id: Connection ID of the clock.
*
* Add the clock represented by @con_id to the list of clocks used for
* the power management of @dev.
*/
int pm_clk_add(struct device *dev, const char *con_id)
static int __pm_clk_add(struct device *dev, const char *con_id,
struct clk *clk)
{
struct pm_subsys_data *psd = dev_to_psd(dev);
struct pm_clock_entry *ce;
@ -93,6 +94,12 @@ int pm_clk_add(struct device *dev, const char *con_id)
kfree(ce);
return -ENOMEM;
}
} else {
if (IS_ERR(ce->clk) || !__clk_get(clk)) {
kfree(ce);
return -ENOENT;
}
ce->clk = clk;
}
pm_clk_acquire(dev, ce);
@ -103,6 +110,32 @@ int pm_clk_add(struct device *dev, const char *con_id)
return 0;
}
/**
* pm_clk_add - Start using a device clock for power management.
* @dev: Device whose clock is going to be used for power management.
* @con_id: Connection ID of the clock.
*
* Add the clock represented by @con_id to the list of clocks used for
* the power management of @dev.
*/
int pm_clk_add(struct device *dev, const char *con_id)
{
return __pm_clk_add(dev, con_id, NULL);
}
/**
* pm_clk_add_clk - Start using a device clock for power management.
* @dev: Device whose clock is going to be used for power management.
* @clk: Clock pointer
*
* Add the clock to the list of clocks used for the power management of @dev.
* This increments the refcount of the clock pointer; call clk_put() on it when done.
*/
int pm_clk_add_clk(struct device *dev, struct clk *clk)
{
return __pm_clk_add(dev, NULL, clk);
}
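A minimal sketch (not from the patch) of a caller that already holds a struct clk, for instance one looked up by index from a device tree node, handing it over to the PM core:
        struct clk *clk = of_clk_get(dev->of_node, 0);

        if (!IS_ERR(clk)) {
                if (pm_clk_add_clk(dev, clk))
                        dev_warn(dev, "failed to add clock to PM clock list\n");
                clk_put(clk);   /* pm_clk_add_clk() holds its own reference */
        }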
/**
* __pm_clk_remove - Destroy PM clock entry.
* @ce: PM clock entry to destroy.
@ -223,10 +256,6 @@ void pm_clk_destroy(struct device *dev)
}
}
#endif /* CONFIG_PM */
#ifdef CONFIG_PM_RUNTIME
/**
* pm_clk_suspend - Disable clocks in a device's PM clock list.
* @dev: Device to disable the clocks for.
@ -266,7 +295,6 @@ int pm_clk_resume(struct device *dev)
struct pm_subsys_data *psd = dev_to_psd(dev);
struct pm_clock_entry *ce;
unsigned long flags;
int ret;
dev_dbg(dev, "%s()\n", __func__);
@ -275,13 +303,8 @@ int pm_clk_resume(struct device *dev)
spin_lock_irqsave(&psd->lock, flags);
list_for_each_entry(ce, &psd->clock_list, node) {
if (ce->status < PCE_STATUS_ERROR) {
ret = __pm_clk_enable(dev, ce->clk);
if (!ret)
ce->status = PCE_STATUS_ENABLED;
}
}
list_for_each_entry(ce, &psd->clock_list, node)
__pm_clk_enable(dev, ce);
spin_unlock_irqrestore(&psd->lock, flags);
@ -346,74 +369,7 @@ static int pm_clk_notify(struct notifier_block *nb,
return 0;
}
#else /* !CONFIG_PM_RUNTIME */
#ifdef CONFIG_PM
/**
* pm_clk_suspend - Disable clocks in a device's PM clock list.
* @dev: Device to disable the clocks for.
*/
int pm_clk_suspend(struct device *dev)
{
struct pm_subsys_data *psd = dev_to_psd(dev);
struct pm_clock_entry *ce;
unsigned long flags;
dev_dbg(dev, "%s()\n", __func__);
/* If there is no driver, the clocks are already disabled. */
if (!psd || !dev->driver)
return 0;
spin_lock_irqsave(&psd->lock, flags);
list_for_each_entry_reverse(ce, &psd->clock_list, node) {
if (ce->status < PCE_STATUS_ERROR) {
if (ce->status == PCE_STATUS_ENABLED)
clk_disable(ce->clk);
ce->status = PCE_STATUS_ACQUIRED;
}
}
spin_unlock_irqrestore(&psd->lock, flags);
return 0;
}
/**
* pm_clk_resume - Enable clocks in a device's PM clock list.
* @dev: Device to enable the clocks for.
*/
int pm_clk_resume(struct device *dev)
{
struct pm_subsys_data *psd = dev_to_psd(dev);
struct pm_clock_entry *ce;
unsigned long flags;
int ret;
dev_dbg(dev, "%s()\n", __func__);
/* If there is no driver, the clocks should remain disabled. */
if (!psd || !dev->driver)
return 0;
spin_lock_irqsave(&psd->lock, flags);
list_for_each_entry(ce, &psd->clock_list, node) {
if (ce->status < PCE_STATUS_ERROR) {
ret = __pm_clk_enable(dev, ce->clk);
if (!ret)
ce->status = PCE_STATUS_ENABLED;
}
}
spin_unlock_irqrestore(&psd->lock, flags);
return 0;
}
#endif /* CONFIG_PM */
#else /* !CONFIG_PM */
/**
* enable_clock - Enable a device clock.
@ -493,7 +449,7 @@ static int pm_clk_notify(struct notifier_block *nb,
return 0;
}
#endif /* !CONFIG_PM_RUNTIME */
#endif /* !CONFIG_PM */
/**
* pm_clk_add_notifier - Add bus type notifier for power management clocks.


@ -12,6 +12,7 @@
#include <linux/pm_runtime.h>
#include <linux/pm_domain.h>
#include <linux/pm_qos.h>
#include <linux/pm_clock.h>
#include <linux/slab.h>
#include <linux/err.h>
#include <linux/sched.h>
@ -151,6 +152,59 @@ static void genpd_recalc_cpu_exit_latency(struct generic_pm_domain *genpd)
genpd->cpuidle_data->idle_state->exit_latency = usecs64;
}
static int genpd_power_on(struct generic_pm_domain *genpd)
{
ktime_t time_start;
s64 elapsed_ns;
int ret;
if (!genpd->power_on)
return 0;
time_start = ktime_get();
ret = genpd->power_on(genpd);
if (ret)
return ret;
elapsed_ns = ktime_to_ns(ktime_sub(ktime_get(), time_start));
if (elapsed_ns <= genpd->power_on_latency_ns)
return ret;
genpd->power_on_latency_ns = elapsed_ns;
genpd->max_off_time_changed = true;
genpd_recalc_cpu_exit_latency(genpd);
pr_warn("%s: Power-%s latency exceeded, new value %lld ns\n",
genpd->name, "on", elapsed_ns);
return ret;
}
static int genpd_power_off(struct generic_pm_domain *genpd)
{
ktime_t time_start;
s64 elapsed_ns;
int ret;
if (!genpd->power_off)
return 0;
time_start = ktime_get();
ret = genpd->power_off(genpd);
if (ret == -EBUSY)
return ret;
elapsed_ns = ktime_to_ns(ktime_sub(ktime_get(), time_start));
if (elapsed_ns <= genpd->power_off_latency_ns)
return ret;
genpd->power_off_latency_ns = elapsed_ns;
genpd->max_off_time_changed = true;
pr_warn("%s: Power-%s latency exceeded, new value %lld ns\n",
genpd->name, "off", elapsed_ns);
return ret;
}
/**
* __pm_genpd_poweron - Restore power to a given PM domain and its masters.
* @genpd: PM domain to power up.
@ -222,25 +276,9 @@ static int __pm_genpd_poweron(struct generic_pm_domain *genpd)
}
}
if (genpd->power_on) {
ktime_t time_start = ktime_get();
s64 elapsed_ns;
ret = genpd->power_on(genpd);
if (ret)
goto err;
elapsed_ns = ktime_to_ns(ktime_sub(ktime_get(), time_start));
if (elapsed_ns > genpd->power_on_latency_ns) {
genpd->power_on_latency_ns = elapsed_ns;
genpd->max_off_time_changed = true;
genpd_recalc_cpu_exit_latency(genpd);
if (genpd->name)
pr_warning("%s: Power-on latency exceeded, "
"new value %lld ns\n", genpd->name,
elapsed_ns);
}
}
ret = genpd_power_on(genpd);
if (ret)
goto err;
out:
genpd_set_active(genpd);
@ -280,8 +318,6 @@ int pm_genpd_name_poweron(const char *domain_name)
return genpd ? pm_genpd_poweron(genpd) : -EINVAL;
}
#ifdef CONFIG_PM_RUNTIME
static int genpd_start_dev_no_timing(struct generic_pm_domain *genpd,
struct device *dev)
{
@ -544,16 +580,11 @@ static int pm_genpd_poweroff(struct generic_pm_domain *genpd)
}
if (genpd->power_off) {
ktime_t time_start;
s64 elapsed_ns;
if (atomic_read(&genpd->sd_count) > 0) {
ret = -EBUSY;
goto out;
}
time_start = ktime_get();
/*
* If sd_count > 0 at this point, one of the subdomains hasn't
* managed to call pm_genpd_poweron() for the master yet after
@ -562,21 +593,11 @@ static int pm_genpd_poweroff(struct generic_pm_domain *genpd)
* the pm_genpd_poweron() restore power for us (this shouldn't
* happen very often).
*/
ret = genpd->power_off(genpd);
ret = genpd_power_off(genpd);
if (ret == -EBUSY) {
genpd_set_active(genpd);
goto out;
}
elapsed_ns = ktime_to_ns(ktime_sub(ktime_get(), time_start));
if (elapsed_ns > genpd->power_off_latency_ns) {
genpd->power_off_latency_ns = elapsed_ns;
genpd->max_off_time_changed = true;
if (genpd->name)
pr_warning("%s: Power-off latency exceeded, "
"new value %lld ns\n", genpd->name,
elapsed_ns);
}
}
genpd->status = GPD_STATE_POWER_OFF;
@ -755,33 +776,15 @@ static int __init genpd_poweroff_unused(void)
}
late_initcall(genpd_poweroff_unused);
#else
static inline int genpd_dev_pm_qos_notifier(struct notifier_block *nb,
unsigned long val, void *ptr)
{
return NOTIFY_DONE;
}
static inline void
genpd_queue_power_off_work(struct generic_pm_domain *genpd) {}
static inline void genpd_power_off_work_fn(struct work_struct *work) {}
#define pm_genpd_runtime_suspend NULL
#define pm_genpd_runtime_resume NULL
#endif /* CONFIG_PM_RUNTIME */
#ifdef CONFIG_PM_SLEEP
/**
* pm_genpd_present - Check if the given PM domain has been initialized.
* @genpd: PM domain to check.
*/
static bool pm_genpd_present(struct generic_pm_domain *genpd)
static bool pm_genpd_present(const struct generic_pm_domain *genpd)
{
struct generic_pm_domain *gpd;
const struct generic_pm_domain *gpd;
if (IS_ERR_OR_NULL(genpd))
return false;
@ -822,8 +825,7 @@ static void pm_genpd_sync_poweroff(struct generic_pm_domain *genpd)
|| atomic_read(&genpd->sd_count) > 0)
return;
if (genpd->power_off)
genpd->power_off(genpd);
genpd_power_off(genpd);
genpd->status = GPD_STATE_POWER_OFF;
@ -854,8 +856,7 @@ static void pm_genpd_sync_poweron(struct generic_pm_domain *genpd)
genpd_sd_counter_inc(link->master);
}
if (genpd->power_on)
genpd->power_on(genpd);
genpd_power_on(genpd);
genpd->status = GPD_STATE_ACTIVE;
}
@ -1277,8 +1278,7 @@ static int pm_genpd_restore_noirq(struct device *dev)
* If the domain was off before the hibernation, make
* sure it will be off going forward.
*/
if (genpd->power_off)
genpd->power_off(genpd);
genpd_power_off(genpd);
return 0;
}
@ -1364,7 +1364,7 @@ void pm_genpd_syscore_poweron(struct device *dev)
}
EXPORT_SYMBOL_GPL(pm_genpd_syscore_poweron);
#else
#else /* !CONFIG_PM_SLEEP */
#define pm_genpd_prepare NULL
#define pm_genpd_suspend NULL
@ -1929,6 +1929,12 @@ void pm_genpd_init(struct generic_pm_domain *genpd,
genpd->domain.ops.complete = pm_genpd_complete;
genpd->dev_ops.save_state = pm_genpd_default_save_state;
genpd->dev_ops.restore_state = pm_genpd_default_restore_state;
if (genpd->flags & GENPD_FLAG_PM_CLK) {
genpd->dev_ops.stop = pm_clk_suspend;
genpd->dev_ops.start = pm_clk_resume;
}
mutex_lock(&gpd_list_lock);
list_add(&genpd->gpd_list_node, &gpd_list);
mutex_unlock(&gpd_list_lock);
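To show how the new flag is meant to be consumed, here is an illustrative, hypothetical platform setup that lets the PM clock list handle device stop/start for a domain:
static struct generic_pm_domain example_pd = {
        .name  = "example_pd",
        .flags = GENPD_FLAG_PM_CLK,     /* stop/start devices via PM clocks */
};

/* during platform init */
pm_genpd_init(&example_pd, NULL, true);

/* and for each device placed in the domain */
pm_clk_create(dev);
pm_clk_add(dev, NULL);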
@ -2216,11 +2222,12 @@ int genpd_dev_pm_attach(struct device *dev)
}
dev->pm_domain->detach = genpd_dev_pm_detach;
pm_genpd_poweron(pd);
return 0;
}
EXPORT_SYMBOL_GPL(genpd_dev_pm_attach);
#endif
#endif /* CONFIG_PM_GENERIC_DOMAINS_OF */
/*** debugfs support ***/
@ -2236,10 +2243,8 @@ static struct dentry *pm_genpd_debugfs_dir;
/*
* TODO: This function is a slightly modified version of rtpm_status_show
* from sysfs.c, but dependencies between PM_GENERIC_DOMAINS and PM_RUNTIME
* are too loose to generalize it.
* from sysfs.c, so generalize it.
*/
#ifdef CONFIG_PM_RUNTIME
static void rtpm_status_str(struct seq_file *s, struct device *dev)
{
static const char * const status_lookup[] = {
@ -2261,12 +2266,6 @@ static void rtpm_status_str(struct seq_file *s, struct device *dev)
seq_puts(s, p);
}
#else
static void rtpm_status_str(struct seq_file *s, struct device *dev)
{
seq_puts(s, "active");
}
#endif
static int pm_genpd_summary_one(struct seq_file *s,
struct generic_pm_domain *gpd)


@ -11,8 +11,6 @@
#include <linux/pm_qos.h>
#include <linux/hrtimer.h>
#ifdef CONFIG_PM_RUNTIME
static int dev_update_qos_constraint(struct device *dev, void *data)
{
s64 *constraint_ns_p = data;
@ -227,15 +225,6 @@ static bool always_on_power_down_ok(struct dev_pm_domain *domain)
return false;
}
#else /* !CONFIG_PM_RUNTIME */
static inline bool default_stop_ok(struct device *dev) { return false; }
#define default_power_down_ok NULL
#define always_on_power_down_ok NULL
#endif /* !CONFIG_PM_RUNTIME */
struct dev_power_governor simple_qos_governor = {
.stop_ok = default_stop_ok,
.power_down_ok = default_power_down_ok,


@ -49,11 +49,12 @@
* are protected by the dev_opp_list_lock for integrity.
* IMPORTANT: the opp nodes should be maintained in increasing
* order.
* @dynamic: true if this OPP was added at runtime (not created from static DT entries)
* @available: true/false - marks if this OPP as available or not
* @rate: Frequency in hertz
* @u_volt: Nominal voltage in microvolts corresponding to this OPP
* @dev_opp: points back to the device_opp struct this opp belongs to
* @head: RCU callback head used for deferred freeing
* @rcu_head: RCU callback head used for deferred freeing
*
* This structure stores the OPP information for a given device.
*/
@ -61,11 +62,12 @@ struct dev_pm_opp {
struct list_head node;
bool available;
bool dynamic;
unsigned long rate;
unsigned long u_volt;
struct device_opp *dev_opp;
struct rcu_head head;
struct rcu_head rcu_head;
};
/**
@ -76,7 +78,8 @@ struct dev_pm_opp {
* RCU usage: nodes are not modified in the list of device_opp,
* however addition is possible and is secured by dev_opp_list_lock
* @dev: device pointer
* @head: notifier head to notify the OPP availability changes.
* @srcu_head: notifier head to notify the OPP availability changes.
* @rcu_head: RCU callback head used for deferred freeing
* @opp_list: list of opps
*
* This is an internal data structure maintaining the link to opps attached to
@ -87,7 +90,8 @@ struct device_opp {
struct list_head node;
struct device *dev;
struct srcu_notifier_head head;
struct srcu_notifier_head srcu_head;
struct rcu_head rcu_head;
struct list_head opp_list;
};
@ -378,30 +382,8 @@ struct dev_pm_opp *dev_pm_opp_find_freq_floor(struct device *dev,
}
EXPORT_SYMBOL_GPL(dev_pm_opp_find_freq_floor);
/**
* dev_pm_opp_add() - Add an OPP table from a table definition
* @dev: device for which we do this operation
* @freq: Frequency in Hz for this OPP
* @u_volt: Voltage in uVolts for this OPP
*
* This function adds an opp definition to the opp list and returns status.
* The opp is made available by default and it can be controlled using
* dev_pm_opp_enable/disable functions.
*
* Locking: The internal device_opp and opp structures are RCU protected.
* Hence this function internally uses RCU updater strategy with mutex locks
* to keep the integrity of the internal data structures. Callers should ensure
* that this function is *NOT* called under RCU protection or in contexts where
* mutex cannot be locked.
*
* Return:
* 0: On success OR
* Duplicate OPPs (both freq and volt are same) and opp->available
* -EEXIST: Freq are same and volt are different OR
* Duplicate OPPs (both freq and volt are same) and !opp->available
* -ENOMEM: Memory allocation failure
*/
int dev_pm_opp_add(struct device *dev, unsigned long freq, unsigned long u_volt)
static int dev_pm_opp_add_dynamic(struct device *dev, unsigned long freq,
unsigned long u_volt, bool dynamic)
{
struct device_opp *dev_opp = NULL;
struct dev_pm_opp *opp, *new_opp;
@ -417,6 +399,13 @@ int dev_pm_opp_add(struct device *dev, unsigned long freq, unsigned long u_volt)
/* Hold our list modification lock here */
mutex_lock(&dev_opp_list_lock);
/* populate the opp table */
new_opp->dev_opp = dev_opp;
new_opp->rate = freq;
new_opp->u_volt = u_volt;
new_opp->available = true;
new_opp->dynamic = dynamic;
/* Check for existing list for 'dev' */
dev_opp = find_device_opp(dev);
if (IS_ERR(dev_opp)) {
@ -436,19 +425,15 @@ int dev_pm_opp_add(struct device *dev, unsigned long freq, unsigned long u_volt)
}
dev_opp->dev = dev;
srcu_init_notifier_head(&dev_opp->head);
srcu_init_notifier_head(&dev_opp->srcu_head);
INIT_LIST_HEAD(&dev_opp->opp_list);
/* Secure the device list modification */
list_add_rcu(&dev_opp->node, &dev_opp_list);
head = &dev_opp->opp_list;
goto list_add;
}
/* populate the opp table */
new_opp->dev_opp = dev_opp;
new_opp->rate = freq;
new_opp->u_volt = u_volt;
new_opp->available = true;
/*
* Insert new OPP in order of increasing frequency
* and discard if already present
@ -474,6 +459,7 @@ int dev_pm_opp_add(struct device *dev, unsigned long freq, unsigned long u_volt)
return ret;
}
list_add:
list_add_rcu(&new_opp->node, head);
mutex_unlock(&dev_opp_list_lock);
@ -481,11 +467,109 @@ int dev_pm_opp_add(struct device *dev, unsigned long freq, unsigned long u_volt)
* Notify the changes in the availability of the operable
* frequency/voltage list.
*/
srcu_notifier_call_chain(&dev_opp->head, OPP_EVENT_ADD, new_opp);
srcu_notifier_call_chain(&dev_opp->srcu_head, OPP_EVENT_ADD, new_opp);
return 0;
}
/**
* dev_pm_opp_add() - Add an OPP table from a table definition
* @dev: device for which we do this operation
* @freq: Frequency in Hz for this OPP
* @u_volt: Voltage in uVolts for this OPP
*
* This function adds an opp definition to the opp list and returns status.
* The opp is made available by default and it can be controlled using
* dev_pm_opp_enable/disable functions.
*
* Locking: The internal device_opp and opp structures are RCU protected.
* Hence this function internally uses RCU updater strategy with mutex locks
* to keep the integrity of the internal data structures. Callers should ensure
* that this function is *NOT* called under RCU protection or in contexts where
* mutex cannot be locked.
*
* Return:
* 0: On success OR
* Duplicate OPPs (both freq and volt are same) and opp->available
* -EEXIST: Freq are same and volt are different OR
* Duplicate OPPs (both freq and volt are same) and !opp->available
* -ENOMEM: Memory allocation failure
*/
int dev_pm_opp_add(struct device *dev, unsigned long freq, unsigned long u_volt)
{
return dev_pm_opp_add_dynamic(dev, freq, u_volt, true);
}
EXPORT_SYMBOL_GPL(dev_pm_opp_add);
static void kfree_opp_rcu(struct rcu_head *head)
{
struct dev_pm_opp *opp = container_of(head, struct dev_pm_opp, rcu_head);
kfree_rcu(opp, rcu_head);
}
static void kfree_device_rcu(struct rcu_head *head)
{
struct device_opp *device_opp = container_of(head, struct device_opp, rcu_head);
kfree(device_opp);
}
void __dev_pm_opp_remove(struct device_opp *dev_opp, struct dev_pm_opp *opp)
{
/*
* Notify the changes in the availability of the operable
* frequency/voltage list.
*/
srcu_notifier_call_chain(&dev_opp->srcu_head, OPP_EVENT_REMOVE, opp);
list_del_rcu(&opp->node);
call_srcu(&dev_opp->srcu_head.srcu, &opp->rcu_head, kfree_opp_rcu);
if (list_empty(&dev_opp->opp_list)) {
list_del_rcu(&dev_opp->node);
call_srcu(&dev_opp->srcu_head.srcu, &dev_opp->rcu_head,
kfree_device_rcu);
}
}
/**
* dev_pm_opp_remove() - Remove an OPP from OPP list
* @dev: device for which we do this operation
* @freq: OPP to remove with matching 'freq'
*
* This function removes an opp from the opp list.
*/
void dev_pm_opp_remove(struct device *dev, unsigned long freq)
{
struct dev_pm_opp *opp;
struct device_opp *dev_opp;
bool found = false;
/* Hold our list modification lock here */
mutex_lock(&dev_opp_list_lock);
dev_opp = find_device_opp(dev);
if (IS_ERR(dev_opp))
goto unlock;
list_for_each_entry(opp, &dev_opp->opp_list, node) {
if (opp->rate == freq) {
found = true;
break;
}
}
if (!found) {
dev_warn(dev, "%s: Couldn't find OPP with freq: %lu\n",
__func__, freq);
goto unlock;
}
__dev_pm_opp_remove(dev_opp, opp);
unlock:
mutex_unlock(&dev_opp_list_lock);
}
EXPORT_SYMBOL_GPL(dev_pm_opp_remove);
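A short usage sketch pairing the two exports; the frequencies and voltages are made up:
        dev_pm_opp_add(cpu_dev, 1000000000, 1100000);   /* 1 GHz at 1.10 V */
        dev_pm_opp_add(cpu_dev,  800000000, 1000000);   /* 800 MHz at 1.00 V */
        /* ... */
        dev_pm_opp_remove(cpu_dev, 800000000);          /* drop the 800 MHz OPP */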
/**
* opp_set_availability() - helper to set the availability of an opp
* @dev: device for which we do this operation
@ -557,14 +641,14 @@ static int opp_set_availability(struct device *dev, unsigned long freq,
list_replace_rcu(&opp->node, &new_opp->node);
mutex_unlock(&dev_opp_list_lock);
kfree_rcu(opp, head);
call_srcu(&dev_opp->srcu_head.srcu, &opp->rcu_head, kfree_opp_rcu);
/* Notify the change of the OPP availability */
if (availability_req)
srcu_notifier_call_chain(&dev_opp->head, OPP_EVENT_ENABLE,
srcu_notifier_call_chain(&dev_opp->srcu_head, OPP_EVENT_ENABLE,
new_opp);
else
srcu_notifier_call_chain(&dev_opp->head, OPP_EVENT_DISABLE,
srcu_notifier_call_chain(&dev_opp->srcu_head, OPP_EVENT_DISABLE,
new_opp);
return 0;
@ -629,7 +713,7 @@ struct srcu_notifier_head *dev_pm_opp_get_notifier(struct device *dev)
if (IS_ERR(dev_opp))
return ERR_CAST(dev_opp); /* matching type */
return &dev_opp->head;
return &dev_opp->srcu_head;
}
#ifdef CONFIG_OF
@ -666,7 +750,7 @@ int of_init_opp_table(struct device *dev)
unsigned long freq = be32_to_cpup(val++) * 1000;
unsigned long volt = be32_to_cpup(val++);
if (dev_pm_opp_add(dev, freq, volt))
if (dev_pm_opp_add_dynamic(dev, freq, volt, false))
dev_warn(dev, "%s: Failed to add OPP %ld\n",
__func__, freq);
nr -= 2;
@ -675,4 +759,34 @@ int of_init_opp_table(struct device *dev)
return 0;
}
EXPORT_SYMBOL_GPL(of_init_opp_table);
/**
* of_free_opp_table() - Free OPP table entries created from static DT entries
* @dev: device pointer used to lookup device OPPs.
*
* Free OPPs created using static entries present in DT.
*/
void of_free_opp_table(struct device *dev)
{
struct device_opp *dev_opp;
struct dev_pm_opp *opp, *tmp;
/* Check for existing list for 'dev' */
dev_opp = find_device_opp(dev);
if (WARN(IS_ERR(dev_opp), "%s: dev_opp: %ld\n", dev_name(dev),
PTR_ERR(dev_opp)))
return;
/* Hold our list modification lock here */
mutex_lock(&dev_opp_list_lock);
/* Free static OPPs */
list_for_each_entry_safe(opp, tmp, &dev_opp->opp_list, node) {
if (!opp->dynamic)
__dev_pm_opp_remove(dev_opp, opp);
}
mutex_unlock(&dev_opp_list_lock);
}
EXPORT_SYMBOL_GPL(of_free_opp_table);
#endif
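The intended pairing in a driver, sketched with error handling trimmed; cpu_dev stands for whatever device owns the OPP table:
        ret = of_init_opp_table(cpu_dev);       /* populate OPPs from DT */
        if (ret)
                return ret;
        /* ... use the dev_pm_opp_* helpers ... */
        of_free_opp_table(cpu_dev);             /* drops only the static DT OPPs */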


@ -9,7 +9,7 @@ static inline void device_pm_init_common(struct device *dev)
}
}
#ifdef CONFIG_PM_RUNTIME
#ifdef CONFIG_PM
static inline void pm_runtime_early_init(struct device *dev)
{
@ -20,7 +20,21 @@ static inline void pm_runtime_early_init(struct device *dev)
extern void pm_runtime_init(struct device *dev);
extern void pm_runtime_remove(struct device *dev);
#else /* !CONFIG_PM_RUNTIME */
/*
* sysfs.c
*/
extern int dpm_sysfs_add(struct device *dev);
extern void dpm_sysfs_remove(struct device *dev);
extern void rpm_sysfs_remove(struct device *dev);
extern int wakeup_sysfs_add(struct device *dev);
extern void wakeup_sysfs_remove(struct device *dev);
extern int pm_qos_sysfs_add_resume_latency(struct device *dev);
extern void pm_qos_sysfs_remove_resume_latency(struct device *dev);
extern int pm_qos_sysfs_add_flags(struct device *dev);
extern void pm_qos_sysfs_remove_flags(struct device *dev);
#else /* CONFIG_PM */
static inline void pm_runtime_early_init(struct device *dev)
{
@ -30,7 +44,15 @@ static inline void pm_runtime_early_init(struct device *dev)
static inline void pm_runtime_init(struct device *dev) {}
static inline void pm_runtime_remove(struct device *dev) {}
#endif /* !CONFIG_PM_RUNTIME */
static inline int dpm_sysfs_add(struct device *dev) { return 0; }
static inline void dpm_sysfs_remove(struct device *dev) {}
static inline void rpm_sysfs_remove(struct device *dev) {}
static inline int wakeup_sysfs_add(struct device *dev) { return 0; }
static inline void wakeup_sysfs_remove(struct device *dev) {}
static inline int pm_qos_sysfs_add(struct device *dev) { return 0; }
static inline void pm_qos_sysfs_remove(struct device *dev) {}
#endif
#ifdef CONFIG_PM_SLEEP
@ -77,31 +99,3 @@ static inline void device_pm_init(struct device *dev)
device_pm_sleep_init(dev);
pm_runtime_init(dev);
}
#ifdef CONFIG_PM
/*
* sysfs.c
*/
extern int dpm_sysfs_add(struct device *dev);
extern void dpm_sysfs_remove(struct device *dev);
extern void rpm_sysfs_remove(struct device *dev);
extern int wakeup_sysfs_add(struct device *dev);
extern void wakeup_sysfs_remove(struct device *dev);
extern int pm_qos_sysfs_add_resume_latency(struct device *dev);
extern void pm_qos_sysfs_remove_resume_latency(struct device *dev);
extern int pm_qos_sysfs_add_flags(struct device *dev);
extern void pm_qos_sysfs_remove_flags(struct device *dev);
#else /* CONFIG_PM */
static inline int dpm_sysfs_add(struct device *dev) { return 0; }
static inline void dpm_sysfs_remove(struct device *dev) {}
static inline void rpm_sysfs_remove(struct device *dev) {}
static inline int wakeup_sysfs_add(struct device *dev) { return 0; }
static inline void wakeup_sysfs_remove(struct device *dev) {}
static inline int pm_qos_sysfs_add(struct device *dev) { return 0; }
static inline void pm_qos_sysfs_remove(struct device *dev) {}
#endif


@ -599,7 +599,6 @@ int dev_pm_qos_add_ancestor_request(struct device *dev,
}
EXPORT_SYMBOL_GPL(dev_pm_qos_add_ancestor_request);
#ifdef CONFIG_PM_RUNTIME
static void __dev_pm_qos_drop_user_request(struct device *dev,
enum dev_pm_qos_req_type type)
{
@ -880,7 +879,3 @@ int dev_pm_qos_update_user_latency_tolerance(struct device *dev, s32 val)
mutex_unlock(&dev_pm_qos_mtx);
return ret;
}
#else /* !CONFIG_PM_RUNTIME */
static void __dev_pm_qos_hide_latency_limit(struct device *dev) {}
static void __dev_pm_qos_hide_flags(struct device *dev) {}
#endif /* CONFIG_PM_RUNTIME */


@ -13,42 +13,37 @@
#include <trace/events/rpm.h>
#include "power.h"
#define RPM_GET_CALLBACK(dev, cb) \
({ \
int (*__rpm_cb)(struct device *__d); \
\
if (dev->pm_domain) \
__rpm_cb = dev->pm_domain->ops.cb; \
else if (dev->type && dev->type->pm) \
__rpm_cb = dev->type->pm->cb; \
else if (dev->class && dev->class->pm) \
__rpm_cb = dev->class->pm->cb; \
else if (dev->bus && dev->bus->pm) \
__rpm_cb = dev->bus->pm->cb; \
else \
__rpm_cb = NULL; \
\
if (!__rpm_cb && dev->driver && dev->driver->pm) \
__rpm_cb = dev->driver->pm->cb; \
\
__rpm_cb; \
})
typedef int (*pm_callback_t)(struct device *);
static int (*rpm_get_suspend_cb(struct device *dev))(struct device *)
static pm_callback_t __rpm_get_callback(struct device *dev, size_t cb_offset)
{
return RPM_GET_CALLBACK(dev, runtime_suspend);
pm_callback_t cb;
const struct dev_pm_ops *ops;
if (dev->pm_domain)
ops = &dev->pm_domain->ops;
else if (dev->type && dev->type->pm)
ops = dev->type->pm;
else if (dev->class && dev->class->pm)
ops = dev->class->pm;
else if (dev->bus && dev->bus->pm)
ops = dev->bus->pm;
else
ops = NULL;
if (ops)
cb = *(pm_callback_t *)((void *)ops + cb_offset);
else
cb = NULL;
if (!cb && dev->driver && dev->driver->pm)
cb = *(pm_callback_t *)((void *)dev->driver->pm + cb_offset);
return cb;
}
static int (*rpm_get_resume_cb(struct device *dev))(struct device *)
{
return RPM_GET_CALLBACK(dev, runtime_resume);
}
#ifdef CONFIG_PM_RUNTIME
static int (*rpm_get_idle_cb(struct device *dev))(struct device *)
{
return RPM_GET_CALLBACK(dev, runtime_idle);
}
#define RPM_GET_CALLBACK(dev, callback) \
__rpm_get_callback(dev, offsetof(struct dev_pm_ops, callback))
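For readers unfamiliar with the offsetof() trick, this fragment (not part of the patch) spells out what one expansion of the macro computes:
        const struct dev_pm_ops *ops = dev->bus->pm;   /* one possible source */
        size_t off = offsetof(struct dev_pm_ops, runtime_suspend);
        pm_callback_t cb = *(pm_callback_t *)((void *)ops + off);
        /* cb now equals ops->runtime_suspend, as RPM_GET_CALLBACK() would return */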
static int rpm_resume(struct device *dev, int rpmflags);
static int rpm_suspend(struct device *dev, int rpmflags);
@ -347,7 +342,7 @@ static int rpm_idle(struct device *dev, int rpmflags)
dev->power.idle_notification = true;
callback = rpm_get_idle_cb(dev);
callback = RPM_GET_CALLBACK(dev, runtime_idle);
if (callback)
retval = __rpm_callback(callback, dev);
@ -517,7 +512,7 @@ static int rpm_suspend(struct device *dev, int rpmflags)
__update_runtime_status(dev, RPM_SUSPENDING);
callback = rpm_get_suspend_cb(dev);
callback = RPM_GET_CALLBACK(dev, runtime_suspend);
retval = rpm_callback(callback, dev);
if (retval)
@ -737,7 +732,7 @@ static int rpm_resume(struct device *dev, int rpmflags)
__update_runtime_status(dev, RPM_RESUMING);
callback = rpm_get_resume_cb(dev);
callback = RPM_GET_CALLBACK(dev, runtime_resume);
retval = rpm_callback(callback, dev);
if (retval) {
@ -1402,7 +1397,6 @@ void pm_runtime_remove(struct device *dev)
if (dev->power.irq_safe && dev->parent)
pm_runtime_put(dev->parent);
}
#endif
/**
* pm_runtime_force_suspend - Force a device into suspend state if needed.
@ -1422,16 +1416,10 @@ int pm_runtime_force_suspend(struct device *dev)
int ret = 0;
pm_runtime_disable(dev);
/*
* Note that pm_runtime_status_suspended() returns false while
* !CONFIG_PM_RUNTIME, which means the device will be put into low
* power state.
*/
if (pm_runtime_status_suspended(dev))
return 0;
callback = rpm_get_suspend_cb(dev);
callback = RPM_GET_CALLBACK(dev, runtime_suspend);
if (!callback) {
ret = -ENOSYS;
@ -1467,7 +1455,7 @@ int pm_runtime_force_resume(struct device *dev)
int (*callback)(struct device *);
int ret = 0;
callback = rpm_get_resume_cb(dev);
callback = RPM_GET_CALLBACK(dev, runtime_resume);
if (!callback) {
ret = -ENOSYS;


@ -95,7 +95,6 @@
const char power_group_name[] = "power";
EXPORT_SYMBOL_GPL(power_group_name);
#ifdef CONFIG_PM_RUNTIME
static const char ctrl_auto[] = "auto";
static const char ctrl_on[] = "on";
@ -330,7 +329,6 @@ static ssize_t pm_qos_remote_wakeup_store(struct device *dev,
static DEVICE_ATTR(pm_qos_remote_wakeup, 0644,
pm_qos_remote_wakeup_show, pm_qos_remote_wakeup_store);
#endif /* CONFIG_PM_RUNTIME */
#ifdef CONFIG_PM_SLEEP
static const char _enabled[] = "enabled";
@ -531,8 +529,6 @@ static DEVICE_ATTR(wakeup_prevent_sleep_time_ms, 0444,
#endif /* CONFIG_PM_SLEEP */
#ifdef CONFIG_PM_ADVANCED_DEBUG
#ifdef CONFIG_PM_RUNTIME
static ssize_t rtpm_usagecount_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
@ -562,10 +558,7 @@ static DEVICE_ATTR(runtime_usage, 0444, rtpm_usagecount_show, NULL);
static DEVICE_ATTR(runtime_active_kids, 0444, rtpm_children_show, NULL);
static DEVICE_ATTR(runtime_enabled, 0444, rtpm_enabled_show, NULL);
#endif
#ifdef CONFIG_PM_SLEEP
static ssize_t async_show(struct device *dev, struct device_attribute *attr,
char *buf)
{
@ -595,7 +588,7 @@ static ssize_t async_store(struct device *dev, struct device_attribute *attr,
static DEVICE_ATTR(async, 0644, async_show, async_store);
#endif
#endif /* CONFIG_PM_SLEEP */
#endif /* CONFIG_PM_ADVANCED_DEBUG */
static struct attribute *power_attrs[] = {
@ -603,12 +596,10 @@ static struct attribute *power_attrs[] = {
#ifdef CONFIG_PM_SLEEP
&dev_attr_async.attr,
#endif
#ifdef CONFIG_PM_RUNTIME
&dev_attr_runtime_status.attr,
&dev_attr_runtime_usage.attr,
&dev_attr_runtime_active_kids.attr,
&dev_attr_runtime_enabled.attr,
#endif
#endif /* CONFIG_PM_ADVANCED_DEBUG */
NULL,
};
@ -640,7 +631,6 @@ static struct attribute_group pm_wakeup_attr_group = {
};
static struct attribute *runtime_attrs[] = {
#ifdef CONFIG_PM_RUNTIME
#ifndef CONFIG_PM_ADVANCED_DEBUG
&dev_attr_runtime_status.attr,
#endif
@ -648,7 +638,6 @@ static struct attribute *runtime_attrs[] = {
&dev_attr_runtime_suspended_time.attr,
&dev_attr_runtime_active_time.attr,
&dev_attr_autosuspend_delay_ms.attr,
#endif /* CONFIG_PM_RUNTIME */
NULL,
};
static struct attribute_group pm_runtime_attr_group = {
@ -657,9 +646,7 @@ static struct attribute_group pm_runtime_attr_group = {
};
static struct attribute *pm_qos_resume_latency_attrs[] = {
#ifdef CONFIG_PM_RUNTIME
&dev_attr_pm_qos_resume_latency_us.attr,
#endif /* CONFIG_PM_RUNTIME */
NULL,
};
static struct attribute_group pm_qos_resume_latency_attr_group = {
@ -668,9 +655,7 @@ static struct attribute_group pm_qos_resume_latency_attr_group = {
};
static struct attribute *pm_qos_latency_tolerance_attrs[] = {
#ifdef CONFIG_PM_RUNTIME
&dev_attr_pm_qos_latency_tolerance_us.attr,
#endif /* CONFIG_PM_RUNTIME */
NULL,
};
static struct attribute_group pm_qos_latency_tolerance_attr_group = {
@ -679,10 +664,8 @@ static struct attribute_group pm_qos_latency_tolerance_attr_group = {
};
static struct attribute *pm_qos_flags_attrs[] = {
#ifdef CONFIG_PM_RUNTIME
&dev_attr_pm_qos_no_power_off.attr,
&dev_attr_pm_qos_remote_wakeup.attr,
#endif /* CONFIG_PM_RUNTIME */
NULL,
};
static struct attribute_group pm_qos_flags_attr_group = {

drivers/base/property.c (new file, 431 lines)

@ -0,0 +1,431 @@
/*
* property.c - Unified device property interface.
*
* Copyright (C) 2014, Intel Corporation
* Authors: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* Mika Westerberg <mika.westerberg@linux.intel.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/property.h>
#include <linux/export.h>
#include <linux/acpi.h>
#include <linux/of.h>
/**
* device_property_present - check if a property of a device is present
* @dev: Device whose property is being checked
* @propname: Name of the property
*
* Check if property @propname is present in the device firmware description.
*/
bool device_property_present(struct device *dev, const char *propname)
{
if (IS_ENABLED(CONFIG_OF) && dev->of_node)
return of_property_read_bool(dev->of_node, propname);
return !acpi_dev_prop_get(ACPI_COMPANION(dev), propname, NULL);
}
EXPORT_SYMBOL_GPL(device_property_present);
/**
* fwnode_property_present - check if a property of a firmware node is present
* @fwnode: Firmware node whose property to check
* @propname: Name of the property
*/
bool fwnode_property_present(struct fwnode_handle *fwnode, const char *propname)
{
if (is_of_node(fwnode))
return of_property_read_bool(of_node(fwnode), propname);
else if (is_acpi_node(fwnode))
return !acpi_dev_prop_get(acpi_node(fwnode), propname, NULL);
return false;
}
EXPORT_SYMBOL_GPL(fwnode_property_present);
#define OF_DEV_PROP_READ_ARRAY(node, propname, type, val, nval) \
(val) ? of_property_read_##type##_array((node), (propname), (val), (nval)) \
: of_property_count_elems_of_size((node), (propname), sizeof(type))
#define DEV_PROP_READ_ARRAY(_dev_, _propname_, _type_, _proptype_, _val_, _nval_) \
IS_ENABLED(CONFIG_OF) && _dev_->of_node ? \
(OF_DEV_PROP_READ_ARRAY(_dev_->of_node, _propname_, _type_, \
_val_, _nval_)) : \
acpi_dev_prop_read(ACPI_COMPANION(_dev_), _propname_, \
_proptype_, _val_, _nval_)
/**
* device_property_read_u8_array - return a u8 array property of a device
* @dev: Device to get the property of
* @propname: Name of the property
* @val: The values are stored here
* @nval: Size of the @val array
*
* Function reads an array of u8 properties with @propname from the device
* firmware description and stores them to @val if found.
*
* Return: %0 if the property was found (success),
* %-EINVAL if given arguments are not valid,
* %-ENODATA if the property does not have a value,
* %-EPROTO if the property is not an array of numbers,
* %-EOVERFLOW if the size of the property is not as expected.
*/
int device_property_read_u8_array(struct device *dev, const char *propname,
u8 *val, size_t nval)
{
return DEV_PROP_READ_ARRAY(dev, propname, u8, DEV_PROP_U8, val, nval);
}
EXPORT_SYMBOL_GPL(device_property_read_u8_array);
/**
* device_property_read_u16_array - return a u16 array property of a device
* @dev: Device to get the property of
* @propname: Name of the property
* @val: The values are stored here
* @nval: Size of the @val array
*
* Function reads an array of u16 properties with @propname from the device
* firmware description and stores them to @val if found.
*
* Return: %0 if the property was found (success),
* %-EINVAL if given arguments are not valid,
* %-ENODATA if the property does not have a value,
* %-EPROTO if the property is not an array of numbers,
* %-EOVERFLOW if the size of the property is not as expected.
*/
int device_property_read_u16_array(struct device *dev, const char *propname,
u16 *val, size_t nval)
{
return DEV_PROP_READ_ARRAY(dev, propname, u16, DEV_PROP_U16, val, nval);
}
EXPORT_SYMBOL_GPL(device_property_read_u16_array);
/**
* device_property_read_u32_array - return a u32 array property of a device
* @dev: Device to get the property of
* @propname: Name of the property
* @val: The values are stored here
* @nval: Size of the @val array
*
* Function reads an array of u32 properties with @propname from the device
* firmware description and stores them to @val if found.
*
* Return: %0 if the property was found (success),
* %-EINVAL if given arguments are not valid,
* %-ENODATA if the property does not have a value,
* %-EPROTO if the property is not an array of numbers,
* %-EOVERFLOW if the size of the property is not as expected.
*/
int device_property_read_u32_array(struct device *dev, const char *propname,
u32 *val, size_t nval)
{
return DEV_PROP_READ_ARRAY(dev, propname, u32, DEV_PROP_U32, val, nval);
}
EXPORT_SYMBOL_GPL(device_property_read_u32_array);
/**
* device_property_read_u64_array - return a u64 array property of a device
* @dev: Device to get the property of
* @propname: Name of the property
* @val: The values are stored here
* @nval: Size of the @val array
*
* Function reads an array of u64 properties with @propname from the device
* firmware description and stores them to @val if found.
*
* Return: %0 if the property was found (success),
* %-EINVAL if given arguments are not valid,
* %-ENODATA if the property does not have a value,
* %-EPROTO if the property is not an array of numbers,
* %-EOVERFLOW if the size of the property is not as expected.
*/
int device_property_read_u64_array(struct device *dev, const char *propname,
u64 *val, size_t nval)
{
return DEV_PROP_READ_ARRAY(dev, propname, u64, DEV_PROP_U64, val, nval);
}
EXPORT_SYMBOL_GPL(device_property_read_u64_array);
/**
* device_property_read_string_array - return a string array property of device
* @dev: Device to get the property of
* @propname: Name of the property
* @val: The values are stored here
* @nval: Size of the @val array
*
* Function reads an array of string properties with @propname from the device
* firmware description and stores them to @val if found.
*
* Return: %0 if the property was found (success),
* %-EINVAL if given arguments are not valid,
* %-ENODATA if the property does not have a value,
* %-EPROTO or %-EILSEQ if the property is not an array of strings,
* %-EOVERFLOW if the size of the property is not as expected.
*/
int device_property_read_string_array(struct device *dev, const char *propname,
const char **val, size_t nval)
{
return IS_ENABLED(CONFIG_OF) && dev->of_node ?
of_property_read_string_array(dev->of_node, propname, val, nval) :
acpi_dev_prop_read(ACPI_COMPANION(dev), propname,
DEV_PROP_STRING, val, nval);
}
EXPORT_SYMBOL_GPL(device_property_read_string_array);
/**
* device_property_read_string - return a string property of a device
* @dev: Device to get the property of
* @propname: Name of the property
* @val: The value is stored here
*
* Function reads property @propname from the device firmware description and
* stores the value into @val if found. The value is checked to be a string.
*
* Return: %0 if the property was found (success),
* %-EINVAL if given arguments are not valid,
* %-ENODATA if the property does not have a value,
* %-EPROTO or %-EILSEQ if the property type is not a string.
*/
int device_property_read_string(struct device *dev, const char *propname,
const char **val)
{
return IS_ENABLED(CONFIG_OF) && dev->of_node ?
of_property_read_string(dev->of_node, propname, val) :
acpi_dev_prop_read(ACPI_COMPANION(dev), propname,
DEV_PROP_STRING, val, 1);
}
EXPORT_SYMBOL_GPL(device_property_read_string);
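To illustrate the point of the unified interface, a hypothetical probe path reading the same properties whether they come from DT or ACPI _DSD; the property names are invented and the helpers are the ones declared in <linux/property.h>:
static int example_probe(struct device *dev)
{
        const char *mode;
        u32 timings[4];
        int ret;

        if (!device_property_present(dev, "vendor,mode"))
                return -ENODEV;

        ret = device_property_read_string(dev, "vendor,mode", &mode);
        if (ret)
                return ret;

        /* fixed-size array read; see the error codes documented above */
        ret = device_property_read_u32_array(dev, "vendor,timings", timings, 4);
        if (ret)
                return ret;

        dev_info(dev, "mode %s\n", mode);
        return 0;
}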
#define FWNODE_PROP_READ_ARRAY(_fwnode_, _propname_, _type_, _proptype_, _val_, _nval_) \
({ \
int _ret_; \
if (is_of_node(_fwnode_)) \
_ret_ = OF_DEV_PROP_READ_ARRAY(of_node(_fwnode_), _propname_, \
_type_, _val_, _nval_); \
else if (is_acpi_node(_fwnode_)) \
_ret_ = acpi_dev_prop_read(acpi_node(_fwnode_), _propname_, \
_proptype_, _val_, _nval_); \
else \
_ret_ = -ENXIO; \
_ret_; \
})
/**
* fwnode_property_read_u8_array - return a u8 array property of firmware node
* @fwnode: Firmware node to get the property of
* @propname: Name of the property
* @val: The values are stored here
* @nval: Size of the @val array
*
* Read an array of u8 properties with @propname from @fwnode and store them to
* @val if found.
*
* Return: %0 if the property was found (success),
* %-EINVAL if given arguments are not valid,
* %-ENODATA if the property does not have a value,
* %-EPROTO if the property is not an array of numbers,
* %-EOVERFLOW if the size of the property is not as expected,
* %-ENXIO if no suitable firmware interface is present.
*/
int fwnode_property_read_u8_array(struct fwnode_handle *fwnode,
const char *propname, u8 *val, size_t nval)
{
return FWNODE_PROP_READ_ARRAY(fwnode, propname, u8, DEV_PROP_U8,
val, nval);
}
EXPORT_SYMBOL_GPL(fwnode_property_read_u8_array);
/**
* fwnode_property_read_u16_array - return a u16 array property of firmware node
* @fwnode: Firmware node to get the property of
* @propname: Name of the property
* @val: The values are stored here
* @nval: Size of the @val array
*
* Read an array of u16 properties with @propname from @fwnode and store them to
* @val if found.
*
* Return: %0 if the property was found (success),
* %-EINVAL if given arguments are not valid,
* %-ENODATA if the property does not have a value,
* %-EPROTO if the property is not an array of numbers,
* %-EOVERFLOW if the size of the property is not as expected,
* %-ENXIO if no suitable firmware interface is present.
*/
int fwnode_property_read_u16_array(struct fwnode_handle *fwnode,
const char *propname, u16 *val, size_t nval)
{
return FWNODE_PROP_READ_ARRAY(fwnode, propname, u16, DEV_PROP_U16,
val, nval);
}
EXPORT_SYMBOL_GPL(fwnode_property_read_u16_array);
/**
* fwnode_property_read_u32_array - return a u32 array property of firmware node
* @fwnode: Firmware node to get the property of
* @propname: Name of the property
* @val: The values are stored here
* @nval: Size of the @val array
*
* Read an array of u32 properties with @propname from @fwnode and store them to
* @val if found.
*
* Return: %0 if the property was found (success),
* %-EINVAL if given arguments are not valid,
* %-ENODATA if the property does not have a value,
* %-EPROTO if the property is not an array of numbers,
* %-EOVERFLOW if the size of the property is not as expected,
* %-ENXIO if no suitable firmware interface is present.
*/
int fwnode_property_read_u32_array(struct fwnode_handle *fwnode,
const char *propname, u32 *val, size_t nval)
{
return FWNODE_PROP_READ_ARRAY(fwnode, propname, u32, DEV_PROP_U32,
val, nval);
}
EXPORT_SYMBOL_GPL(fwnode_property_read_u32_array);
/**
* fwnode_property_read_u64_array - return a u64 array property of a firmware node
* @fwnode: Firmware node to get the property of
* @propname: Name of the property
* @val: The values are stored here
* @nval: Size of the @val array
*
* Read an array of u64 properties with @propname from @fwnode and store them to
* @val if found.
*
* Return: %0 if the property was found (success),
* %-EINVAL if given arguments are not valid,
* %-ENODATA if the property does not have a value,
* %-EPROTO if the property is not an array of numbers,
* %-EOVERFLOW if the size of the property is not as expected,
* %-ENXIO if no suitable firmware interface is present.
*/
int fwnode_property_read_u64_array(struct fwnode_handle *fwnode,
const char *propname, u64 *val, size_t nval)
{
return FWNODE_PROP_READ_ARRAY(fwnode, propname, u64, DEV_PROP_U64,
val, nval);
}
EXPORT_SYMBOL_GPL(fwnode_property_read_u64_array);
/**
* fwnode_property_read_string_array - return string array property of a node
* @fwnode: Firmware node to get the property of
* @propname: Name of the property
* @val: The values are stored here
* @nval: Size of the @val array
*
* Read a string array property @propname from the given firmware node and store
* them to @val if found.
*
* Return: %0 if the property was found (success),
* %-EINVAL if given arguments are not valid,
* %-ENODATA if the property does not have a value,
* %-EPROTO if the property is not an array of strings,
* %-EOVERFLOW if the size of the property is not as expected,
* %-ENXIO if no suitable firmware interface is present.
*/
int fwnode_property_read_string_array(struct fwnode_handle *fwnode,
const char *propname, const char **val,
size_t nval)
{
if (is_of_node(fwnode))
return of_property_read_string_array(of_node(fwnode), propname,
val, nval);
else if (is_acpi_node(fwnode))
return acpi_dev_prop_read(acpi_node(fwnode), propname,
DEV_PROP_STRING, val, nval);
return -ENXIO;
}
EXPORT_SYMBOL_GPL(fwnode_property_read_string_array);
/**
* fwnode_property_read_string - return a string property of a firmware node
* @fwnode: Firmware node to get the property of
* @propname: Name of the property
* @val: The value is stored here
*
* Read property @propname from the given firmware node and store the value into
* @val if found. The value is checked to be a string.
*
* Return: %0 if the property was found (success),
* %-EINVAL if given arguments are not valid,
* %-ENODATA if the property does not have a value,
* %-EPROTO or %-EILSEQ if the property is not a string,
* %-ENXIO if no suitable firmware interface is present.
*/
int fwnode_property_read_string(struct fwnode_handle *fwnode,
const char *propname, const char **val)
{
if (is_of_node(fwnode))
return of_property_read_string(of_node(fwnode), propname, val);
else if (is_acpi_node(fwnode))
return acpi_dev_prop_read(acpi_node(fwnode), propname,
DEV_PROP_STRING, val, 1);
return -ENXIO;
}
EXPORT_SYMBOL_GPL(fwnode_property_read_string);
/**
* device_get_next_child_node - Return the next child node handle for a device
* @dev: Device to find the next child node for.
* @child: Handle to one of the device's child nodes or a null handle.
*/
struct fwnode_handle *device_get_next_child_node(struct device *dev,
struct fwnode_handle *child)
{
if (IS_ENABLED(CONFIG_OF) && dev->of_node) {
struct device_node *node;
node = of_get_next_available_child(dev->of_node, of_node(child));
if (node)
return &node->fwnode;
} else if (IS_ENABLED(CONFIG_ACPI)) {
struct acpi_device *node;
node = acpi_get_next_child(dev, acpi_node(child));
if (node)
return acpi_fwnode_handle(node);
}
return NULL;
}
EXPORT_SYMBOL_GPL(device_get_next_child_node);
/**
* fwnode_handle_put - Drop reference to a device node
* @fwnode: Pointer to the device node to drop the reference to.
*
* This has to be used when terminating device_for_each_child_node() iteration
* with break or return to prevent stale device node references from being left
* behind.
*/
void fwnode_handle_put(struct fwnode_handle *fwnode)
{
if (is_of_node(fwnode))
of_node_put(of_node(fwnode));
}
EXPORT_SYMBOL_GPL(fwnode_handle_put);
/**
* device_get_child_node_count - return the number of child nodes for device
* @dev: Device to count the child nodes for
*/
unsigned int device_get_child_node_count(struct device *dev)
{
struct fwnode_handle *child;
unsigned int count = 0;
device_for_each_child_node(dev, child)
count++;
return count;
}
EXPORT_SYMBOL_GPL(device_get_child_node_count);
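A brief sketch of iterating child firmware nodes with the helpers above; the "label" property is just an example:
        struct fwnode_handle *child;
        const char *label;

        dev_info(dev, "%u child nodes\n", device_get_child_node_count(dev));

        device_for_each_child_node(dev, child) {
                if (fwnode_property_read_string(child, "label", &label))
                        continue;
                dev_info(dev, "child labelled %s\n", label);
        }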


@ -143,7 +143,7 @@ static int exynos_rng_remove(struct platform_device *pdev)
return 0;
}
#if defined(CONFIG_PM_SLEEP) || defined(CONFIG_PM_RUNTIME)
#ifdef CONFIG_PM
static int exynos_rng_runtime_suspend(struct device *dev)
{
struct platform_device *pdev = to_platform_device(dev);


@ -63,7 +63,6 @@ config CPU_FREQ_DEFAULT_GOV_PERFORMANCE
config CPU_FREQ_DEFAULT_GOV_POWERSAVE
bool "powersave"
depends on EXPERT
select CPU_FREQ_GOV_POWERSAVE
help
Use the CPUFreq governor 'powersave' as default. This sets
@ -183,6 +182,8 @@ config CPU_FREQ_GOV_CONSERVATIVE
If in doubt, say N.
comment "CPU frequency scaling drivers"
config CPUFREQ_DT
tristate "Generic DT based cpufreq driver"
depends on HAVE_CLK && OF
@ -196,19 +197,19 @@ config CPUFREQ_DT
If in doubt, say N.
menu "x86 CPU frequency scaling drivers"
depends on X86
if X86
source "drivers/cpufreq/Kconfig.x86"
endmenu
endif
menu "ARM CPU frequency scaling drivers"
depends on ARM || ARM64
if ARM || ARM64
source "drivers/cpufreq/Kconfig.arm"
endmenu
endif
menu "AVR32 CPU frequency scaling drivers"
depends on AVR32
if PPC32 || PPC64
source "drivers/cpufreq/Kconfig.powerpc"
endif
if AVR32
config AVR32_AT32AP_CPUFREQ
bool "CPU frequency driver for AT32AP"
depends on PLATFORM_AT32AP
@ -216,12 +217,9 @@ config AVR32_AT32AP_CPUFREQ
help
This enables the CPU frequency driver for AT32AP processors.
If in doubt, say N.
endif
endmenu
menu "CPUFreq processor drivers"
depends on IA64
if IA64
config IA64_ACPI_CPUFREQ
tristate "ACPI Processor P-States driver"
depends on ACPI_PROCESSOR
@ -232,12 +230,9 @@ config IA64_ACPI_CPUFREQ
For details, take a look at <file:Documentation/cpu-freq/>.
If in doubt, say N.
endif
endmenu
menu "MIPS CPUFreq processor drivers"
depends on MIPS
if MIPS
config LOONGSON2_CPUFREQ
tristate "Loongson2 CPUFreq Driver"
help
@ -250,15 +245,18 @@ config LOONGSON2_CPUFREQ
If in doubt, say N.
endmenu
config LOONGSON1_CPUFREQ
tristate "Loongson1 CPUFreq Driver"
help
This option adds a CPUFreq driver for Loongson1 processors which
support software-configurable CPU frequency.
menu "PowerPC CPU frequency scaling drivers"
depends on PPC32 || PPC64
source "drivers/cpufreq/Kconfig.powerpc"
endmenu
For details, take a look at <file:Documentation/cpu-freq/>.
menu "SPARC CPU frequency scaling drivers"
depends on SPARC64
If in doubt, say N.
endif
if SPARC64
config SPARC_US3_CPUFREQ
tristate "UltraSPARC-III CPU Frequency driver"
help
@ -276,10 +274,9 @@ config SPARC_US2E_CPUFREQ
For details, take a look at <file:Documentation/cpu-freq>.
If in doubt, say N.
endmenu
endif
menu "SH CPU Frequency scaling"
depends on SUPERH
if SUPERH
config SH_CPU_FREQ
tristate "SuperH CPU Frequency driver"
help
@ -293,7 +290,7 @@ config SH_CPU_FREQ
For details, take a look at <file:Documentation/cpu-freq>.
If unsure, say N.
endmenu
endif
endif
endmenu


@ -247,3 +247,11 @@ config ARM_TEGRA_CPUFREQ
default y
help
This adds the CPUFreq driver support for TEGRA SOCs.
config ARM_PXA2xx_CPUFREQ
tristate "Intel PXA2xx CPUfreq driver"
depends on PXA27x || PXA25x
help
This adds the CPUFreq driver support for Intel PXA2xx SOCs.
If in doubt, say N.


@ -61,8 +61,7 @@ obj-$(CONFIG_ARM_IMX6Q_CPUFREQ) += imx6q-cpufreq.o
obj-$(CONFIG_ARM_INTEGRATOR) += integrator-cpufreq.o
obj-$(CONFIG_ARM_KIRKWOOD_CPUFREQ) += kirkwood-cpufreq.o
obj-$(CONFIG_ARM_OMAP2PLUS_CPUFREQ) += omap-cpufreq.o
obj-$(CONFIG_PXA25x) += pxa2xx-cpufreq.o
obj-$(CONFIG_PXA27x) += pxa2xx-cpufreq.o
obj-$(CONFIG_ARM_PXA2xx_CPUFREQ) += pxa2xx-cpufreq.o
obj-$(CONFIG_PXA3xx) += pxa3xx-cpufreq.o
obj-$(CONFIG_ARM_S3C24XX_CPUFREQ) += s3c24xx-cpufreq.o
obj-$(CONFIG_ARM_S3C24XX_CPUFREQ_DEBUGFS) += s3c24xx-cpufreq-debugfs.o
@ -98,6 +97,7 @@ obj-$(CONFIG_CRIS_MACH_ARTPEC3) += cris-artpec3-cpufreq.o
obj-$(CONFIG_ETRAXFS) += cris-etraxfs-cpufreq.o
obj-$(CONFIG_IA64_ACPI_CPUFREQ) += ia64-acpi-cpufreq.o
obj-$(CONFIG_LOONGSON2_CPUFREQ) += loongson2_cpufreq.o
obj-$(CONFIG_LOONGSON1_CPUFREQ) += ls1x-cpufreq.o
obj-$(CONFIG_SH_CPU_FREQ) += sh-cpufreq.o
obj-$(CONFIG_SPARC_US2E_CPUFREQ) += sparc-us2e-cpufreq.o
obj-$(CONFIG_SPARC_US3_CPUFREQ) += sparc-us3-cpufreq.o


@ -289,6 +289,8 @@ static void _put_cluster_clk_and_freq_table(struct device *cpu_dev)
clk_put(clk[cluster]);
dev_pm_opp_free_cpufreq_table(cpu_dev, &freq_table[cluster]);
if (arm_bL_ops->free_opp_table)
arm_bL_ops->free_opp_table(cpu_dev);
dev_dbg(cpu_dev, "%s: cluster: %d\n", __func__, cluster);
}
@ -337,7 +339,7 @@ static int _get_cluster_clk_and_freq_table(struct device *cpu_dev)
if (ret) {
dev_err(cpu_dev, "%s: failed to init cpufreq table, cpu: %d, err: %d\n",
__func__, cpu_dev->id, ret);
goto out;
goto free_opp_table;
}
name[12] = cluster + '0';
@ -354,6 +356,9 @@ static int _get_cluster_clk_and_freq_table(struct device *cpu_dev)
ret = PTR_ERR(clk[cluster]);
dev_pm_opp_free_cpufreq_table(cpu_dev, &freq_table[cluster]);
free_opp_table:
if (arm_bL_ops->free_opp_table)
arm_bL_ops->free_opp_table(cpu_dev);
out:
dev_err(cpu_dev, "%s: Failed to get data for cluster: %d\n", __func__,
cluster);


@ -25,13 +25,16 @@
struct cpufreq_arm_bL_ops {
char name[CPUFREQ_NAME_LEN];
int (*get_transition_latency)(struct device *cpu_dev);
/*
* This must set up the OPP table for cpu_dev in a way similar to what
* of_init_opp_table() does.
*/
int (*init_opp_table)(struct device *cpu_dev);
/* Optional */
int (*get_transition_latency)(struct device *cpu_dev);
void (*free_opp_table)(struct device *cpu_dev);
};
int bL_cpufreq_register(struct cpufreq_arm_bL_ops *ops);


@ -82,6 +82,7 @@ static struct cpufreq_arm_bL_ops dt_bL_ops = {
.name = "dt-bl",
.get_transition_latency = dt_get_transition_latency,
.init_opp_table = dt_init_opp_table,
.free_opp_table = of_free_opp_table,
};
static int generic_bL_probe(struct platform_device *pdev)


@ -58,6 +58,8 @@ static int set_target(struct cpufreq_policy *policy, unsigned int index)
old_freq = clk_get_rate(cpu_clk) / 1000;
if (!IS_ERR(cpu_reg)) {
unsigned long opp_freq;
rcu_read_lock();
opp = dev_pm_opp_find_freq_ceil(cpu_dev, &freq_Hz);
if (IS_ERR(opp)) {
@ -67,13 +69,16 @@ static int set_target(struct cpufreq_policy *policy, unsigned int index)
return PTR_ERR(opp);
}
volt = dev_pm_opp_get_voltage(opp);
opp_freq = dev_pm_opp_get_freq(opp);
rcu_read_unlock();
tol = volt * priv->voltage_tolerance / 100;
volt_old = regulator_get_voltage(cpu_reg);
dev_dbg(cpu_dev, "Found OPP: %ld kHz, %ld uV\n",
opp_freq / 1000, volt);
}
dev_dbg(cpu_dev, "%u MHz, %ld mV --> %u MHz, %ld mV\n",
old_freq / 1000, volt_old ? volt_old / 1000 : -1,
old_freq / 1000, (volt_old > 0) ? volt_old / 1000 : -1,
new_freq / 1000, volt ? volt / 1000 : -1);
/* scaling up? scale voltage before frequency */
@ -89,7 +94,7 @@ static int set_target(struct cpufreq_policy *policy, unsigned int index)
ret = clk_set_rate(cpu_clk, freq_exact);
if (ret) {
dev_err(cpu_dev, "failed to set clock rate: %d\n", ret);
if (!IS_ERR(cpu_reg))
if (!IS_ERR(cpu_reg) && volt_old > 0)
regulator_set_voltage_tol(cpu_reg, volt_old, tol);
return ret;
}
@ -181,7 +186,6 @@ static int cpufreq_init(struct cpufreq_policy *policy)
{
struct cpufreq_dt_platform_data *pd;
struct cpufreq_frequency_table *freq_table;
struct thermal_cooling_device *cdev;
struct device_node *np;
struct private_data *priv;
struct device *cpu_dev;
@ -210,7 +214,7 @@ static int cpufreq_init(struct cpufreq_policy *policy)
priv = kzalloc(sizeof(*priv), GFP_KERNEL);
if (!priv) {
ret = -ENOMEM;
goto out_put_node;
goto out_free_opp;
}
of_property_read_u32(np, "voltage-tolerance", &priv->voltage_tolerance);
@ -264,20 +268,6 @@ static int cpufreq_init(struct cpufreq_policy *policy)
goto out_free_priv;
}
/*
* For now, just loading the cooling device;
* thermal DT code takes care of matching them.
*/
if (of_find_property(np, "#cooling-cells", NULL)) {
cdev = of_cpufreq_cooling_register(np, cpu_present_mask);
if (IS_ERR(cdev))
dev_err(cpu_dev,
"running cpufreq without cooling device: %ld\n",
PTR_ERR(cdev));
else
priv->cdev = cdev;
}
priv->cpu_dev = cpu_dev;
priv->cpu_reg = cpu_reg;
policy->driver_data = priv;
@ -287,7 +277,7 @@ static int cpufreq_init(struct cpufreq_policy *policy)
if (ret) {
dev_err(cpu_dev, "%s: invalid frequency table: %d\n", __func__,
ret);
goto out_cooling_unregister;
goto out_free_cpufreq_table;
}
policy->cpuinfo.transition_latency = transition_latency;
@ -300,12 +290,12 @@ static int cpufreq_init(struct cpufreq_policy *policy)
return 0;
out_cooling_unregister:
cpufreq_cooling_unregister(priv->cdev);
out_free_cpufreq_table:
dev_pm_opp_free_cpufreq_table(cpu_dev, &freq_table);
out_free_priv:
kfree(priv);
out_put_node:
out_free_opp:
of_free_opp_table(cpu_dev);
of_node_put(np);
out_put_reg_clk:
clk_put(cpu_clk);
@ -319,8 +309,10 @@ static int cpufreq_exit(struct cpufreq_policy *policy)
{
struct private_data *priv = policy->driver_data;
cpufreq_cooling_unregister(priv->cdev);
if (priv->cdev)
cpufreq_cooling_unregister(priv->cdev);
dev_pm_opp_free_cpufreq_table(priv->cpu_dev, &policy->freq_table);
of_free_opp_table(priv->cpu_dev);
clk_put(policy->clk);
if (!IS_ERR(priv->cpu_reg))
regulator_put(priv->cpu_reg);
@ -329,6 +321,33 @@ static int cpufreq_exit(struct cpufreq_policy *policy)
return 0;
}
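/*
* The cpufreq core invokes .ready() once the policy is fully set up.
* If the CPU's DT node has #cooling-cells, register it as a cooling
* device here so the thermal framework can use cpufreq for throttling.
*/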
static void cpufreq_ready(struct cpufreq_policy *policy)
{
struct private_data *priv = policy->driver_data;
struct device_node *np = of_node_get(priv->cpu_dev->of_node);
if (WARN_ON(!np))
return;
/*
* For now, just loading the cooling device;
* thermal DT code takes care of matching them.
*/
if (of_find_property(np, "#cooling-cells", NULL)) {
priv->cdev = of_cpufreq_cooling_register(np,
policy->related_cpus);
if (IS_ERR(priv->cdev)) {
dev_err(priv->cpu_dev,
"running cpufreq without cooling device: %ld\n",
PTR_ERR(priv->cdev));
priv->cdev = NULL;
}
}
of_node_put(np);
}
static struct cpufreq_driver dt_cpufreq_driver = {
.flags = CPUFREQ_STICKY | CPUFREQ_NEED_INITIAL_FREQ_CHECK,
.verify = cpufreq_generic_frequency_table_verify,
@ -336,6 +355,7 @@ static struct cpufreq_driver dt_cpufreq_driver = {
.get = cpufreq_generic_get,
.init = cpufreq_init,
.exit = cpufreq_exit,
.ready = cpufreq_ready,
.name = "cpufreq-dt",
.attr = cpufreq_generic_attr,
};


@ -535,7 +535,7 @@ static int cpufreq_set_policy(struct cpufreq_policy *policy,
static ssize_t store_##file_name \
(struct cpufreq_policy *policy, const char *buf, size_t count) \
{ \
int ret; \
int ret, temp; \
struct cpufreq_policy new_policy; \
\
ret = cpufreq_get_policy(&new_policy, policy->cpu); \
@ -546,8 +546,10 @@ static ssize_t store_##file_name \
if (ret != 1) \
return -EINVAL; \
\
temp = new_policy.object; \
ret = cpufreq_set_policy(policy, &new_policy); \
policy->user_policy.object = policy->object; \
if (!ret) \
policy->user_policy.object = temp; \
\
return ret ? ret : count; \
}
@ -898,46 +900,31 @@ static int cpufreq_add_dev_interface(struct cpufreq_policy *policy,
struct freq_attr **drv_attr;
int ret = 0;
/* prepare interface data */
ret = kobject_init_and_add(&policy->kobj, &ktype_cpufreq,
&dev->kobj, "cpufreq");
if (ret)
return ret;
/* set up files for this cpu device */
drv_attr = cpufreq_driver->attr;
while ((drv_attr) && (*drv_attr)) {
ret = sysfs_create_file(&policy->kobj, &((*drv_attr)->attr));
if (ret)
goto err_out_kobj_put;
return ret;
drv_attr++;
}
if (cpufreq_driver->get) {
ret = sysfs_create_file(&policy->kobj, &cpuinfo_cur_freq.attr);
if (ret)
goto err_out_kobj_put;
return ret;
}
ret = sysfs_create_file(&policy->kobj, &scaling_cur_freq.attr);
if (ret)
goto err_out_kobj_put;
return ret;
if (cpufreq_driver->bios_limit) {
ret = sysfs_create_file(&policy->kobj, &bios_limit.attr);
if (ret)
goto err_out_kobj_put;
return ret;
}
ret = cpufreq_add_dev_symlink(policy);
if (ret)
goto err_out_kobj_put;
return ret;
err_out_kobj_put:
kobject_put(&policy->kobj);
wait_for_completion(&policy->kobj_unregister);
return ret;
return cpufreq_add_dev_symlink(policy);
}
static void cpufreq_init_policy(struct cpufreq_policy *policy)
@ -1196,6 +1183,8 @@ static int __cpufreq_add_dev(struct device *dev, struct subsys_interface *sif)
goto err_set_policy_cpu;
}
down_write(&policy->rwsem);
/* related cpus should at least have policy->cpus */
cpumask_or(policy->related_cpus, policy->related_cpus, policy->cpus);
@ -1208,9 +1197,17 @@ static int __cpufreq_add_dev(struct device *dev, struct subsys_interface *sif)
if (!recover_policy) {
policy->user_policy.min = policy->min;
policy->user_policy.max = policy->max;
/* prepare interface data */
ret = kobject_init_and_add(&policy->kobj, &ktype_cpufreq,
&dev->kobj, "cpufreq");
if (ret) {
pr_err("%s: failed to init policy->kobj: %d\n",
__func__, ret);
goto err_init_policy_kobj;
}
}
down_write(&policy->rwsem);
write_lock_irqsave(&cpufreq_driver_lock, flags);
for_each_cpu(j, policy->cpus)
per_cpu(cpufreq_cpu_data, j) = policy;
@ -1288,8 +1285,13 @@ static int __cpufreq_add_dev(struct device *dev, struct subsys_interface *sif)
up_write(&policy->rwsem);
kobject_uevent(&policy->kobj, KOBJ_ADD);
up_read(&cpufreq_rwsem);
/* Callback for handling stuff after policy is ready */
if (cpufreq_driver->ready)
cpufreq_driver->ready(policy);
pr_debug("initialization complete\n");
return 0;
@ -1301,6 +1303,11 @@ static int __cpufreq_add_dev(struct device *dev, struct subsys_interface *sif)
per_cpu(cpufreq_cpu_data, j) = NULL;
write_unlock_irqrestore(&cpufreq_driver_lock, flags);
if (!recover_policy) {
kobject_put(&policy->kobj);
wait_for_completion(&policy->kobj_unregister);
}
err_init_policy_kobj:
up_write(&policy->rwsem);
if (cpufreq_driver->exit)


@ -371,7 +371,7 @@ static int exynos_cpufreq_probe(struct platform_device *pdev)
if (ret) {
dev_err(dvfs_info->dev,
"failed to init cpufreq table: %d\n", ret);
goto err_put_node;
goto err_free_opp;
}
dvfs_info->freq_count = dev_pm_opp_get_opp_count(dvfs_info->dev);
exynos_sort_descend_freq_table();
@ -423,6 +423,8 @@ static int exynos_cpufreq_probe(struct platform_device *pdev)
err_free_table:
dev_pm_opp_free_cpufreq_table(dvfs_info->dev, &dvfs_info->freq_table);
err_free_opp:
of_free_opp_table(dvfs_info->dev);
err_put_node:
of_node_put(np);
dev_err(&pdev->dev, "%s: failed initialization\n", __func__);
@ -433,6 +435,7 @@ static int exynos_cpufreq_remove(struct platform_device *pdev)
{
cpufreq_unregister_driver(&exynos_driver);
dev_pm_opp_free_cpufreq_table(dvfs_info->dev, &dvfs_info->freq_table);
of_free_opp_table(dvfs_info->dev);
return 0;
}


@ -31,6 +31,7 @@ static struct clk *step_clk;
static struct clk *pll2_pfd2_396m_clk;
static struct device *cpu_dev;
static bool free_opp;
static struct cpufreq_frequency_table *freq_table;
static unsigned int transition_latency;
@ -207,11 +208,14 @@ static int imx6q_cpufreq_probe(struct platform_device *pdev)
goto put_reg;
}
/* Because we have added the OPPs here, we must free them */
free_opp = true;
num = dev_pm_opp_get_opp_count(cpu_dev);
if (num < 0) {
ret = num;
dev_err(cpu_dev, "no OPP table is found: %d\n", ret);
goto put_reg;
goto out_free_opp;
}
}
@ -306,6 +310,9 @@ static int imx6q_cpufreq_probe(struct platform_device *pdev)
free_freq_table:
dev_pm_opp_free_cpufreq_table(cpu_dev, &freq_table);
out_free_opp:
if (free_opp)
of_free_opp_table(cpu_dev);
put_reg:
if (!IS_ERR(arm_reg))
regulator_put(arm_reg);
@ -332,6 +339,8 @@ static int imx6q_cpufreq_remove(struct platform_device *pdev)
{
cpufreq_unregister_driver(&imx6q_cpufreq_driver);
dev_pm_opp_free_cpufreq_table(cpu_dev, &freq_table);
if (free_opp)
of_free_opp_table(cpu_dev);
regulator_put(arm_reg);
if (!IS_ERR(pu_reg))
regulator_put(pu_reg);


@ -137,6 +137,7 @@ struct cpu_defaults {
static struct pstate_adjust_policy pid_params;
static struct pstate_funcs pstate_funcs;
static int hwp_active;
struct perf_limits {
int no_turbo;
@ -244,6 +245,34 @@ static inline void update_turbo_state(void)
cpu->pstate.max_pstate == cpu->pstate.turbo_pstate);
}
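/* Convert a percentage (0-100) to the 0-255 scale used by the HWP request fields. */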
#define PCT_TO_HWP(x) (x * 255 / 100)
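/*
* Write the current min/max performance limits into MSR_HWP_REQUEST on
* every online CPU.  With turbo disabled, the maximum is capped at the
* guaranteed performance level from MSR_HWP_CAPABILITIES.
*/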
static void intel_pstate_hwp_set(void)
{
int min, max, cpu;
u64 value, freq;
get_online_cpus();
for_each_online_cpu(cpu) {
rdmsrl_on_cpu(cpu, MSR_HWP_REQUEST, &value);
min = PCT_TO_HWP(limits.min_perf_pct);
value &= ~HWP_MIN_PERF(~0L);
value |= HWP_MIN_PERF(min);
max = PCT_TO_HWP(limits.max_perf_pct);
if (limits.no_turbo) {
rdmsrl( MSR_HWP_CAPABILITIES, freq);
max = HWP_GUARANTEED_PERF(freq);
}
value &= ~HWP_MAX_PERF(~0L);
value |= HWP_MAX_PERF(max);
wrmsrl_on_cpu(cpu, MSR_HWP_REQUEST, value);
}
put_online_cpus();
}
/************************** debugfs begin ************************/
static int pid_param_set(void *data, u64 val)
{
@ -279,6 +308,8 @@ static void __init intel_pstate_debug_expose_params(void)
struct dentry *debugfs_parent;
int i = 0;
if (hwp_active)
return;
debugfs_parent = debugfs_create_dir("pstate_snb", NULL);
if (IS_ERR_OR_NULL(debugfs_parent))
return;
@ -329,8 +360,12 @@ static ssize_t store_no_turbo(struct kobject *a, struct attribute *b,
pr_warn("Turbo disabled by BIOS or unavailable on processor\n");
return -EPERM;
}
limits.no_turbo = clamp_t(int, input, 0, 1);
if (hwp_active)
intel_pstate_hwp_set();
return count;
}
@ -348,6 +383,8 @@ static ssize_t store_max_perf_pct(struct kobject *a, struct attribute *b,
limits.max_perf_pct = min(limits.max_policy_pct, limits.max_sysfs_pct);
limits.max_perf = div_fp(int_tofp(limits.max_perf_pct), int_tofp(100));
if (hwp_active)
intel_pstate_hwp_set();
return count;
}
@ -363,6 +400,8 @@ static ssize_t store_min_perf_pct(struct kobject *a, struct attribute *b,
limits.min_perf_pct = clamp_t(int, input, 0 , 100);
limits.min_perf = div_fp(int_tofp(limits.min_perf_pct), int_tofp(100));
if (hwp_active)
intel_pstate_hwp_set();
return count;
}
@ -395,8 +434,16 @@ static void __init intel_pstate_sysfs_expose_params(void)
rc = sysfs_create_group(intel_pstate_kobject, &intel_pstate_attr_group);
BUG_ON(rc);
}
/************************** sysfs end ************************/
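/* Hand P-state selection over to the hardware by setting the enable bit in MSR_PM_ENABLE. */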
static void intel_pstate_hwp_enable(void)
{
hwp_active++;
pr_info("intel_pstate HWP enabled\n");
wrmsrl( MSR_PM_ENABLE, 0x1);
}
static int byt_get_min_pstate(void)
{
u64 value;
@ -648,6 +695,14 @@ static inline void intel_pstate_sample(struct cpudata *cpu)
cpu->prev_mperf = mperf;
}
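/*
* When HWP is active the hardware picks P-states itself, so the per-CPU
* timer only resamples the performance counters every 50 ms and never
* runs the PID-based P-state adjustment.
*/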
static inline void intel_hwp_set_sample_time(struct cpudata *cpu)
{
int delay;
delay = msecs_to_jiffies(50);
mod_timer_pinned(&cpu->timer, jiffies + delay);
}
static inline void intel_pstate_set_sample_time(struct cpudata *cpu)
{
int delay;
@ -694,6 +749,14 @@ static inline void intel_pstate_adjust_busy_pstate(struct cpudata *cpu)
intel_pstate_set_pstate(cpu, cpu->pstate.current_pstate - ctl);
}
static void intel_hwp_timer_func(unsigned long __data)
{
struct cpudata *cpu = (struct cpudata *) __data;
intel_pstate_sample(cpu);
intel_hwp_set_sample_time(cpu);
}
static void intel_pstate_timer_func(unsigned long __data)
{
struct cpudata *cpu = (struct cpudata *) __data;
@ -730,6 +793,7 @@ static const struct x86_cpu_id intel_pstate_cpu_ids[] = {
ICPU(0x3f, core_params),
ICPU(0x45, core_params),
ICPU(0x46, core_params),
ICPU(0x47, core_params),
ICPU(0x4c, byt_params),
ICPU(0x4f, core_params),
ICPU(0x56, core_params),
@ -737,6 +801,11 @@ static const struct x86_cpu_id intel_pstate_cpu_ids[] = {
};
MODULE_DEVICE_TABLE(x86cpu, intel_pstate_cpu_ids);
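/*
* CPU models whose P-states may be controlled out of band by platform
* firmware; MSR_MISC_PWR_MGMT is consulted below to detect that mode.
*/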
static const struct x86_cpu_id intel_pstate_cpu_oob_ids[] = {
ICPU(0x56, core_params),
{}
};
static int intel_pstate_init_cpu(unsigned int cpunum)
{
struct cpudata *cpu;
@ -753,9 +822,14 @@ static int intel_pstate_init_cpu(unsigned int cpunum)
intel_pstate_get_cpu_pstates(cpu);
init_timer_deferrable(&cpu->timer);
cpu->timer.function = intel_pstate_timer_func;
cpu->timer.data = (unsigned long)cpu;
cpu->timer.expires = jiffies + HZ/100;
if (!hwp_active)
cpu->timer.function = intel_pstate_timer_func;
else
cpu->timer.function = intel_hwp_timer_func;
intel_pstate_busy_pid_reset(cpu);
intel_pstate_sample(cpu);
@ -792,6 +866,7 @@ static int intel_pstate_set_policy(struct cpufreq_policy *policy)
limits.no_turbo = 0;
return 0;
}
limits.min_perf_pct = (policy->min * 100) / policy->cpuinfo.max_freq;
limits.min_perf_pct = clamp_t(int, limits.min_perf_pct, 0 , 100);
limits.min_perf = div_fp(int_tofp(limits.min_perf_pct), int_tofp(100));
@ -801,6 +876,9 @@ static int intel_pstate_set_policy(struct cpufreq_policy *policy)
limits.max_perf_pct = min(limits.max_policy_pct, limits.max_sysfs_pct);
limits.max_perf = div_fp(int_tofp(limits.max_perf_pct), int_tofp(100));
if (hwp_active)
intel_pstate_hwp_set();
return 0;
}
@ -823,6 +901,9 @@ static void intel_pstate_stop_cpu(struct cpufreq_policy *policy)
pr_info("intel_pstate CPU %d exiting\n", cpu_num);
del_timer_sync(&all_cpu_data[cpu_num]->timer);
if (hwp_active)
return;
intel_pstate_set_pstate(cpu, cpu->pstate.min_pstate);
}
@ -866,6 +947,7 @@ static struct cpufreq_driver intel_pstate_driver = {
};
static int __initdata no_load;
static int __initdata no_hwp;
static int intel_pstate_msrs_not_valid(void)
{
@ -943,15 +1025,46 @@ static bool intel_pstate_no_acpi_pss(void)
return true;
}
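/* Return true if the ACPI processor object of any possible CPU provides a _PPC method. */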
static bool intel_pstate_has_acpi_ppc(void)
{
int i;
for_each_possible_cpu(i) {
struct acpi_processor *pr = per_cpu(processors, i);
if (!pr)
continue;
if (acpi_has_method(pr->handle, "_PPC"))
return true;
}
return false;
}
enum {
PSS,
PPC,
};
struct hw_vendor_info {
u16 valid;
char oem_id[ACPI_OEM_ID_SIZE];
char oem_table_id[ACPI_OEM_TABLE_ID_SIZE];
int oem_pwr_table;
};
/* Hardware vendor-specific info that has its own power management modes */
static struct hw_vendor_info vendor_info[] = {
{1, "HP ", "ProLiant"},
{1, "HP ", "ProLiant", PSS},
{1, "ORACLE", "X4-2 ", PPC},
{1, "ORACLE", "X4-2L ", PPC},
{1, "ORACLE", "X4-2B ", PPC},
{1, "ORACLE", "X3-2 ", PPC},
{1, "ORACLE", "X3-2L ", PPC},
{1, "ORACLE", "X3-2B ", PPC},
{1, "ORACLE", "X4470M2 ", PPC},
{1, "ORACLE", "X4270M3 ", PPC},
{1, "ORACLE", "X4270M2 ", PPC},
{1, "ORACLE", "X4170M2 ", PPC},
{0, "", ""},
};
@ -959,6 +1072,15 @@ static bool intel_pstate_platform_pwr_mgmt_exists(void)
{
struct acpi_table_header hdr;
struct hw_vendor_info *v_info;
const struct x86_cpu_id *id;
u64 misc_pwr;
id = x86_match_cpu(intel_pstate_cpu_oob_ids);
if (id) {
rdmsrl(MSR_MISC_PWR_MGMT, misc_pwr);
if ( misc_pwr & (1 << 8))
return true;
}
if (acpi_disabled ||
ACPI_FAILURE(acpi_get_table_header(ACPI_SIG_FADT, 0, &hdr)))
@ -966,15 +1088,21 @@ static bool intel_pstate_platform_pwr_mgmt_exists(void)
for (v_info = vendor_info; v_info->valid; v_info++) {
if (!strncmp(hdr.oem_id, v_info->oem_id, ACPI_OEM_ID_SIZE) &&
!strncmp(hdr.oem_table_id, v_info->oem_table_id, ACPI_OEM_TABLE_ID_SIZE) &&
intel_pstate_no_acpi_pss())
return true;
!strncmp(hdr.oem_table_id, v_info->oem_table_id,
ACPI_OEM_TABLE_ID_SIZE))
switch (v_info->oem_pwr_table) {
case PSS:
return intel_pstate_no_acpi_pss();
case PPC:
return intel_pstate_has_acpi_ppc();
}
}
return false;
}
#else /* CONFIG_ACPI not enabled */
static inline bool intel_pstate_platform_pwr_mgmt_exists(void) { return false; }
static inline bool intel_pstate_has_acpi_ppc(void) { return false; }
#endif /* CONFIG_ACPI */
static int __init intel_pstate_init(void)
@ -982,6 +1110,7 @@ static int __init intel_pstate_init(void)
int cpu, rc = 0;
const struct x86_cpu_id *id;
struct cpu_defaults *cpu_info;
struct cpuinfo_x86 *c = &boot_cpu_data;
if (no_load)
return -ENODEV;
@ -1011,6 +1140,9 @@ static int __init intel_pstate_init(void)
if (!all_cpu_data)
return -ENOMEM;
if (cpu_has(c,X86_FEATURE_HWP) && !no_hwp)
intel_pstate_hwp_enable();
rc = cpufreq_register_driver(&intel_pstate_driver);
if (rc)
goto out;
@ -1041,6 +1173,8 @@ static int __init intel_pstate_setup(char *str)
if (!strcmp(str, "disable"))
no_load = 1;
if (!strcmp(str, "no_hwp"))
no_hwp = 1;
return 0;
}
early_param("intel_pstate", intel_pstate_setup);


@ -0,0 +1,223 @@
/*
* CPU Frequency Scaling for Loongson 1 SoC
*
* Copyright (C) 2014 Zhang, Keguang <keguang.zhang@gmail.com>
*
* This file is licensed under the terms of the GNU General Public
* License version 2. This program is licensed "as is" without any
* warranty of any kind, whether express or implied.
*/
#include <linux/clk.h>
#include <linux/clk-provider.h>
#include <linux/cpu.h>
#include <linux/cpufreq.h>
#include <linux/delay.h>
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/slab.h>
#include <asm/mach-loongson1/cpufreq.h>
#include <asm/mach-loongson1/loongson1.h>
static struct {
struct device *dev;
struct clk *clk; /* CPU clk */
struct clk *mux_clk; /* MUX of CPU clk */
struct clk *pll_clk; /* PLL clk */
struct clk *osc_clk; /* OSC clk */
unsigned int max_freq;
unsigned int min_freq;
} ls1x_cpufreq;
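/*
* The transition notifier refreshes the MIPS per-CPU udelay calibration
* (udelay_val) after every frequency change so udelay() stays accurate.
*/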
static int ls1x_cpufreq_notifier(struct notifier_block *nb,
unsigned long val, void *data)
{
if (val == CPUFREQ_POSTCHANGE)
current_cpu_data.udelay_val = loops_per_jiffy;
return NOTIFY_OK;
}
static struct notifier_block ls1x_cpufreq_notifier_block = {
.notifier_call = ls1x_cpufreq_notifier
};
static int ls1x_cpufreq_target(struct cpufreq_policy *policy,
unsigned int index)
{
unsigned int old_freq, new_freq;
old_freq = policy->cur;
new_freq = policy->freq_table[index].frequency;
/*
* The procedure of reconfiguring CPU clk is as below.
*
* - Reparent CPU clk to OSC clk
* - Reset CPU clock (very important)
* - Reconfigure CPU DIV
* - Reparent CPU clk back to CPU DIV clk
*/
dev_dbg(ls1x_cpufreq.dev, "%u KHz --> %u KHz\n", old_freq, new_freq);
clk_set_parent(policy->clk, ls1x_cpufreq.osc_clk);
__raw_writel(__raw_readl(LS1X_CLK_PLL_DIV) | RST_CPU_EN | RST_CPU,
LS1X_CLK_PLL_DIV);
__raw_writel(__raw_readl(LS1X_CLK_PLL_DIV) & ~(RST_CPU_EN | RST_CPU),
LS1X_CLK_PLL_DIV);
clk_set_rate(ls1x_cpufreq.mux_clk, new_freq * 1000);
clk_set_parent(policy->clk, ls1x_cpufreq.mux_clk);
return 0;
}
static int ls1x_cpufreq_init(struct cpufreq_policy *policy)
{
struct cpufreq_frequency_table *freq_tbl;
unsigned int pll_freq, freq;
int steps, i, ret;
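/*
* Build the frequency table from the CPU clock divider: entry i is
* pll_freq / (i + 1); rates outside the platform's min/max limits are
* marked CPUFREQ_ENTRY_INVALID and the table ends with CPUFREQ_TABLE_END.
*/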
pll_freq = clk_get_rate(ls1x_cpufreq.pll_clk) / 1000;
steps = 1 << DIV_CPU_WIDTH;
freq_tbl = kzalloc(sizeof(*freq_tbl) * steps, GFP_KERNEL);
if (!freq_tbl) {
dev_err(ls1x_cpufreq.dev,
"failed to alloc cpufreq_frequency_table\n");
ret = -ENOMEM;
goto out;
}
for (i = 0; i < (steps - 1); i++) {
freq = pll_freq / (i + 1);
if ((freq < ls1x_cpufreq.min_freq) ||
(freq > ls1x_cpufreq.max_freq))
freq_tbl[i].frequency = CPUFREQ_ENTRY_INVALID;
else
freq_tbl[i].frequency = freq;
dev_dbg(ls1x_cpufreq.dev,
"cpufreq table: index %d: frequency %d\n", i,
freq_tbl[i].frequency);
}
freq_tbl[i].frequency = CPUFREQ_TABLE_END;
policy->clk = ls1x_cpufreq.clk;
ret = cpufreq_generic_init(policy, freq_tbl, 0);
if (ret)
kfree(freq_tbl);
out:
return ret;
}
static int ls1x_cpufreq_exit(struct cpufreq_policy *policy)
{
kfree(policy->freq_table);
return 0;
}
static struct cpufreq_driver ls1x_cpufreq_driver = {
.name = "cpufreq-ls1x",
.flags = CPUFREQ_STICKY | CPUFREQ_NEED_INITIAL_FREQ_CHECK,
.verify = cpufreq_generic_frequency_table_verify,
.target_index = ls1x_cpufreq_target,
.get = cpufreq_generic_get,
.init = ls1x_cpufreq_init,
.exit = ls1x_cpufreq_exit,
.attr = cpufreq_generic_attr,
};
static int ls1x_cpufreq_remove(struct platform_device *pdev)
{
cpufreq_unregister_notifier(&ls1x_cpufreq_notifier_block,
CPUFREQ_TRANSITION_NOTIFIER);
cpufreq_unregister_driver(&ls1x_cpufreq_driver);
return 0;
}
static int ls1x_cpufreq_probe(struct platform_device *pdev)
{
struct plat_ls1x_cpufreq *pdata = pdev->dev.platform_data;
struct clk *clk;
int ret;
if (!pdata || !pdata->clk_name || !pdata->osc_clk_name)
return -EINVAL;
ls1x_cpufreq.dev = &pdev->dev;
clk = devm_clk_get(&pdev->dev, pdata->clk_name);
if (IS_ERR(clk)) {
dev_err(ls1x_cpufreq.dev, "unable to get %s clock\n",
pdata->clk_name);
ret = PTR_ERR(clk);
goto out;
}
ls1x_cpufreq.clk = clk;
clk = clk_get_parent(clk);
if (IS_ERR(clk)) {
dev_err(ls1x_cpufreq.dev, "unable to get parent of %s clock\n",
__clk_get_name(ls1x_cpufreq.clk));
ret = PTR_ERR(clk);
goto out;
}
ls1x_cpufreq.mux_clk = clk;
clk = clk_get_parent(clk);
if (IS_ERR(clk)) {
dev_err(ls1x_cpufreq.dev, "unable to get parent of %s clock\n",
__clk_get_name(ls1x_cpufreq.mux_clk));
ret = PTR_ERR(clk);
goto out;
}
ls1x_cpufreq.pll_clk = clk;
clk = devm_clk_get(&pdev->dev, pdata->osc_clk_name);
if (IS_ERR(clk)) {
dev_err(ls1x_cpufreq.dev, "unable to get %s clock\n",
pdata->osc_clk_name);
ret = PTR_ERR(clk);
goto out;
}
ls1x_cpufreq.osc_clk = clk;
ls1x_cpufreq.max_freq = pdata->max_freq;
ls1x_cpufreq.min_freq = pdata->min_freq;
ret = cpufreq_register_driver(&ls1x_cpufreq_driver);
if (ret) {
dev_err(ls1x_cpufreq.dev,
"failed to register cpufreq driver: %d\n", ret);
goto out;
}
ret = cpufreq_register_notifier(&ls1x_cpufreq_notifier_block,
CPUFREQ_TRANSITION_NOTIFIER);
if (!ret)
goto out;
dev_err(ls1x_cpufreq.dev, "failed to register cpufreq notifier: %d\n",
ret);
cpufreq_unregister_driver(&ls1x_cpufreq_driver);
out:
return ret;
}
static struct platform_driver ls1x_cpufreq_platdrv = {
.driver = {
.name = "ls1x-cpufreq",
.owner = THIS_MODULE,
},
.probe = ls1x_cpufreq_probe,
.remove = ls1x_cpufreq_remove,
};
module_platform_driver(ls1x_cpufreq_platdrv);
MODULE_AUTHOR("Kelvin Cheung <keguang.zhang@gmail.com>");
MODULE_DESCRIPTION("Loongson 1 CPUFreq driver");
MODULE_LICENSE("GPL");


@ -603,6 +603,13 @@ static void __exit pcc_cpufreq_exit(void)
free_percpu(pcc_cpu_info);
}
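/*
* ACPI IDs of processor objects and devices; exporting them as a device
* table lets the module be auto-loaded on matching systems.
*/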
static const struct acpi_device_id processor_device_ids[] = {
{ACPI_PROCESSOR_OBJECT_HID, },
{ACPI_PROCESSOR_DEVICE_HID, },
{},
};
MODULE_DEVICE_TABLE(acpi, processor_device_ids);
MODULE_AUTHOR("Matthew Garrett, Naga Chumbalkar");
MODULE_VERSION(PCC_VERSION);
MODULE_DESCRIPTION("Processor Clocking Control interface driver");


@ -73,7 +73,6 @@ static struct cpuidle_driver arm64_idle_driver = {
.exit_latency = 1,
.target_residency = 1,
.power_usage = UINT_MAX,
.flags = CPUIDLE_FLAG_TIME_VALID,
.name = "WFI",
.desc = "ARM64 WFI",
}
@ -104,11 +103,8 @@ static int __init arm64_idle_init(void)
* reason to initialize the idle driver if only wfi is supported.
*/
ret = dt_init_idle_driver(drv, arm64_idle_state_match, 1);
if (ret <= 0) {
if (ret)
pr_err("failed to initialize idle states\n");
if (ret <= 0)
return ret ? : -ENODEV;
}
/*
* Call arch CPU operations in order to initialize
@ -122,12 +118,6 @@ static int __init arm64_idle_init(void)
}
}
ret = cpuidle_register(drv, NULL);
if (ret) {
pr_err("failed to register cpuidle driver\n");
return ret;
}
return 0;
return cpuidle_register(drv, NULL);
}
device_initcall(arm64_idle_init);


@ -43,7 +43,6 @@ static struct cpuidle_driver at91_idle_driver = {
.enter = at91_enter_idle,
.exit_latency = 10,
.target_residency = 10000,
.flags = CPUIDLE_FLAG_TIME_VALID,
.name = "RAM_SR",
.desc = "WFI and DDR Self Refresh",
},


@ -67,8 +67,7 @@ static struct cpuidle_driver bl_idle_little_driver = {
.enter = bl_enter_powerdown,
.exit_latency = 700,
.target_residency = 2500,
.flags = CPUIDLE_FLAG_TIME_VALID |
CPUIDLE_FLAG_TIMER_STOP,
.flags = CPUIDLE_FLAG_TIMER_STOP,
.name = "C1",
.desc = "ARM little-cluster power down",
},
@ -89,8 +88,7 @@ static struct cpuidle_driver bl_idle_big_driver = {
.enter = bl_enter_powerdown,
.exit_latency = 500,
.target_residency = 2000,
.flags = CPUIDLE_FLAG_TIME_VALID |
CPUIDLE_FLAG_TIMER_STOP,
.flags = CPUIDLE_FLAG_TIMER_STOP,
.name = "C1",
.desc = "ARM big-cluster power down",
},


@ -55,7 +55,6 @@ static struct cpuidle_driver calxeda_idle_driver = {
{
.name = "PG",
.desc = "Power Gate",
.flags = CPUIDLE_FLAG_TIME_VALID,
.exit_latency = 30,
.power_usage = 50,
.target_residency = 200,


@ -79,7 +79,6 @@ static struct cpuidle_driver cps_driver = {
.enter = cps_nc_enter,
.exit_latency = 200,
.target_residency = 450,
.flags = CPUIDLE_FLAG_TIME_VALID,
.name = "nc-wait",
.desc = "non-coherent MIPS wait",
},
@ -87,8 +86,7 @@ static struct cpuidle_driver cps_driver = {
.enter = cps_nc_enter,
.exit_latency = 300,
.target_residency = 700,
.flags = CPUIDLE_FLAG_TIME_VALID |
CPUIDLE_FLAG_TIMER_STOP,
.flags = CPUIDLE_FLAG_TIMER_STOP,
.name = "clock-gated",
.desc = "core clock gated",
},
@ -96,8 +94,7 @@ static struct cpuidle_driver cps_driver = {
.enter = cps_nc_enter,
.exit_latency = 600,
.target_residency = 1000,
.flags = CPUIDLE_FLAG_TIME_VALID |
CPUIDLE_FLAG_TIMER_STOP,
.flags = CPUIDLE_FLAG_TIMER_STOP,
.name = "power-gated",
.desc = "core power gated",
},


@ -47,7 +47,6 @@ static struct cpuidle_driver exynos_idle_driver = {
.enter = exynos_enter_lowpower,
.exit_latency = 300,
.target_residency = 100000,
.flags = CPUIDLE_FLAG_TIME_VALID,
.name = "C1",
.desc = "ARM power down",
},


@ -47,7 +47,6 @@ static struct cpuidle_driver kirkwood_idle_driver = {
.enter = kirkwood_enter_idle,
.exit_latency = 10,
.target_residency = 100000,
.flags = CPUIDLE_FLAG_TIME_VALID,
.name = "DDR SR",
.desc = "WFI and DDR Self Refresh",
},


@ -53,7 +53,6 @@ static struct cpuidle_driver armadaxp_idle_driver = {
.exit_latency = 10,
.power_usage = 50,
.target_residency = 100,
.flags = CPUIDLE_FLAG_TIME_VALID,
.name = "MV CPU IDLE",
.desc = "CPU power down",
},
@ -62,8 +61,7 @@ static struct cpuidle_driver armadaxp_idle_driver = {
.exit_latency = 100,
.power_usage = 5,
.target_residency = 1000,
.flags = CPUIDLE_FLAG_TIME_VALID |
MVEBU_V7_FLAG_DEEP_IDLE,
.flags = MVEBU_V7_FLAG_DEEP_IDLE,
.name = "MV CPU DEEP IDLE",
.desc = "CPU and L2 Fabric power down",
},
@ -78,8 +76,7 @@ static struct cpuidle_driver armada370_idle_driver = {
.exit_latency = 100,
.power_usage = 5,
.target_residency = 1000,
.flags = (CPUIDLE_FLAG_TIME_VALID |
MVEBU_V7_FLAG_DEEP_IDLE),
.flags = MVEBU_V7_FLAG_DEEP_IDLE,
.name = "Deep Idle",
.desc = "CPU and L2 Fabric power down",
},
@ -94,7 +91,6 @@ static struct cpuidle_driver armada38x_idle_driver = {
.exit_latency = 10,
.power_usage = 5,
.target_residency = 100,
.flags = CPUIDLE_FLAG_TIME_VALID,
.name = "Idle",
.desc = "CPU and SCU power down",
},


@ -93,7 +93,6 @@ static struct cpuidle_state powernv_states[MAX_POWERNV_IDLE_STATES] = {
{ /* Snooze */
.name = "snooze",
.desc = "snooze",
.flags = CPUIDLE_FLAG_TIME_VALID,
.exit_latency = 0,
.target_residency = 0,
.enter = &snooze_loop },
@ -202,7 +201,7 @@ static int powernv_add_idle_states(void)
/* Add NAP state */
strcpy(powernv_states[nr_idle_states].name, "Nap");
strcpy(powernv_states[nr_idle_states].desc, "Nap");
powernv_states[nr_idle_states].flags = CPUIDLE_FLAG_TIME_VALID;
powernv_states[nr_idle_states].flags = 0;
powernv_states[nr_idle_states].exit_latency =
((unsigned int)latency_ns) / 1000;
powernv_states[nr_idle_states].target_residency =
@ -215,8 +214,7 @@ static int powernv_add_idle_states(void)
/* Add FASTSLEEP state */
strcpy(powernv_states[nr_idle_states].name, "FastSleep");
strcpy(powernv_states[nr_idle_states].desc, "FastSleep");
powernv_states[nr_idle_states].flags =
CPUIDLE_FLAG_TIME_VALID | CPUIDLE_FLAG_TIMER_STOP;
powernv_states[nr_idle_states].flags = CPUIDLE_FLAG_TIMER_STOP;
powernv_states[nr_idle_states].exit_latency =
((unsigned int)latency_ns) / 1000;
powernv_states[nr_idle_states].target_residency =


@ -142,14 +142,12 @@ static struct cpuidle_state dedicated_states[] = {
{ /* Snooze */
.name = "snooze",
.desc = "snooze",
.flags = CPUIDLE_FLAG_TIME_VALID,
.exit_latency = 0,
.target_residency = 0,
.enter = &snooze_loop },
{ /* CEDE */
.name = "CEDE",
.desc = "CEDE",
.flags = CPUIDLE_FLAG_TIME_VALID,
.exit_latency = 10,
.target_residency = 100,
.enter = &dedicated_cede_loop },
@ -162,7 +160,6 @@ static struct cpuidle_state shared_states[] = {
{ /* Shared Cede */
.name = "Shared Cede",
.desc = "Shared Cede",
.flags = CPUIDLE_FLAG_TIME_VALID,
.exit_latency = 0,
.target_residency = 0,
.enter = &shared_cede_loop },
