Currently, acpi_run_osc() checks all the bits in the _OSC result code (the
first DWORD in the capabilities buffer) to detect an error condition. But
bit 0, which does not indicate any error, must be ignored.

Bit 0 is used as the query flag at _OSC invocation time. Some platforms
clear it during _OSC evaluation, but others don't. On the latter platforms,
the current acpi_run_osc() mis-detects an error when _OSC is evaluated with
the query flag set, because it doesn't ignore bit 0. Because of this,
__acpi_query_osc() always fails on such platforms, and this is the cause of
the problem that pci_osc_control_set() doesn't work since the commit
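A minimal sketch of the check being described, not the actual kernel patch: it shows how the first DWORD of the _OSC capabilities buffer can be tested for errors while masking off bit 0 (the query flag). The names OSC_QUERY_FLAG, OSC_ERROR_MASK and osc_result_has_error() are illustrative and assumed here; they are not taken from the kernel source.

```c
#include <stdint.h>

/*
 * Illustrative sketch only: error checking on the _OSC result code
 * (the first DWORD of the capabilities buffer) with bit 0 ignored.
 */
#define OSC_QUERY_FLAG  (1u << 0)          /* bit 0: query flag, never an error */
#define OSC_ERROR_MASK  (~OSC_QUERY_FLAG)  /* all remaining bits may signal errors */

/* Return nonzero if the _OSC result code reports a real error. */
static int osc_result_has_error(uint32_t result_dword)
{
	/*
	 * Checking every bit (result_dword != 0) mis-detects an error on
	 * platforms that leave the query flag set after _OSC evaluation;
	 * masking bit 0 first avoids that false failure.
	 */
	return (result_dword & OSC_ERROR_MASK) != 0;
}
```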