libnvdimm for 5.9
- Add 'Runtime Firmware Activation' support for NVDIMMs that advertise
  the relevant capability
- Misc libnvdimm and DAX cleanups

-----BEGIN PGP SIGNATURE-----

iHUEABYIAB0WIQT9vPEBxh63bwxRYEEPzq5USduLdgUCXzHodgAKCRAPzq5USduL
djTjAQD1THDmizHn16zd94ueygh/BXfN0zyeVvQH352ol7kdfQEAj2A7YJ9XBbBY
JC6/CNd+OiB9W88lLOUf3Waj1a7cUQ8=
=Q6qn
-----END PGP SIGNATURE-----

Merge tag 'libnvdimm-for-5.9' of git://git.kernel.org/pub/scm/linux/kernel/git/nvdimm/nvdimm

Pull libnvdimm updates from Vishal Verma:
 "You'd normally receive this pull request from Dan Williams, but he's
  busy watching a newborn (Congrats Dan!), so I'm watching libnvdimm
  this cycle.

  This adds a new feature in libnvdimm - 'Runtime Firmware Activation',
  and a few small cleanups and fixes in libnvdimm and DAX. I'd
  originally intended to make separate topic-based pull requests - one
  for libnvdimm, and one for DAX, but some of the DAX material fell out
  since it wasn't quite ready.

  Summary:

   - add 'Runtime Firmware Activation' support for NVDIMMs that
     advertise the relevant capability

   - misc libnvdimm and DAX cleanups"

* tag 'libnvdimm-for-5.9' of git://git.kernel.org/pub/scm/linux/kernel/git/nvdimm/nvdimm:
  libnvdimm/security: ensure sysfs poll thread woke up and fetch updated attr
  libnvdimm/security: the 'security' attr never show 'overwrite' state
  libnvdimm/security: fix a typo
  ACPI: NFIT: Fix ARS zero-sized allocation
  dax: Fix incorrect argument passed to xas_set_err()
  ACPI: NFIT: Add runtime firmware activate support
  PM, libnvdimm: Add runtime firmware activation support
  libnvdimm: Convert to DEVICE_ATTR_ADMIN_RO()
  drivers/dax: Expand lock scope to cover the use of addresses
  fs/dax: Remove unused size parameter
  dax: print error message by pr_info() in __generic_fsdax_supported()
  driver-core: Introduce DEVICE_ATTR_ADMIN_{RO,RW}
  tools/testing/nvdimm: Emulate firmware activation commands
  tools/testing/nvdimm: Prepare nfit_ctl_test() for ND_CMD_CALL emulation
  tools/testing/nvdimm: Add command debug messages
  tools/testing/nvdimm: Cleanup dimm index passing
  ACPI: NFIT: Define runtime firmware activation commands
  ACPI: NFIT: Move bus_dsm_mask out of generic nvdimm_bus_descriptor
  libnvdimm: Validate command family indices
commit 4bf5e36118
@@ -202,6 +202,25 @@ Description:
		functions. See the section named 'NVDIMM Root Device _DSMs' in
		the ACPI specification.

What:		/sys/bus/nd/devices/ndbusX/nfit/firmware_activate_noidle
Date:		Apr, 2020
KernelVersion:	v5.8
Contact:	linux-nvdimm@lists.01.org
Description:
		(RW) The Intel platform implementation of firmware activate
		support exposes an option to let the platform force idle devices
		in the system over the activation event, or trust that the OS
		will do it. The safe default is to let the platform force idle
		devices since the kernel is already in a suspend state, and on
		the chance that a driver does not properly quiesce bus-mastering
		after a suspend callback the platform will handle it. However,
		the activation might abort if, for example, platform firmware
		determines that the activation time exceeds the max PCI-E
		completion timeout. Since the platform does not know whether the
		OS is running the activation from a suspend context it aborts,
		but if the system owner trusts the driver suspend callback to be
		sufficient then 'firmware_activate_noidle' can be enabled to
		bypass the activation abort.

What:		/sys/bus/nd/devices/regionX/nfit/range_index
Date:		Jun, 2015
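The attribute accepts the usual boolean spellings ('1'/'0', 'Y'/'N'). A minimal
userspace sketch, not part of the patch, that opts into trusting the OS quiesce
path; the 'ndbus0' instance name is a placeholder for whatever bus is present
on a given system::

  /* Toggle the Intel-specific firmware_activate_noidle override. */
  #include <stdio.h>

  int main(void)
  {
  	const char *path =
  		"/sys/bus/nd/devices/ndbus0/nfit/firmware_activate_noidle";
  	FILE *f = fopen(path, "w");

  	if (!f) {
  		perror("open firmware_activate_noidle");
  		return 1;
  	}
  	/* "1"/"Y" trusts the OS suspend callbacks, "0"/"N" lets the
  	 * platform force idle devices (the safe default). */
  	fputs("1\n", f);
  	return fclose(f) ? 1 : 0;
  }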
Documentation/ABI/testing/sysfs-bus-nvdimm (new file, 2 lines)
@@ -0,0 +1,2 @@
The libnvdimm sub-system implements a common sysfs interface for
platform nvdimm resources. See Documentation/driver-api/nvdimm/.
Documentation/driver-api/nvdimm/firmware-activate.rst (new file, 86 lines)
@@ -0,0 +1,86 @@
.. SPDX-License-Identifier: GPL-2.0

==================================
NVDIMM Runtime Firmware Activation
==================================

Some persistent memory devices run a firmware locally on the device /
"DIMM" to perform tasks like media management, capacity provisioning,
and health monitoring. The process of updating that firmware typically
involves a reboot because it has implications for in-flight memory
transactions. However, reboots are disruptive and at least the Intel
persistent memory platform implementation, described by the Intel ACPI
DSM specification [1], has added support for activating firmware at
runtime.

A native sysfs interface is implemented in libnvdimm to allow platforms
to advertise and control their local runtime firmware activation
capability.

The libnvdimm bus object, ndbusX, implements an ndbusX/firmware/activate
attribute that shows the state of the firmware activation as one of 'idle',
'armed', 'overflow', and 'busy'.

- idle:
  No devices are set / armed to activate firmware

- armed:
  At least one device is armed

- busy:
  In the busy state armed devices are in the process of transitioning
  back to idle and completing an activation cycle.

- overflow:
  If the platform has a concept of incremental work needed to perform
  the activation it could be the case that too many DIMMs are armed for
  activation. In that scenario the potential for firmware activation to
  time out is indicated by the 'overflow' state.

The 'ndbusX/firmware/activate' property can be written with a value of
either 'live', or 'quiesce'. A value of 'quiesce' triggers the kernel to
run firmware activation from within the equivalent of the hibernation
'freeze' state where drivers and applications are notified to stop their
modifications of system memory. A value of 'live' attempts
firmware activation without this hibernation cycle. The
'ndbusX/firmware/activate' property will be elided completely if no
firmware activation capability is detected.

Another property 'ndbusX/firmware/capability' indicates a value of
'live' or 'quiesce', where 'live' indicates that the firmware
does not require or inflict any quiesce period on the system to update
firmware. A capability value of 'quiesce' indicates that firmware does
expect and injects a quiet period for the memory controller, but 'live'
may still be written to 'ndbusX/firmware/activate' as an override to
assume the risk of racing firmware update with in-flight device and
application activity. The 'ndbusX/firmware/capability' property will be
elided completely if no firmware activation capability is detected.

The libnvdimm memory-device / DIMM object, nmemX, implements
'nmemX/firmware/activate' and 'nmemX/firmware/result' attributes to
communicate the per-device firmware activation state. Similar to the
'ndbusX/firmware/activate' attribute, the 'nmemX/firmware/activate'
attribute indicates 'idle', 'armed', or 'busy'. The state transitions
from 'armed' to 'idle' when the system is prepared to activate firmware,
firmware staged + state set to armed, and 'ndbusX/firmware/activate' is
triggered. After that activation event the nmemX/firmware/result
attribute reflects the state of the last activation as one of:

- none:
  No runtime activation triggered since the last time the device was reset

- success:
  The last runtime activation completed successfully.

- fail:
  The last runtime activation failed for device-specific reasons.

- not_staged:
  The last runtime activation failed due to a sequencing error of the
  firmware image not being staged.

- need_reset:
  Runtime firmware activation failed, but the firmware can still be
  activated via the legacy method of power-cycling the system.

[1]: https://docs.pmem.io/persistent-memory/
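To make the documented flow concrete, here is a minimal userspace sketch, not
part of the patch, that arms a DIMM and triggers a bus-level activation. The
'ndbus0'/'nmem0' names are placeholders for the instances on a given system,
and staging the new firmware image via the existing update commands is assumed
to have already happened::

  #include <stdio.h>

  static int sysfs_write(const char *path, const char *val)
  {
  	FILE *f = fopen(path, "w");

  	if (!f)
  		return -1;
  	fputs(val, f);
  	return fclose(f);
  }

  int main(void)
  {
  	char buf[32];
  	FILE *f;

  	/* 1. Arm the DIMM whose firmware has already been staged */
  	if (sysfs_write("/sys/bus/nd/devices/nmem0/firmware/activate", "arm"))
  		return 1;

  	/* 2. Confirm the bus rolled up to 'armed' (or 'overflow') */
  	f = fopen("/sys/bus/nd/devices/ndbus0/firmware/activate", "r");
  	if (!f)
  		return 1;
  	if (fgets(buf, sizeof(buf), f))
  		printf("bus state: %s", buf);
  	fclose(f);

  	/* 3. Trigger activation; 'quiesce' uses the hibernate-freeze path,
  	 * 'live' skips it and accepts the risk described above. */
  	if (sysfs_write("/sys/bus/nd/devices/ndbus0/firmware/activate",
  			"quiesce"))
  		return 1;

  	/* 4. The per-DIMM outcome is then reported in nmem0/firmware/result */
  	return 0;
  }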
@ -73,6 +73,18 @@ const guid_t *to_nfit_uuid(enum nfit_uuids id)
|
|||
}
|
||||
EXPORT_SYMBOL(to_nfit_uuid);
|
||||
|
||||
static const guid_t *to_nfit_bus_uuid(int family)
|
||||
{
|
||||
if (WARN_ONCE(family == NVDIMM_BUS_FAMILY_NFIT,
|
||||
"only secondary bus families can be translated\n"))
|
||||
return NULL;
|
||||
/*
|
||||
* The index of bus UUIDs starts immediately following the last
|
||||
* NVDIMM/leaf family.
|
||||
*/
|
||||
return to_nfit_uuid(family + NVDIMM_FAMILY_MAX);
|
||||
}
|
||||
|
||||
static struct acpi_device *to_acpi_dev(struct acpi_nfit_desc *acpi_desc)
|
||||
{
|
||||
struct nvdimm_bus_descriptor *nd_desc = &acpi_desc->nd_desc;
|
||||
|
@ -362,24 +374,8 @@ static u8 nfit_dsm_revid(unsigned family, unsigned func)
|
|||
{
|
||||
static const u8 revid_table[NVDIMM_FAMILY_MAX+1][NVDIMM_CMD_MAX+1] = {
|
||||
[NVDIMM_FAMILY_INTEL] = {
|
||||
[NVDIMM_INTEL_GET_MODES] = 2,
|
||||
[NVDIMM_INTEL_GET_FWINFO] = 2,
|
||||
[NVDIMM_INTEL_START_FWUPDATE] = 2,
|
||||
[NVDIMM_INTEL_SEND_FWUPDATE] = 2,
|
||||
[NVDIMM_INTEL_FINISH_FWUPDATE] = 2,
|
||||
[NVDIMM_INTEL_QUERY_FWUPDATE] = 2,
|
||||
[NVDIMM_INTEL_SET_THRESHOLD] = 2,
|
||||
[NVDIMM_INTEL_INJECT_ERROR] = 2,
|
||||
[NVDIMM_INTEL_GET_SECURITY_STATE] = 2,
|
||||
[NVDIMM_INTEL_SET_PASSPHRASE] = 2,
|
||||
[NVDIMM_INTEL_DISABLE_PASSPHRASE] = 2,
|
||||
[NVDIMM_INTEL_UNLOCK_UNIT] = 2,
|
||||
[NVDIMM_INTEL_FREEZE_LOCK] = 2,
|
||||
[NVDIMM_INTEL_SECURE_ERASE] = 2,
|
||||
[NVDIMM_INTEL_OVERWRITE] = 2,
|
||||
[NVDIMM_INTEL_QUERY_OVERWRITE] = 2,
|
||||
[NVDIMM_INTEL_SET_MASTER_PASSPHRASE] = 2,
|
||||
[NVDIMM_INTEL_MASTER_SECURE_ERASE] = 2,
|
||||
[NVDIMM_INTEL_GET_MODES ...
|
||||
NVDIMM_INTEL_FW_ACTIVATE_ARM] = 2,
|
||||
},
|
||||
};
|
||||
u8 id;
|
||||
|
@ -406,7 +402,7 @@ static bool payload_dumpable(struct nvdimm *nvdimm, unsigned int func)
|
|||
}
|
||||
|
||||
static int cmd_to_func(struct nfit_mem *nfit_mem, unsigned int cmd,
|
||||
struct nd_cmd_pkg *call_pkg)
|
||||
struct nd_cmd_pkg *call_pkg, int *family)
|
||||
{
|
||||
if (call_pkg) {
|
||||
int i;
|
||||
|
@ -417,6 +413,7 @@ static int cmd_to_func(struct nfit_mem *nfit_mem, unsigned int cmd,
|
|||
for (i = 0; i < ARRAY_SIZE(call_pkg->nd_reserved2); i++)
|
||||
if (call_pkg->nd_reserved2[i])
|
||||
return -EINVAL;
|
||||
*family = call_pkg->nd_family;
|
||||
return call_pkg->nd_command;
|
||||
}
|
||||
|
||||
|
@ -450,13 +447,14 @@ int acpi_nfit_ctl(struct nvdimm_bus_descriptor *nd_desc, struct nvdimm *nvdimm,
|
|||
acpi_handle handle;
|
||||
const guid_t *guid;
|
||||
int func, rc, i;
|
||||
int family = 0;
|
||||
|
||||
if (cmd_rc)
|
||||
*cmd_rc = -EINVAL;
|
||||
|
||||
if (cmd == ND_CMD_CALL)
|
||||
call_pkg = buf;
|
||||
func = cmd_to_func(nfit_mem, cmd, call_pkg);
|
||||
func = cmd_to_func(nfit_mem, cmd, call_pkg, &family);
|
||||
if (func < 0)
|
||||
return func;
|
||||
|
||||
|
@ -478,9 +476,17 @@ int acpi_nfit_ctl(struct nvdimm_bus_descriptor *nd_desc, struct nvdimm *nvdimm,
|
|||
|
||||
cmd_name = nvdimm_bus_cmd_name(cmd);
|
||||
cmd_mask = nd_desc->cmd_mask;
|
||||
dsm_mask = nd_desc->bus_dsm_mask;
|
||||
desc = nd_cmd_bus_desc(cmd);
|
||||
if (cmd == ND_CMD_CALL && call_pkg->nd_family) {
|
||||
family = call_pkg->nd_family;
|
||||
if (!test_bit(family, &nd_desc->bus_family_mask))
|
||||
return -EINVAL;
|
||||
dsm_mask = acpi_desc->family_dsm_mask[family];
|
||||
guid = to_nfit_bus_uuid(family);
|
||||
} else {
|
||||
dsm_mask = acpi_desc->bus_dsm_mask;
|
||||
guid = to_nfit_uuid(NFIT_DEV_BUS);
|
||||
}
|
||||
desc = nd_cmd_bus_desc(cmd);
|
||||
handle = adev->handle;
|
||||
dimm_name = "bus";
|
||||
}
|
||||
|
@ -516,8 +522,8 @@ int acpi_nfit_ctl(struct nvdimm_bus_descriptor *nd_desc, struct nvdimm *nvdimm,
|
|||
in_buf.buffer.length = call_pkg->nd_size_in;
|
||||
}
|
||||
|
||||
dev_dbg(dev, "%s cmd: %d: func: %d input length: %d\n",
|
||||
dimm_name, cmd, func, in_buf.buffer.length);
|
||||
dev_dbg(dev, "%s cmd: %d: family: %d func: %d input length: %d\n",
|
||||
dimm_name, cmd, family, func, in_buf.buffer.length);
|
||||
if (payload_dumpable(nvdimm, func))
|
||||
print_hex_dump_debug("nvdimm in ", DUMP_PREFIX_OFFSET, 4, 4,
|
||||
in_buf.buffer.pointer,
|
||||
|
@ -1238,8 +1244,9 @@ static ssize_t bus_dsm_mask_show(struct device *dev,
|
|||
{
|
||||
struct nvdimm_bus *nvdimm_bus = to_nvdimm_bus(dev);
|
||||
struct nvdimm_bus_descriptor *nd_desc = to_nd_desc(nvdimm_bus);
|
||||
struct acpi_nfit_desc *acpi_desc = to_acpi_desc(nd_desc);
|
||||
|
||||
return sprintf(buf, "%#lx\n", nd_desc->bus_dsm_mask);
|
||||
return sprintf(buf, "%#lx\n", acpi_desc->bus_dsm_mask);
|
||||
}
|
||||
static struct device_attribute dev_attr_bus_dsm_mask =
|
||||
__ATTR(dsm_mask, 0444, bus_dsm_mask_show, NULL);
|
||||
|
@ -1385,8 +1392,12 @@ static umode_t nfit_visible(struct kobject *kobj, struct attribute *a, int n)
|
|||
struct device *dev = container_of(kobj, struct device, kobj);
|
||||
struct nvdimm_bus *nvdimm_bus = to_nvdimm_bus(dev);
|
||||
|
||||
if (a == &dev_attr_scrub.attr && !ars_supported(nvdimm_bus))
|
||||
return 0;
|
||||
if (a == &dev_attr_scrub.attr)
|
||||
return ars_supported(nvdimm_bus) ? a->mode : 0;
|
||||
|
||||
if (a == &dev_attr_firmware_activate_noidle.attr)
|
||||
return intel_fwa_supported(nvdimm_bus) ? a->mode : 0;
|
||||
|
||||
return a->mode;
|
||||
}
|
||||
|
||||
|
@ -1395,6 +1406,7 @@ static struct attribute *acpi_nfit_attributes[] = {
|
|||
&dev_attr_scrub.attr,
|
||||
&dev_attr_hw_error_scrub.attr,
|
||||
&dev_attr_bus_dsm_mask.attr,
|
||||
&dev_attr_firmware_activate_noidle.attr,
|
||||
NULL,
|
||||
};
|
||||
|
||||
|
@ -1823,6 +1835,7 @@ static void populate_shutdown_status(struct nfit_mem *nfit_mem)
|
|||
static int acpi_nfit_add_dimm(struct acpi_nfit_desc *acpi_desc,
|
||||
struct nfit_mem *nfit_mem, u32 device_handle)
|
||||
{
|
||||
struct nvdimm_bus_descriptor *nd_desc = &acpi_desc->nd_desc;
|
||||
struct acpi_device *adev, *adev_dimm;
|
||||
struct device *dev = acpi_desc->dev;
|
||||
unsigned long dsm_mask, label_mask;
|
||||
|
@ -1834,6 +1847,7 @@ static int acpi_nfit_add_dimm(struct acpi_nfit_desc *acpi_desc,
|
|||
/* nfit test assumes 1:1 relationship between commands and dsms */
|
||||
nfit_mem->dsm_mask = acpi_desc->dimm_cmd_force_en;
|
||||
nfit_mem->family = NVDIMM_FAMILY_INTEL;
|
||||
set_bit(NVDIMM_FAMILY_INTEL, &nd_desc->dimm_family_mask);
|
||||
|
||||
if (dcr->valid_fields & ACPI_NFIT_CONTROL_MFG_INFO_VALID)
|
||||
sprintf(nfit_mem->id, "%04x-%02x-%04x-%08x",
|
||||
|
@ -1886,10 +1900,13 @@ static int acpi_nfit_add_dimm(struct acpi_nfit_desc *acpi_desc,
|
|||
* Note, that checking for function0 (bit0) tells us if any commands
|
||||
* are reachable through this GUID.
|
||||
*/
|
||||
clear_bit(NVDIMM_FAMILY_INTEL, &nd_desc->dimm_family_mask);
|
||||
for (i = 0; i <= NVDIMM_FAMILY_MAX; i++)
|
||||
if (acpi_check_dsm(adev_dimm->handle, to_nfit_uuid(i), 1, 1))
|
||||
if (acpi_check_dsm(adev_dimm->handle, to_nfit_uuid(i), 1, 1)) {
|
||||
set_bit(i, &nd_desc->dimm_family_mask);
|
||||
if (family < 0 || i == default_dsm_family)
|
||||
family = i;
|
||||
}
|
||||
|
||||
/* limit the supported commands to those that are publicly documented */
|
||||
nfit_mem->family = family;
|
||||
|
@ -2007,6 +2024,26 @@ static const struct nvdimm_security_ops *acpi_nfit_get_security_ops(int family)
|
|||
}
|
||||
}
|
||||
|
||||
static const struct nvdimm_fw_ops *acpi_nfit_get_fw_ops(
|
||||
struct nfit_mem *nfit_mem)
|
||||
{
|
||||
unsigned long mask;
|
||||
struct acpi_nfit_desc *acpi_desc = nfit_mem->acpi_desc;
|
||||
struct nvdimm_bus_descriptor *nd_desc = &acpi_desc->nd_desc;
|
||||
|
||||
if (!nd_desc->fw_ops)
|
||||
return NULL;
|
||||
|
||||
if (nfit_mem->family != NVDIMM_FAMILY_INTEL)
|
||||
return NULL;
|
||||
|
||||
mask = nfit_mem->dsm_mask & NVDIMM_INTEL_FW_ACTIVATE_CMDMASK;
|
||||
if (mask != NVDIMM_INTEL_FW_ACTIVATE_CMDMASK)
|
||||
return NULL;
|
||||
|
||||
return intel_fw_ops;
|
||||
}
|
||||
|
||||
static int acpi_nfit_register_dimms(struct acpi_nfit_desc *acpi_desc)
|
||||
{
|
||||
struct nfit_mem *nfit_mem;
|
||||
|
@ -2083,7 +2120,8 @@ static int acpi_nfit_register_dimms(struct acpi_nfit_desc *acpi_desc)
|
|||
acpi_nfit_dimm_attribute_groups,
|
||||
flags, cmd_mask, flush ? flush->hint_count : 0,
|
||||
nfit_mem->flush_wpq, &nfit_mem->id[0],
|
||||
acpi_nfit_get_security_ops(nfit_mem->family));
|
||||
acpi_nfit_get_security_ops(nfit_mem->family),
|
||||
acpi_nfit_get_fw_ops(nfit_mem));
|
||||
if (!nvdimm)
|
||||
return -ENOMEM;
|
||||
|
||||
|
@ -2147,12 +2185,23 @@ static void acpi_nfit_init_dsms(struct acpi_nfit_desc *acpi_desc)
|
|||
{
|
||||
struct nvdimm_bus_descriptor *nd_desc = &acpi_desc->nd_desc;
|
||||
const guid_t *guid = to_nfit_uuid(NFIT_DEV_BUS);
|
||||
unsigned long dsm_mask, *mask;
|
||||
struct acpi_device *adev;
|
||||
unsigned long dsm_mask;
|
||||
int i;
|
||||
|
||||
set_bit(ND_CMD_CALL, &nd_desc->cmd_mask);
|
||||
set_bit(NVDIMM_BUS_FAMILY_NFIT, &nd_desc->bus_family_mask);
|
||||
|
||||
/* enable nfit_test to inject bus command emulation */
|
||||
if (acpi_desc->bus_cmd_force_en) {
|
||||
nd_desc->cmd_mask = acpi_desc->bus_cmd_force_en;
|
||||
nd_desc->bus_dsm_mask = acpi_desc->bus_nfit_cmd_force_en;
|
||||
mask = &nd_desc->bus_family_mask;
|
||||
if (acpi_desc->family_dsm_mask[NVDIMM_BUS_FAMILY_INTEL]) {
|
||||
set_bit(NVDIMM_BUS_FAMILY_INTEL, mask);
|
||||
nd_desc->fw_ops = intel_bus_fw_ops;
|
||||
}
|
||||
}
|
||||
|
||||
adev = to_acpi_dev(acpi_desc);
|
||||
if (!adev)
|
||||
return;
|
||||
|
@ -2160,7 +2209,6 @@ static void acpi_nfit_init_dsms(struct acpi_nfit_desc *acpi_desc)
|
|||
for (i = ND_CMD_ARS_CAP; i <= ND_CMD_CLEAR_ERROR; i++)
|
||||
if (acpi_check_dsm(adev->handle, guid, 1, 1ULL << i))
|
||||
set_bit(i, &nd_desc->cmd_mask);
|
||||
set_bit(ND_CMD_CALL, &nd_desc->cmd_mask);
|
||||
|
||||
dsm_mask =
|
||||
(1 << ND_CMD_ARS_CAP) |
|
||||
|
@ -2173,7 +2221,20 @@ static void acpi_nfit_init_dsms(struct acpi_nfit_desc *acpi_desc)
|
|||
(1 << NFIT_CMD_ARS_INJECT_GET);
|
||||
for_each_set_bit(i, &dsm_mask, BITS_PER_LONG)
|
||||
if (acpi_check_dsm(adev->handle, guid, 1, 1ULL << i))
|
||||
set_bit(i, &nd_desc->bus_dsm_mask);
|
||||
set_bit(i, &acpi_desc->bus_dsm_mask);
|
||||
|
||||
/* Enumerate allowed NVDIMM_BUS_FAMILY_INTEL commands */
|
||||
dsm_mask = NVDIMM_BUS_INTEL_FW_ACTIVATE_CMDMASK;
|
||||
guid = to_nfit_bus_uuid(NVDIMM_BUS_FAMILY_INTEL);
|
||||
mask = &acpi_desc->family_dsm_mask[NVDIMM_BUS_FAMILY_INTEL];
|
||||
for_each_set_bit(i, &dsm_mask, BITS_PER_LONG)
|
||||
if (acpi_check_dsm(adev->handle, guid, 1, 1ULL << i))
|
||||
set_bit(i, mask);
|
||||
|
||||
if (*mask == dsm_mask) {
|
||||
set_bit(NVDIMM_BUS_FAMILY_INTEL, &nd_desc->bus_family_mask);
|
||||
nd_desc->fw_ops = intel_bus_fw_ops;
|
||||
}
|
||||
}
|
||||
|
||||
static ssize_t range_index_show(struct device *dev,
|
||||
|
@ -3273,7 +3334,7 @@ static void acpi_nfit_init_ars(struct acpi_nfit_desc *acpi_desc,
|
|||
static int acpi_nfit_register_regions(struct acpi_nfit_desc *acpi_desc)
|
||||
{
|
||||
struct nfit_spa *nfit_spa;
|
||||
int rc;
|
||||
int rc, do_sched_ars = 0;
|
||||
|
||||
set_bit(ARS_VALID, &acpi_desc->scrub_flags);
|
||||
list_for_each_entry(nfit_spa, &acpi_desc->spas, list) {
|
||||
|
@ -3285,7 +3346,7 @@ static int acpi_nfit_register_regions(struct acpi_nfit_desc *acpi_desc)
|
|||
}
|
||||
}
|
||||
|
||||
list_for_each_entry(nfit_spa, &acpi_desc->spas, list)
|
||||
list_for_each_entry(nfit_spa, &acpi_desc->spas, list) {
|
||||
switch (nfit_spa_type(nfit_spa->spa)) {
|
||||
case NFIT_SPA_VOLATILE:
|
||||
case NFIT_SPA_PM:
|
||||
|
@ -3293,6 +3354,13 @@ static int acpi_nfit_register_regions(struct acpi_nfit_desc *acpi_desc)
|
|||
rc = ars_register(acpi_desc, nfit_spa);
|
||||
if (rc)
|
||||
return rc;
|
||||
|
||||
/*
|
||||
* Kick off background ARS if at least one
|
||||
* region successfully registered ARS
|
||||
*/
|
||||
if (!test_bit(ARS_FAILED, &nfit_spa->ars_state))
|
||||
do_sched_ars++;
|
||||
break;
|
||||
case NFIT_SPA_BDW:
|
||||
/* nothing to register */
|
||||
|
@ -3311,7 +3379,9 @@ static int acpi_nfit_register_regions(struct acpi_nfit_desc *acpi_desc)
|
|||
/* don't register unknown regions */
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
if (do_sched_ars)
|
||||
sched_ars(acpi_desc);
|
||||
return 0;
|
||||
}
|
||||
|
@ -3485,7 +3555,10 @@ static int __acpi_nfit_clear_to_send(struct nvdimm_bus_descriptor *nd_desc,
|
|||
return 0;
|
||||
}
|
||||
|
||||
/* prevent security commands from being issued via ioctl */
|
||||
/*
|
||||
* Prevent security and firmware activate commands from being issued via
|
||||
* ioctl.
|
||||
*/
|
||||
static int acpi_nfit_clear_to_send(struct nvdimm_bus_descriptor *nd_desc,
|
||||
struct nvdimm *nvdimm, unsigned int cmd, void *buf)
|
||||
{
|
||||
|
@ -3496,10 +3569,15 @@ static int acpi_nfit_clear_to_send(struct nvdimm_bus_descriptor *nd_desc,
|
|||
call_pkg->nd_family == NVDIMM_FAMILY_INTEL) {
|
||||
func = call_pkg->nd_command;
|
||||
if (func > NVDIMM_CMD_MAX ||
|
||||
(1 << func) & NVDIMM_INTEL_SECURITY_CMDMASK)
|
||||
(1 << func) & NVDIMM_INTEL_DENY_CMDMASK)
|
||||
return -EOPNOTSUPP;
|
||||
}
|
||||
|
||||
/* block all non-nfit bus commands */
|
||||
if (!nvdimm && cmd == ND_CMD_CALL &&
|
||||
call_pkg->nd_family != NVDIMM_BUS_FAMILY_NFIT)
|
||||
return -EOPNOTSUPP;
|
||||
|
||||
return __acpi_nfit_clear_to_send(nd_desc, nvdimm, cmd);
|
||||
}
|
||||
|
||||
|
@ -3791,6 +3869,7 @@ static __init int nfit_init(void)
|
|||
guid_parse(UUID_NFIT_DIMM_N_HPE2, &nfit_uuid[NFIT_DEV_DIMM_N_HPE2]);
|
||||
guid_parse(UUID_NFIT_DIMM_N_MSFT, &nfit_uuid[NFIT_DEV_DIMM_N_MSFT]);
|
||||
guid_parse(UUID_NFIT_DIMM_N_HYPERV, &nfit_uuid[NFIT_DEV_DIMM_N_HYPERV]);
|
||||
guid_parse(UUID_INTEL_BUS, &nfit_uuid[NFIT_BUS_INTEL]);
|
||||
|
||||
nfit_wq = create_singlethread_workqueue("nfit");
|
||||
if (!nfit_wq)
|
||||
|
|
|
@ -7,6 +7,48 @@
|
|||
#include "intel.h"
|
||||
#include "nfit.h"
|
||||
|
||||
static ssize_t firmware_activate_noidle_show(struct device *dev,
|
||||
struct device_attribute *attr, char *buf)
|
||||
{
|
||||
struct nvdimm_bus *nvdimm_bus = to_nvdimm_bus(dev);
|
||||
struct nvdimm_bus_descriptor *nd_desc = to_nd_desc(nvdimm_bus);
|
||||
struct acpi_nfit_desc *acpi_desc = to_acpi_desc(nd_desc);
|
||||
|
||||
return sprintf(buf, "%s\n", acpi_desc->fwa_noidle ? "Y" : "N");
|
||||
}
|
||||
|
||||
static ssize_t firmware_activate_noidle_store(struct device *dev,
|
||||
struct device_attribute *attr, const char *buf, size_t size)
|
||||
{
|
||||
struct nvdimm_bus *nvdimm_bus = to_nvdimm_bus(dev);
|
||||
struct nvdimm_bus_descriptor *nd_desc = to_nd_desc(nvdimm_bus);
|
||||
struct acpi_nfit_desc *acpi_desc = to_acpi_desc(nd_desc);
|
||||
ssize_t rc;
|
||||
bool val;
|
||||
|
||||
rc = kstrtobool(buf, &val);
|
||||
if (rc)
|
||||
return rc;
|
||||
if (val != acpi_desc->fwa_noidle)
|
||||
acpi_desc->fwa_cap = NVDIMM_FWA_CAP_INVALID;
|
||||
acpi_desc->fwa_noidle = val;
|
||||
return size;
|
||||
}
|
||||
DEVICE_ATTR_RW(firmware_activate_noidle);
|
||||
|
||||
bool intel_fwa_supported(struct nvdimm_bus *nvdimm_bus)
|
||||
{
|
||||
struct nvdimm_bus_descriptor *nd_desc = to_nd_desc(nvdimm_bus);
|
||||
struct acpi_nfit_desc *acpi_desc = to_acpi_desc(nd_desc);
|
||||
unsigned long *mask;
|
||||
|
||||
if (!test_bit(NVDIMM_BUS_FAMILY_INTEL, &nd_desc->bus_family_mask))
|
||||
return false;
|
||||
|
||||
mask = &acpi_desc->family_dsm_mask[NVDIMM_BUS_FAMILY_INTEL];
|
||||
return *mask == NVDIMM_BUS_INTEL_FW_ACTIVATE_CMDMASK;
|
||||
}
|
||||
|
||||
static unsigned long intel_security_flags(struct nvdimm *nvdimm,
|
||||
enum nvdimm_passphrase_type ptype)
|
||||
{
|
||||
|
@ -389,3 +431,347 @@ static const struct nvdimm_security_ops __intel_security_ops = {
|
|||
};
|
||||
|
||||
const struct nvdimm_security_ops *intel_security_ops = &__intel_security_ops;
|
||||
|
||||
static int intel_bus_fwa_businfo(struct nvdimm_bus_descriptor *nd_desc,
|
||||
struct nd_intel_bus_fw_activate_businfo *info)
|
||||
{
|
||||
struct {
|
||||
struct nd_cmd_pkg pkg;
|
||||
struct nd_intel_bus_fw_activate_businfo cmd;
|
||||
} nd_cmd = {
|
||||
.pkg = {
|
||||
.nd_command = NVDIMM_BUS_INTEL_FW_ACTIVATE_BUSINFO,
|
||||
.nd_family = NVDIMM_BUS_FAMILY_INTEL,
|
||||
.nd_size_out =
|
||||
sizeof(struct nd_intel_bus_fw_activate_businfo),
|
||||
.nd_fw_size =
|
||||
sizeof(struct nd_intel_bus_fw_activate_businfo),
|
||||
},
|
||||
};
|
||||
int rc;
|
||||
|
||||
rc = nd_desc->ndctl(nd_desc, NULL, ND_CMD_CALL, &nd_cmd, sizeof(nd_cmd),
|
||||
NULL);
|
||||
*info = nd_cmd.cmd;
|
||||
return rc;
|
||||
}
|
||||
|
||||
/* The fw_ops expect to be called with the nvdimm_bus_lock() held */
|
||||
static enum nvdimm_fwa_state intel_bus_fwa_state(
|
||||
struct nvdimm_bus_descriptor *nd_desc)
|
||||
{
|
||||
struct acpi_nfit_desc *acpi_desc = to_acpi_desc(nd_desc);
|
||||
struct nd_intel_bus_fw_activate_businfo info;
|
||||
struct device *dev = acpi_desc->dev;
|
||||
enum nvdimm_fwa_state state;
|
||||
int rc;
|
||||
|
||||
/*
|
||||
* It should not be possible for platform firmware to return
|
||||
* busy because activate is a synchronous operation. Treat it
|
||||
* similar to invalid, i.e. always refresh / poll the status.
|
||||
*/
|
||||
switch (acpi_desc->fwa_state) {
|
||||
case NVDIMM_FWA_INVALID:
|
||||
case NVDIMM_FWA_BUSY:
|
||||
break;
|
||||
default:
|
||||
/* check if capability needs to be refreshed */
|
||||
if (acpi_desc->fwa_cap == NVDIMM_FWA_CAP_INVALID)
|
||||
break;
|
||||
return acpi_desc->fwa_state;
|
||||
}
|
||||
|
||||
/* Refresh with platform firmware */
|
||||
rc = intel_bus_fwa_businfo(nd_desc, &info);
|
||||
if (rc)
|
||||
return NVDIMM_FWA_INVALID;
|
||||
|
||||
switch (info.state) {
|
||||
case ND_INTEL_FWA_IDLE:
|
||||
state = NVDIMM_FWA_IDLE;
|
||||
break;
|
||||
case ND_INTEL_FWA_BUSY:
|
||||
state = NVDIMM_FWA_BUSY;
|
||||
break;
|
||||
case ND_INTEL_FWA_ARMED:
|
||||
if (info.activate_tmo > info.max_quiesce_tmo)
|
||||
state = NVDIMM_FWA_ARM_OVERFLOW;
|
||||
else
|
||||
state = NVDIMM_FWA_ARMED;
|
||||
break;
|
||||
default:
|
||||
dev_err_once(dev, "invalid firmware activate state %d\n",
|
||||
info.state);
|
||||
return NVDIMM_FWA_INVALID;
|
||||
}
|
||||
|
||||
/*
|
||||
* Capability data is available in the same payload as state. It
|
||||
* is expected to be static.
|
||||
*/
|
||||
if (acpi_desc->fwa_cap == NVDIMM_FWA_CAP_INVALID) {
|
||||
if (info.capability & ND_INTEL_BUS_FWA_CAP_FWQUIESCE)
|
||||
acpi_desc->fwa_cap = NVDIMM_FWA_CAP_QUIESCE;
|
||||
else if (info.capability & ND_INTEL_BUS_FWA_CAP_OSQUIESCE) {
|
||||
/*
|
||||
* Skip hibernate cycle by default if platform
|
||||
* indicates that it does not need devices to be
|
||||
* quiesced.
|
||||
*/
|
||||
acpi_desc->fwa_cap = NVDIMM_FWA_CAP_LIVE;
|
||||
} else
|
||||
acpi_desc->fwa_cap = NVDIMM_FWA_CAP_NONE;
|
||||
}
|
||||
|
||||
acpi_desc->fwa_state = state;
|
||||
|
||||
return state;
|
||||
}
|
||||
|
||||
static enum nvdimm_fwa_capability intel_bus_fwa_capability(
|
||||
struct nvdimm_bus_descriptor *nd_desc)
|
||||
{
|
||||
struct acpi_nfit_desc *acpi_desc = to_acpi_desc(nd_desc);
|
||||
|
||||
if (acpi_desc->fwa_cap > NVDIMM_FWA_CAP_INVALID)
|
||||
return acpi_desc->fwa_cap;
|
||||
|
||||
if (intel_bus_fwa_state(nd_desc) > NVDIMM_FWA_INVALID)
|
||||
return acpi_desc->fwa_cap;
|
||||
|
||||
return NVDIMM_FWA_CAP_INVALID;
|
||||
}
|
||||
|
||||
static int intel_bus_fwa_activate(struct nvdimm_bus_descriptor *nd_desc)
|
||||
{
|
||||
struct acpi_nfit_desc *acpi_desc = to_acpi_desc(nd_desc);
|
||||
struct {
|
||||
struct nd_cmd_pkg pkg;
|
||||
struct nd_intel_bus_fw_activate cmd;
|
||||
} nd_cmd = {
|
||||
.pkg = {
|
||||
.nd_command = NVDIMM_BUS_INTEL_FW_ACTIVATE,
|
||||
.nd_family = NVDIMM_BUS_FAMILY_INTEL,
|
||||
.nd_size_in = sizeof(nd_cmd.cmd.iodev_state),
|
||||
.nd_size_out =
|
||||
sizeof(struct nd_intel_bus_fw_activate),
|
||||
.nd_fw_size =
|
||||
sizeof(struct nd_intel_bus_fw_activate),
|
||||
},
|
||||
/*
|
||||
* Even though activate is run from a suspended context,
|
||||
* for safety, still ask platform firmware to force
|
||||
* quiesce devices by default. Let a module
|
||||
* parameter override that policy.
|
||||
*/
|
||||
.cmd = {
|
||||
.iodev_state = acpi_desc->fwa_noidle
|
||||
? ND_INTEL_BUS_FWA_IODEV_OS_IDLE
|
||||
: ND_INTEL_BUS_FWA_IODEV_FORCE_IDLE,
|
||||
},
|
||||
};
|
||||
int rc;
|
||||
|
||||
switch (intel_bus_fwa_state(nd_desc)) {
|
||||
case NVDIMM_FWA_ARMED:
|
||||
case NVDIMM_FWA_ARM_OVERFLOW:
|
||||
break;
|
||||
default:
|
||||
return -ENXIO;
|
||||
}
|
||||
|
||||
rc = nd_desc->ndctl(nd_desc, NULL, ND_CMD_CALL, &nd_cmd, sizeof(nd_cmd),
|
||||
NULL);
|
||||
|
||||
/*
|
||||
* Whether the command succeeded, or failed, the agent checking
|
||||
* for the result needs to query the DIMMs individually.
|
||||
* Increment the activation count to invalidate all the DIMM
|
||||
* states at once (it's otherwise not possible to take
|
||||
* acpi_desc->init_mutex in this context)
|
||||
*/
|
||||
acpi_desc->fwa_state = NVDIMM_FWA_INVALID;
|
||||
acpi_desc->fwa_count++;
|
||||
|
||||
dev_dbg(acpi_desc->dev, "result: %d\n", rc);
|
||||
|
||||
return rc;
|
||||
}
|
||||
|
||||
static const struct nvdimm_bus_fw_ops __intel_bus_fw_ops = {
|
||||
.activate_state = intel_bus_fwa_state,
|
||||
.capability = intel_bus_fwa_capability,
|
||||
.activate = intel_bus_fwa_activate,
|
||||
};
|
||||
|
||||
const struct nvdimm_bus_fw_ops *intel_bus_fw_ops = &__intel_bus_fw_ops;
|
||||
|
||||
static int intel_fwa_dimminfo(struct nvdimm *nvdimm,
|
||||
struct nd_intel_fw_activate_dimminfo *info)
|
||||
{
|
||||
struct {
|
||||
struct nd_cmd_pkg pkg;
|
||||
struct nd_intel_fw_activate_dimminfo cmd;
|
||||
} nd_cmd = {
|
||||
.pkg = {
|
||||
.nd_command = NVDIMM_INTEL_FW_ACTIVATE_DIMMINFO,
|
||||
.nd_family = NVDIMM_FAMILY_INTEL,
|
||||
.nd_size_out =
|
||||
sizeof(struct nd_intel_fw_activate_dimminfo),
|
||||
.nd_fw_size =
|
||||
sizeof(struct nd_intel_fw_activate_dimminfo),
|
||||
},
|
||||
};
|
||||
int rc;
|
||||
|
||||
rc = nvdimm_ctl(nvdimm, ND_CMD_CALL, &nd_cmd, sizeof(nd_cmd), NULL);
|
||||
*info = nd_cmd.cmd;
|
||||
return rc;
|
||||
}
|
||||
|
||||
static enum nvdimm_fwa_state intel_fwa_state(struct nvdimm *nvdimm)
|
||||
{
|
||||
struct nfit_mem *nfit_mem = nvdimm_provider_data(nvdimm);
|
||||
struct acpi_nfit_desc *acpi_desc = nfit_mem->acpi_desc;
|
||||
struct nd_intel_fw_activate_dimminfo info;
|
||||
int rc;
|
||||
|
||||
/*
|
||||
* Similar to the bus state, since activate is synchronous the
|
||||
* busy state should resolve within the context of 'activate'.
|
||||
*/
|
||||
switch (nfit_mem->fwa_state) {
|
||||
case NVDIMM_FWA_INVALID:
|
||||
case NVDIMM_FWA_BUSY:
|
||||
break;
|
||||
default:
|
||||
/* If no activations occurred the old state is still valid */
|
||||
if (nfit_mem->fwa_count == acpi_desc->fwa_count)
|
||||
return nfit_mem->fwa_state;
|
||||
}
|
||||
|
||||
rc = intel_fwa_dimminfo(nvdimm, &info);
|
||||
if (rc)
|
||||
return NVDIMM_FWA_INVALID;
|
||||
|
||||
switch (info.state) {
|
||||
case ND_INTEL_FWA_IDLE:
|
||||
nfit_mem->fwa_state = NVDIMM_FWA_IDLE;
|
||||
break;
|
||||
case ND_INTEL_FWA_BUSY:
|
||||
nfit_mem->fwa_state = NVDIMM_FWA_BUSY;
|
||||
break;
|
||||
case ND_INTEL_FWA_ARMED:
|
||||
nfit_mem->fwa_state = NVDIMM_FWA_ARMED;
|
||||
break;
|
||||
default:
|
||||
nfit_mem->fwa_state = NVDIMM_FWA_INVALID;
|
||||
break;
|
||||
}
|
||||
|
||||
switch (info.result) {
|
||||
case ND_INTEL_DIMM_FWA_NONE:
|
||||
nfit_mem->fwa_result = NVDIMM_FWA_RESULT_NONE;
|
||||
break;
|
||||
case ND_INTEL_DIMM_FWA_SUCCESS:
|
||||
nfit_mem->fwa_result = NVDIMM_FWA_RESULT_SUCCESS;
|
||||
break;
|
||||
case ND_INTEL_DIMM_FWA_NOTSTAGED:
|
||||
nfit_mem->fwa_result = NVDIMM_FWA_RESULT_NOTSTAGED;
|
||||
break;
|
||||
case ND_INTEL_DIMM_FWA_NEEDRESET:
|
||||
nfit_mem->fwa_result = NVDIMM_FWA_RESULT_NEEDRESET;
|
||||
break;
|
||||
case ND_INTEL_DIMM_FWA_MEDIAFAILED:
|
||||
case ND_INTEL_DIMM_FWA_ABORT:
|
||||
case ND_INTEL_DIMM_FWA_NOTSUPP:
|
||||
case ND_INTEL_DIMM_FWA_ERROR:
|
||||
default:
|
||||
nfit_mem->fwa_result = NVDIMM_FWA_RESULT_FAIL;
|
||||
break;
|
||||
}
|
||||
|
||||
nfit_mem->fwa_count = acpi_desc->fwa_count;
|
||||
|
||||
return nfit_mem->fwa_state;
|
||||
}
|
||||
|
||||
static enum nvdimm_fwa_result intel_fwa_result(struct nvdimm *nvdimm)
|
||||
{
|
||||
struct nfit_mem *nfit_mem = nvdimm_provider_data(nvdimm);
|
||||
struct acpi_nfit_desc *acpi_desc = nfit_mem->acpi_desc;
|
||||
|
||||
if (nfit_mem->fwa_count == acpi_desc->fwa_count
|
||||
&& nfit_mem->fwa_result > NVDIMM_FWA_RESULT_INVALID)
|
||||
return nfit_mem->fwa_result;
|
||||
|
||||
if (intel_fwa_state(nvdimm) > NVDIMM_FWA_INVALID)
|
||||
return nfit_mem->fwa_result;
|
||||
|
||||
return NVDIMM_FWA_RESULT_INVALID;
|
||||
}
|
||||
|
||||
static int intel_fwa_arm(struct nvdimm *nvdimm, enum nvdimm_fwa_trigger arm)
|
||||
{
|
||||
struct nfit_mem *nfit_mem = nvdimm_provider_data(nvdimm);
|
||||
struct acpi_nfit_desc *acpi_desc = nfit_mem->acpi_desc;
|
||||
struct {
|
||||
struct nd_cmd_pkg pkg;
|
||||
struct nd_intel_fw_activate_arm cmd;
|
||||
} nd_cmd = {
|
||||
.pkg = {
|
||||
.nd_command = NVDIMM_INTEL_FW_ACTIVATE_ARM,
|
||||
.nd_family = NVDIMM_FAMILY_INTEL,
|
||||
.nd_size_in = sizeof(nd_cmd.cmd.activate_arm),
|
||||
.nd_size_out =
|
||||
sizeof(struct nd_intel_fw_activate_arm),
|
||||
.nd_fw_size =
|
||||
sizeof(struct nd_intel_fw_activate_arm),
|
||||
},
|
||||
.cmd = {
|
||||
.activate_arm = arm == NVDIMM_FWA_ARM
|
||||
? ND_INTEL_DIMM_FWA_ARM
|
||||
: ND_INTEL_DIMM_FWA_DISARM,
|
||||
},
|
||||
};
|
||||
int rc;
|
||||
|
||||
switch (intel_fwa_state(nvdimm)) {
|
||||
case NVDIMM_FWA_INVALID:
|
||||
return -ENXIO;
|
||||
case NVDIMM_FWA_BUSY:
|
||||
return -EBUSY;
|
||||
case NVDIMM_FWA_IDLE:
|
||||
if (arm == NVDIMM_FWA_DISARM)
|
||||
return 0;
|
||||
break;
|
||||
case NVDIMM_FWA_ARMED:
|
||||
if (arm == NVDIMM_FWA_ARM)
|
||||
return 0;
|
||||
break;
|
||||
default:
|
||||
return -ENXIO;
|
||||
}
|
||||
|
||||
/*
|
||||
* Invalidate the bus-level state, now that we're committed to
|
||||
* changing the 'arm' state.
|
||||
*/
|
||||
acpi_desc->fwa_state = NVDIMM_FWA_INVALID;
|
||||
nfit_mem->fwa_state = NVDIMM_FWA_INVALID;
|
||||
|
||||
rc = nvdimm_ctl(nvdimm, ND_CMD_CALL, &nd_cmd, sizeof(nd_cmd), NULL);
|
||||
|
||||
dev_dbg(acpi_desc->dev, "%s result: %d\n", arm == NVDIMM_FWA_ARM
|
||||
? "arm" : "disarm", rc);
|
||||
return rc;
|
||||
}
|
||||
|
||||
static const struct nvdimm_fw_ops __intel_fw_ops = {
|
||||
.activate_state = intel_fwa_state,
|
||||
.activate_result = intel_fwa_result,
|
||||
.arm = intel_fwa_arm,
|
||||
};
|
||||
|
||||
const struct nvdimm_fw_ops *intel_fw_ops = &__intel_fw_ops;
|
||||
|
|
|
@ -111,4 +111,65 @@ struct nd_intel_master_secure_erase {
|
|||
u8 passphrase[ND_INTEL_PASSPHRASE_SIZE];
|
||||
u32 status;
|
||||
} __packed;
|
||||
|
||||
#define ND_INTEL_FWA_IDLE 0
|
||||
#define ND_INTEL_FWA_ARMED 1
|
||||
#define ND_INTEL_FWA_BUSY 2
|
||||
|
||||
#define ND_INTEL_DIMM_FWA_NONE 0
|
||||
#define ND_INTEL_DIMM_FWA_NOTSTAGED 1
|
||||
#define ND_INTEL_DIMM_FWA_SUCCESS 2
|
||||
#define ND_INTEL_DIMM_FWA_NEEDRESET 3
|
||||
#define ND_INTEL_DIMM_FWA_MEDIAFAILED 4
|
||||
#define ND_INTEL_DIMM_FWA_ABORT 5
|
||||
#define ND_INTEL_DIMM_FWA_NOTSUPP 6
|
||||
#define ND_INTEL_DIMM_FWA_ERROR 7
|
||||
|
||||
struct nd_intel_fw_activate_dimminfo {
|
||||
u32 status;
|
||||
u16 result;
|
||||
u8 state;
|
||||
u8 reserved[7];
|
||||
} __packed;
|
||||
|
||||
#define ND_INTEL_DIMM_FWA_ARM 1
|
||||
#define ND_INTEL_DIMM_FWA_DISARM 0
|
||||
|
||||
struct nd_intel_fw_activate_arm {
|
||||
u8 activate_arm;
|
||||
u32 status;
|
||||
} __packed;
|
||||
|
||||
/* Root device command payloads */
|
||||
#define ND_INTEL_BUS_FWA_CAP_FWQUIESCE (1 << 0)
|
||||
#define ND_INTEL_BUS_FWA_CAP_OSQUIESCE (1 << 1)
|
||||
#define ND_INTEL_BUS_FWA_CAP_RESET (1 << 2)
|
||||
|
||||
struct nd_intel_bus_fw_activate_businfo {
|
||||
u32 status;
|
||||
u16 reserved;
|
||||
u8 state;
|
||||
u8 capability;
|
||||
u64 activate_tmo;
|
||||
u64 cpu_quiesce_tmo;
|
||||
u64 io_quiesce_tmo;
|
||||
u64 max_quiesce_tmo;
|
||||
} __packed;
|
||||
|
||||
#define ND_INTEL_BUS_FWA_STATUS_NOARM (6 | 1 << 16)
|
||||
#define ND_INTEL_BUS_FWA_STATUS_BUSY (6 | 2 << 16)
|
||||
#define ND_INTEL_BUS_FWA_STATUS_NOFW (6 | 3 << 16)
|
||||
#define ND_INTEL_BUS_FWA_STATUS_TMO (6 | 4 << 16)
|
||||
#define ND_INTEL_BUS_FWA_STATUS_NOIDLE (6 | 5 << 16)
|
||||
#define ND_INTEL_BUS_FWA_STATUS_ABORT (6 | 6 << 16)
|
||||
|
||||
#define ND_INTEL_BUS_FWA_IODEV_FORCE_IDLE (0)
|
||||
#define ND_INTEL_BUS_FWA_IODEV_OS_IDLE (1)
|
||||
struct nd_intel_bus_fw_activate {
|
||||
u8 iodev_state;
|
||||
u32 status;
|
||||
} __packed;
|
||||
|
||||
extern const struct nvdimm_fw_ops *intel_fw_ops;
|
||||
extern const struct nvdimm_bus_fw_ops *intel_bus_fw_ops;
|
||||
#endif
|
||||
|
|
|
@ -18,6 +18,7 @@
|
|||
|
||||
/* https://pmem.io/documents/NVDIMM_DSM_Interface-V1.6.pdf */
|
||||
#define UUID_NFIT_DIMM "4309ac30-0d11-11e4-9191-0800200c9a66"
|
||||
#define UUID_INTEL_BUS "c7d8acd4-2df8-4b82-9f65-a325335af149"
|
||||
|
||||
/* https://github.com/HewlettPackard/hpe-nvm/blob/master/Documentation/ */
|
||||
#define UUID_NFIT_DIMM_N_HPE1 "9002c334-acf3-4c0e-9642-a235f0d53bc6"
|
||||
|
@ -33,7 +34,6 @@
|
|||
| ACPI_NFIT_MEM_RESTORE_FAILED | ACPI_NFIT_MEM_FLUSH_FAILED \
|
||||
| ACPI_NFIT_MEM_NOT_ARMED | ACPI_NFIT_MEM_MAP_FAILED)
|
||||
|
||||
#define NVDIMM_FAMILY_MAX NVDIMM_FAMILY_HYPERV
|
||||
#define NVDIMM_CMD_MAX 31
|
||||
|
||||
#define NVDIMM_STANDARD_CMDMASK \
|
||||
|
@ -66,6 +66,13 @@ enum nvdimm_family_cmds {
|
|||
NVDIMM_INTEL_QUERY_OVERWRITE = 26,
|
||||
NVDIMM_INTEL_SET_MASTER_PASSPHRASE = 27,
|
||||
NVDIMM_INTEL_MASTER_SECURE_ERASE = 28,
|
||||
NVDIMM_INTEL_FW_ACTIVATE_DIMMINFO = 29,
|
||||
NVDIMM_INTEL_FW_ACTIVATE_ARM = 30,
|
||||
};
|
||||
|
||||
enum nvdimm_bus_family_cmds {
|
||||
NVDIMM_BUS_INTEL_FW_ACTIVATE_BUSINFO = 1,
|
||||
NVDIMM_BUS_INTEL_FW_ACTIVATE = 2,
|
||||
};
|
||||
|
||||
#define NVDIMM_INTEL_SECURITY_CMDMASK \
|
||||
|
@ -76,13 +83,22 @@ enum nvdimm_family_cmds {
|
|||
| 1 << NVDIMM_INTEL_SET_MASTER_PASSPHRASE \
|
||||
| 1 << NVDIMM_INTEL_MASTER_SECURE_ERASE)
|
||||
|
||||
#define NVDIMM_INTEL_FW_ACTIVATE_CMDMASK \
|
||||
(1 << NVDIMM_INTEL_FW_ACTIVATE_DIMMINFO | 1 << NVDIMM_INTEL_FW_ACTIVATE_ARM)
|
||||
|
||||
#define NVDIMM_BUS_INTEL_FW_ACTIVATE_CMDMASK \
|
||||
(1 << NVDIMM_BUS_INTEL_FW_ACTIVATE_BUSINFO | 1 << NVDIMM_BUS_INTEL_FW_ACTIVATE)
|
||||
|
||||
#define NVDIMM_INTEL_CMDMASK \
|
||||
(NVDIMM_STANDARD_CMDMASK | 1 << NVDIMM_INTEL_GET_MODES \
|
||||
| 1 << NVDIMM_INTEL_GET_FWINFO | 1 << NVDIMM_INTEL_START_FWUPDATE \
|
||||
| 1 << NVDIMM_INTEL_SEND_FWUPDATE | 1 << NVDIMM_INTEL_FINISH_FWUPDATE \
|
||||
| 1 << NVDIMM_INTEL_QUERY_FWUPDATE | 1 << NVDIMM_INTEL_SET_THRESHOLD \
|
||||
| 1 << NVDIMM_INTEL_INJECT_ERROR | 1 << NVDIMM_INTEL_LATCH_SHUTDOWN \
|
||||
| NVDIMM_INTEL_SECURITY_CMDMASK)
|
||||
| NVDIMM_INTEL_SECURITY_CMDMASK | NVDIMM_INTEL_FW_ACTIVATE_CMDMASK)
|
||||
|
||||
#define NVDIMM_INTEL_DENY_CMDMASK \
|
||||
(NVDIMM_INTEL_SECURITY_CMDMASK | NVDIMM_INTEL_FW_ACTIVATE_CMDMASK)
|
||||
|
||||
enum nfit_uuids {
|
||||
/* for simplicity alias the uuid index with the family id */
|
||||
|
@ -91,6 +107,11 @@ enum nfit_uuids {
|
|||
NFIT_DEV_DIMM_N_HPE2 = NVDIMM_FAMILY_HPE2,
|
||||
NFIT_DEV_DIMM_N_MSFT = NVDIMM_FAMILY_MSFT,
|
||||
NFIT_DEV_DIMM_N_HYPERV = NVDIMM_FAMILY_HYPERV,
|
||||
/*
|
||||
* to_nfit_bus_uuid() expects to translate bus uuid family ids
|
||||
* to a UUID index using NVDIMM_FAMILY_MAX as an offset
|
||||
*/
|
||||
NFIT_BUS_INTEL = NVDIMM_FAMILY_MAX + NVDIMM_BUS_FAMILY_INTEL,
|
||||
NFIT_SPA_VOLATILE,
|
||||
NFIT_SPA_PM,
|
||||
NFIT_SPA_DCR,
|
||||
|
@ -199,6 +220,9 @@ struct nfit_mem {
|
|||
struct list_head list;
|
||||
struct acpi_device *adev;
|
||||
struct acpi_nfit_desc *acpi_desc;
|
||||
enum nvdimm_fwa_state fwa_state;
|
||||
enum nvdimm_fwa_result fwa_result;
|
||||
int fwa_count;
|
||||
char id[NFIT_DIMM_ID_LEN+1];
|
||||
struct resource *flush_wpq;
|
||||
unsigned long dsm_mask;
|
||||
|
@ -238,11 +262,17 @@ struct acpi_nfit_desc {
|
|||
unsigned long scrub_flags;
|
||||
unsigned long dimm_cmd_force_en;
|
||||
unsigned long bus_cmd_force_en;
|
||||
unsigned long bus_nfit_cmd_force_en;
|
||||
unsigned long bus_dsm_mask;
|
||||
unsigned long family_dsm_mask[NVDIMM_BUS_FAMILY_MAX + 1];
|
||||
unsigned int platform_cap;
|
||||
unsigned int scrub_tmo;
|
||||
int (*blk_do_io)(struct nd_blk_region *ndbr, resource_size_t dpa,
|
||||
void *iobuf, u64 len, int rw);
|
||||
enum nvdimm_fwa_state fwa_state;
|
||||
enum nvdimm_fwa_capability fwa_cap;
|
||||
int fwa_count;
|
||||
bool fwa_noidle;
|
||||
bool fwa_nosuspend;
|
||||
};
|
||||
|
||||
enum scrub_mode {
|
||||
|
@ -345,4 +375,6 @@ void __acpi_nvdimm_notify(struct device *dev, u32 event);
|
|||
int acpi_nfit_ctl(struct nvdimm_bus_descriptor *nd_desc, struct nvdimm *nvdimm,
|
||||
unsigned int cmd, void *buf, unsigned int buf_len, int *cmd_rc);
|
||||
void acpi_nfit_desc_init(struct acpi_nfit_desc *acpi_desc, struct device *dev);
|
||||
bool intel_fwa_supported(struct nvdimm_bus *nvdimm_bus);
|
||||
extern struct device_attribute dev_attr_firmware_activate_noidle;
|
||||
#endif /* __NFIT_H__ */
|
||||
|
|
|
@ -80,14 +80,14 @@ bool __generic_fsdax_supported(struct dax_device *dax_dev,
|
|||
int err, id;
|
||||
|
||||
if (blocksize != PAGE_SIZE) {
|
||||
pr_debug("%s: error: unsupported blocksize for dax\n",
|
||||
pr_info("%s: error: unsupported blocksize for dax\n",
|
||||
bdevname(bdev, buf));
|
||||
return false;
|
||||
}
|
||||
|
||||
err = bdev_dax_pgoff(bdev, start, PAGE_SIZE, &pgoff);
|
||||
if (err) {
|
||||
pr_debug("%s: error: unaligned partition for dax\n",
|
||||
pr_info("%s: error: unaligned partition for dax\n",
|
||||
bdevname(bdev, buf));
|
||||
return false;
|
||||
}
|
||||
|
@ -95,7 +95,7 @@ bool __generic_fsdax_supported(struct dax_device *dax_dev,
|
|||
last_page = PFN_DOWN((start + sectors - 1) * 512) * PAGE_SIZE / 512;
|
||||
err = bdev_dax_pgoff(bdev, last_page, PAGE_SIZE, &pgoff_end);
|
||||
if (err) {
|
||||
pr_debug("%s: error: unaligned partition for dax\n",
|
||||
pr_info("%s: error: unaligned partition for dax\n",
|
||||
bdevname(bdev, buf));
|
||||
return false;
|
||||
}
|
||||
|
@ -103,11 +103,11 @@ bool __generic_fsdax_supported(struct dax_device *dax_dev,
|
|||
id = dax_read_lock();
|
||||
len = dax_direct_access(dax_dev, pgoff, 1, &kaddr, &pfn);
|
||||
len2 = dax_direct_access(dax_dev, pgoff_end, 1, &end_kaddr, &end_pfn);
|
||||
dax_read_unlock(id);
|
||||
|
||||
if (len < 1 || len2 < 1) {
|
||||
pr_debug("%s: error: dax access failed (%ld)\n",
|
||||
pr_info("%s: error: dax access failed (%ld)\n",
|
||||
bdevname(bdev, buf), len < 1 ? len : len2);
|
||||
dax_read_unlock(id);
|
||||
return false;
|
||||
}
|
||||
|
||||
|
@ -137,9 +137,10 @@ bool __generic_fsdax_supported(struct dax_device *dax_dev,
|
|||
put_dev_pagemap(end_pgmap);
|
||||
|
||||
}
|
||||
dax_read_unlock(id);
|
||||
|
||||
if (!dax_enabled) {
|
||||
pr_debug("%s: error: dax support not enabled\n",
|
||||
pr_info("%s: error: dax support not enabled\n",
|
||||
bdevname(bdev, buf));
|
||||
return false;
|
||||
}
|
||||
|
|
|
@ -1037,9 +1037,25 @@ static int __nd_ioctl(struct nvdimm_bus *nvdimm_bus, struct nvdimm *nvdimm,
|
|||
dimm_name = "bus";
|
||||
}
|
||||
|
||||
/* Validate command family support against bus declared support */
|
||||
if (cmd == ND_CMD_CALL) {
|
||||
unsigned long *mask;
|
||||
|
||||
if (copy_from_user(&pkg, p, sizeof(pkg)))
|
||||
return -EFAULT;
|
||||
|
||||
if (nvdimm) {
|
||||
if (pkg.nd_family > NVDIMM_FAMILY_MAX)
|
||||
return -EINVAL;
|
||||
mask = &nd_desc->dimm_family_mask;
|
||||
} else {
|
||||
if (pkg.nd_family > NVDIMM_BUS_FAMILY_MAX)
|
||||
return -EINVAL;
|
||||
mask = &nd_desc->bus_family_mask;
|
||||
}
|
||||
|
||||
if (!test_bit(pkg.nd_family, mask))
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
if (!desc ||
|
||||
|
|
|
@ -4,6 +4,7 @@
|
|||
*/
|
||||
#include <linux/libnvdimm.h>
|
||||
#include <linux/badblocks.h>
|
||||
#include <linux/suspend.h>
|
||||
#include <linux/export.h>
|
||||
#include <linux/module.h>
|
||||
#include <linux/blkdev.h>
|
||||
|
@ -389,8 +390,156 @@ static const struct attribute_group nvdimm_bus_attribute_group = {
|
|||
.attrs = nvdimm_bus_attributes,
|
||||
};
|
||||
|
||||
static ssize_t capability_show(struct device *dev,
|
||||
struct device_attribute *attr, char *buf)
|
||||
{
|
||||
struct nvdimm_bus *nvdimm_bus = to_nvdimm_bus(dev);
|
||||
struct nvdimm_bus_descriptor *nd_desc = nvdimm_bus->nd_desc;
|
||||
enum nvdimm_fwa_capability cap;
|
||||
|
||||
if (!nd_desc->fw_ops)
|
||||
return -EOPNOTSUPP;
|
||||
|
||||
nvdimm_bus_lock(dev);
|
||||
cap = nd_desc->fw_ops->capability(nd_desc);
|
||||
nvdimm_bus_unlock(dev);
|
||||
|
||||
switch (cap) {
|
||||
case NVDIMM_FWA_CAP_QUIESCE:
|
||||
return sprintf(buf, "quiesce\n");
|
||||
case NVDIMM_FWA_CAP_LIVE:
|
||||
return sprintf(buf, "live\n");
|
||||
default:
|
||||
return -EOPNOTSUPP;
|
||||
}
|
||||
}
|
||||
|
||||
static DEVICE_ATTR_RO(capability);
|
||||
|
||||
static ssize_t activate_show(struct device *dev,
|
||||
struct device_attribute *attr, char *buf)
|
||||
{
|
||||
struct nvdimm_bus *nvdimm_bus = to_nvdimm_bus(dev);
|
||||
struct nvdimm_bus_descriptor *nd_desc = nvdimm_bus->nd_desc;
|
||||
enum nvdimm_fwa_capability cap;
|
||||
enum nvdimm_fwa_state state;
|
||||
|
||||
if (!nd_desc->fw_ops)
|
||||
return -EOPNOTSUPP;
|
||||
|
||||
nvdimm_bus_lock(dev);
|
||||
cap = nd_desc->fw_ops->capability(nd_desc);
|
||||
state = nd_desc->fw_ops->activate_state(nd_desc);
|
||||
nvdimm_bus_unlock(dev);
|
||||
|
||||
if (cap < NVDIMM_FWA_CAP_QUIESCE)
|
||||
return -EOPNOTSUPP;
|
||||
|
||||
switch (state) {
|
||||
case NVDIMM_FWA_IDLE:
|
||||
return sprintf(buf, "idle\n");
|
||||
case NVDIMM_FWA_BUSY:
|
||||
return sprintf(buf, "busy\n");
|
||||
case NVDIMM_FWA_ARMED:
|
||||
return sprintf(buf, "armed\n");
|
||||
case NVDIMM_FWA_ARM_OVERFLOW:
|
||||
return sprintf(buf, "overflow\n");
|
||||
default:
|
||||
return -ENXIO;
|
||||
}
|
||||
}
|
||||
|
||||
static int exec_firmware_activate(void *data)
|
||||
{
|
||||
struct nvdimm_bus_descriptor *nd_desc = data;
|
||||
|
||||
return nd_desc->fw_ops->activate(nd_desc);
|
||||
}
|
||||
|
||||
static ssize_t activate_store(struct device *dev,
|
||||
struct device_attribute *attr, const char *buf, size_t len)
|
||||
{
|
||||
struct nvdimm_bus *nvdimm_bus = to_nvdimm_bus(dev);
|
||||
struct nvdimm_bus_descriptor *nd_desc = nvdimm_bus->nd_desc;
|
||||
enum nvdimm_fwa_state state;
|
||||
bool quiesce;
|
||||
ssize_t rc;
|
||||
|
||||
if (!nd_desc->fw_ops)
|
||||
return -EOPNOTSUPP;
|
||||
|
||||
if (sysfs_streq(buf, "live"))
|
||||
quiesce = false;
|
||||
else if (sysfs_streq(buf, "quiesce"))
|
||||
quiesce = true;
|
||||
else
|
||||
return -EINVAL;
|
||||
|
||||
nvdimm_bus_lock(dev);
|
||||
state = nd_desc->fw_ops->activate_state(nd_desc);
|
||||
|
||||
switch (state) {
|
||||
case NVDIMM_FWA_BUSY:
|
||||
rc = -EBUSY;
|
||||
break;
|
||||
case NVDIMM_FWA_ARMED:
|
||||
case NVDIMM_FWA_ARM_OVERFLOW:
|
||||
if (quiesce)
|
||||
rc = hibernate_quiet_exec(exec_firmware_activate, nd_desc);
|
||||
else
|
||||
rc = nd_desc->fw_ops->activate(nd_desc);
|
||||
break;
|
||||
case NVDIMM_FWA_IDLE:
|
||||
default:
|
||||
rc = -ENXIO;
|
||||
}
|
||||
nvdimm_bus_unlock(dev);
|
||||
|
||||
if (rc == 0)
|
||||
rc = len;
|
||||
return rc;
|
||||
}
|
||||
|
||||
static DEVICE_ATTR_ADMIN_RW(activate);
|
||||
|
||||
static umode_t nvdimm_bus_firmware_visible(struct kobject *kobj, struct attribute *a, int n)
|
||||
{
|
||||
struct device *dev = container_of(kobj, typeof(*dev), kobj);
|
||||
struct nvdimm_bus *nvdimm_bus = to_nvdimm_bus(dev);
|
||||
struct nvdimm_bus_descriptor *nd_desc = nvdimm_bus->nd_desc;
|
||||
enum nvdimm_fwa_capability cap;
|
||||
|
||||
/*
|
||||
* Both 'activate' and 'capability' disappear when no ops
|
||||
* detected, or a negative capability is indicated.
|
||||
*/
|
||||
if (!nd_desc->fw_ops)
|
||||
return 0;
|
||||
|
||||
nvdimm_bus_lock(dev);
|
||||
cap = nd_desc->fw_ops->capability(nd_desc);
|
||||
nvdimm_bus_unlock(dev);
|
||||
|
||||
if (cap < NVDIMM_FWA_CAP_QUIESCE)
|
||||
return 0;
|
||||
|
||||
return a->mode;
|
||||
}
|
||||
static struct attribute *nvdimm_bus_firmware_attributes[] = {
|
||||
&dev_attr_activate.attr,
|
||||
&dev_attr_capability.attr,
|
||||
NULL,
|
||||
};
|
||||
|
||||
static const struct attribute_group nvdimm_bus_firmware_attribute_group = {
|
||||
.name = "firmware",
|
||||
.attrs = nvdimm_bus_firmware_attributes,
|
||||
.is_visible = nvdimm_bus_firmware_visible,
|
||||
};
|
||||
|
||||
const struct attribute_group *nvdimm_bus_attribute_groups[] = {
|
||||
&nvdimm_bus_attribute_group,
|
||||
&nvdimm_bus_firmware_attribute_group,
|
||||
NULL,
|
||||
};
|
||||
|
||||
|
|
|
@ -363,14 +363,14 @@ __weak ssize_t security_show(struct device *dev,
|
|||
{
|
||||
struct nvdimm *nvdimm = to_nvdimm(dev);
|
||||
|
||||
if (test_bit(NVDIMM_SECURITY_OVERWRITE, &nvdimm->sec.flags))
|
||||
return sprintf(buf, "overwrite\n");
|
||||
if (test_bit(NVDIMM_SECURITY_DISABLED, &nvdimm->sec.flags))
|
||||
return sprintf(buf, "disabled\n");
|
||||
if (test_bit(NVDIMM_SECURITY_UNLOCKED, &nvdimm->sec.flags))
|
||||
return sprintf(buf, "unlocked\n");
|
||||
if (test_bit(NVDIMM_SECURITY_LOCKED, &nvdimm->sec.flags))
|
||||
return sprintf(buf, "locked\n");
|
||||
if (test_bit(NVDIMM_SECURITY_OVERWRITE, &nvdimm->sec.flags))
|
||||
return sprintf(buf, "overwrite\n");
|
||||
return -ENOTTY;
|
||||
}
|
||||
|
||||
|
@ -446,9 +446,124 @@ static const struct attribute_group nvdimm_attribute_group = {
|
|||
.is_visible = nvdimm_visible,
|
||||
};
|
||||
|
||||
static ssize_t result_show(struct device *dev, struct device_attribute *attr, char *buf)
|
||||
{
|
||||
struct nvdimm *nvdimm = to_nvdimm(dev);
|
||||
enum nvdimm_fwa_result result;
|
||||
|
||||
if (!nvdimm->fw_ops)
|
||||
return -EOPNOTSUPP;
|
||||
|
||||
nvdimm_bus_lock(dev);
|
||||
result = nvdimm->fw_ops->activate_result(nvdimm);
|
||||
nvdimm_bus_unlock(dev);
|
||||
|
||||
switch (result) {
|
||||
case NVDIMM_FWA_RESULT_NONE:
|
||||
return sprintf(buf, "none\n");
|
||||
case NVDIMM_FWA_RESULT_SUCCESS:
|
||||
return sprintf(buf, "success\n");
|
||||
case NVDIMM_FWA_RESULT_FAIL:
|
||||
return sprintf(buf, "fail\n");
|
||||
case NVDIMM_FWA_RESULT_NOTSTAGED:
|
||||
return sprintf(buf, "not_staged\n");
|
||||
case NVDIMM_FWA_RESULT_NEEDRESET:
|
||||
return sprintf(buf, "need_reset\n");
|
||||
default:
|
||||
return -ENXIO;
|
||||
}
|
||||
}
|
||||
static DEVICE_ATTR_ADMIN_RO(result);
|
||||
|
||||
static ssize_t activate_show(struct device *dev, struct device_attribute *attr, char *buf)
|
||||
{
|
||||
struct nvdimm *nvdimm = to_nvdimm(dev);
|
||||
enum nvdimm_fwa_state state;
|
||||
|
||||
if (!nvdimm->fw_ops)
|
||||
return -EOPNOTSUPP;
|
||||
|
||||
nvdimm_bus_lock(dev);
|
||||
state = nvdimm->fw_ops->activate_state(nvdimm);
|
||||
nvdimm_bus_unlock(dev);
|
||||
|
||||
switch (state) {
|
||||
case NVDIMM_FWA_IDLE:
|
||||
return sprintf(buf, "idle\n");
|
||||
case NVDIMM_FWA_BUSY:
|
||||
return sprintf(buf, "busy\n");
|
||||
case NVDIMM_FWA_ARMED:
|
||||
return sprintf(buf, "armed\n");
|
||||
default:
|
||||
return -ENXIO;
|
||||
}
|
||||
}
|
||||
|
||||
static ssize_t activate_store(struct device *dev, struct device_attribute *attr,
|
||||
const char *buf, size_t len)
|
||||
{
|
||||
struct nvdimm *nvdimm = to_nvdimm(dev);
|
||||
enum nvdimm_fwa_trigger arg;
|
||||
int rc;
|
||||
|
||||
if (!nvdimm->fw_ops)
|
||||
return -EOPNOTSUPP;
|
||||
|
||||
if (sysfs_streq(buf, "arm"))
|
||||
arg = NVDIMM_FWA_ARM;
|
||||
else if (sysfs_streq(buf, "disarm"))
|
||||
arg = NVDIMM_FWA_DISARM;
|
||||
else
|
||||
return -EINVAL;
|
||||
|
||||
nvdimm_bus_lock(dev);
|
||||
rc = nvdimm->fw_ops->arm(nvdimm, arg);
|
||||
nvdimm_bus_unlock(dev);
|
||||
|
||||
if (rc < 0)
|
||||
return rc;
|
||||
return len;
|
||||
}
|
||||
static DEVICE_ATTR_ADMIN_RW(activate);
|
||||
|
||||
static struct attribute *nvdimm_firmware_attributes[] = {
|
||||
&dev_attr_activate.attr,
|
||||
&dev_attr_result.attr,
NULL,
};
|
||||
|
||||
static umode_t nvdimm_firmware_visible(struct kobject *kobj, struct attribute *a, int n)
|
||||
{
|
||||
struct device *dev = container_of(kobj, typeof(*dev), kobj);
|
||||
struct nvdimm_bus *nvdimm_bus = walk_to_nvdimm_bus(dev);
|
||||
struct nvdimm_bus_descriptor *nd_desc = nvdimm_bus->nd_desc;
|
||||
struct nvdimm *nvdimm = to_nvdimm(dev);
|
||||
enum nvdimm_fwa_capability cap;
|
||||
|
||||
if (!nd_desc->fw_ops)
|
||||
return 0;
|
||||
if (!nvdimm->fw_ops)
|
||||
return 0;
|
||||
|
||||
nvdimm_bus_lock(dev);
|
||||
cap = nd_desc->fw_ops->capability(nd_desc);
|
||||
nvdimm_bus_unlock(dev);
|
||||
|
||||
if (cap < NVDIMM_FWA_CAP_QUIESCE)
|
||||
return 0;
|
||||
|
||||
return a->mode;
|
||||
}
|
||||
|
||||
static const struct attribute_group nvdimm_firmware_attribute_group = {
|
||||
.name = "firmware",
|
||||
.attrs = nvdimm_firmware_attributes,
|
||||
.is_visible = nvdimm_firmware_visible,
|
||||
};
|
||||
|
||||
static const struct attribute_group *nvdimm_attribute_groups[] = {
|
||||
&nd_device_attribute_group,
|
||||
&nvdimm_attribute_group,
|
||||
&nvdimm_firmware_attribute_group,
|
||||
NULL,
|
||||
};
|
||||
|
||||
|
@ -467,7 +582,8 @@ struct nvdimm *__nvdimm_create(struct nvdimm_bus *nvdimm_bus,
|
|||
void *provider_data, const struct attribute_group **groups,
|
||||
unsigned long flags, unsigned long cmd_mask, int num_flush,
|
||||
struct resource *flush_wpq, const char *dimm_id,
|
||||
const struct nvdimm_security_ops *sec_ops)
|
||||
const struct nvdimm_security_ops *sec_ops,
|
||||
const struct nvdimm_fw_ops *fw_ops)
|
||||
{
|
||||
struct nvdimm *nvdimm = kzalloc(sizeof(*nvdimm), GFP_KERNEL);
|
||||
struct device *dev;
|
||||
|
@ -497,6 +613,7 @@ struct nvdimm *__nvdimm_create(struct nvdimm_bus *nvdimm_bus,
|
|||
dev->devt = MKDEV(nvdimm_major, nvdimm->id);
|
||||
dev->groups = groups;
|
||||
nvdimm->sec.ops = sec_ops;
|
||||
nvdimm->fw_ops = fw_ops;
|
||||
nvdimm->sec.overwrite_tmo = 0;
|
||||
INIT_DELAYED_WORK(&nvdimm->dwork, nvdimm_security_overwrite_query);
|
||||
/*
|
||||
|
|
|
@ -1309,7 +1309,7 @@ static ssize_t resource_show(struct device *dev,
|
|||
return -ENXIO;
|
||||
return sprintf(buf, "%#llx\n", (unsigned long long) res->start);
|
||||
}
|
||||
static DEVICE_ATTR(resource, 0400, resource_show, NULL);
|
||||
static DEVICE_ATTR_ADMIN_RO(resource);
|
||||
|
||||
static const unsigned long blk_lbasize_supported[] = { 512, 520, 528,
|
||||
4096, 4104, 4160, 4224, 0 };
|
||||
|
|
|
@ -45,6 +45,7 @@ struct nvdimm {
|
|||
struct kernfs_node *overwrite_state;
|
||||
} sec;
|
||||
struct delayed_work dwork;
|
||||
const struct nvdimm_fw_ops *fw_ops;
|
||||
};
|
||||
|
||||
static inline unsigned long nvdimm_security_flags(
|
||||
|
|
|
@ -218,7 +218,7 @@ static ssize_t resource_show(struct device *dev,
|
|||
|
||||
return rc;
|
||||
}
|
||||
static DEVICE_ATTR(resource, 0400, resource_show, NULL);
|
||||
static DEVICE_ATTR_ADMIN_RO(resource);
|
||||
|
||||
static ssize_t size_show(struct device *dev,
|
||||
struct device_attribute *attr, char *buf)
|
||||
|
|
|
@ -605,7 +605,7 @@ static ssize_t resource_show(struct device *dev,
|
|||
|
||||
return sprintf(buf, "%#llx\n", nd_region->ndr_start);
|
||||
}
|
||||
static DEVICE_ATTR(resource, 0400, resource_show, NULL);
|
||||
static DEVICE_ATTR_ADMIN_RO(resource);
|
||||
|
||||
static ssize_t persistence_domain_show(struct device *dev,
|
||||
struct device_attribute *attr, char *buf)
|
||||
|
|
|
@ -450,14 +450,19 @@ void __nvdimm_security_overwrite_query(struct nvdimm *nvdimm)
|
|||
else
|
||||
dev_dbg(&nvdimm->dev, "overwrite completed\n");
|
||||
|
||||
if (nvdimm->sec.overwrite_state)
|
||||
sysfs_notify_dirent(nvdimm->sec.overwrite_state);
|
||||
/*
|
||||
* Mark the overwrite work done and update dimm security flags,
|
||||
* then send a sysfs event notification to wake up userspace
|
||||
* poll threads to pick up the changed state.
|
||||
*/
|
||||
nvdimm->sec.overwrite_tmo = 0;
|
||||
clear_bit(NDD_SECURITY_OVERWRITE, &nvdimm->flags);
|
||||
clear_bit(NDD_WORK_PENDING, &nvdimm->flags);
|
||||
put_device(&nvdimm->dev);
|
||||
nvdimm->sec.flags = nvdimm_security_flags(nvdimm, NVDIMM_USER);
|
||||
nvdimm->sec.flags = nvdimm_security_flags(nvdimm, NVDIMM_MASTER);
|
||||
nvdimm->sec.ext_flags = nvdimm_security_flags(nvdimm, NVDIMM_MASTER);
|
||||
if (nvdimm->sec.overwrite_state)
|
||||
sysfs_notify_dirent(nvdimm->sec.overwrite_state);
|
||||
put_device(&nvdimm->dev);
|
||||
}
|
||||
|
||||
void nvdimm_security_overwrite_query(struct work_struct *work)
|
||||
|
|
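The reordering above refreshes the cached security flags before userspace is notified, so a poll()-based watcher re-reads an up-to-date value. A rough userspace sketch of that pattern, assuming a DIMM named nmem0 (path and attribute name are illustrative of the nvdimm 'security' attribute):

/* Sketch: wait for the 'security' attribute to change (e.g. when an
 * overwrite completes).  Sysfs notifications arrive as POLLPRI|POLLERR;
 * the file is read once to arm the notification and re-read afterwards. */
#include <fcntl.h>
#include <poll.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	char buf[64];
	int fd = open("/sys/bus/nd/devices/nmem0/security", O_RDONLY);

	if (fd < 0)
		return 1;
	read(fd, buf, sizeof(buf));		/* arm the notification */

	struct pollfd pfd = { .fd = fd, .events = POLLPRI | POLLERR };
	poll(&pfd, 1, -1);			/* woken by sysfs_notify_dirent() */

	lseek(fd, 0, SEEK_SET);			/* fetch the updated state */
	ssize_t n = read(fd, buf, sizeof(buf) - 1);
	if (n > 0) {
		buf[n] = '\0';
		printf("security: %s", buf);
	}
	close(fd);
	return 0;
}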
fs/dax.c
@@ -488,7 +488,7 @@ static void *grab_mapping_entry(struct xa_state *xas,
 		if (dax_is_conflict(entry))
 			goto fallback;
 		if (!xa_is_value(entry)) {
-			xas_set_err(xas, EIO);
+			xas_set_err(xas, -EIO);
 			goto out_unlock;
 		}
@@ -680,21 +680,20 @@ int dax_invalidate_mapping_entry_sync(struct address_space *mapping,
 	return __dax_invalidate_entry(mapping, index, false);
 }

-static int copy_user_dax(struct block_device *bdev, struct dax_device *dax_dev,
-		sector_t sector, size_t size, struct page *to,
-		unsigned long vaddr)
+static int copy_cow_page_dax(struct block_device *bdev, struct dax_device *dax_dev,
+		sector_t sector, struct page *to, unsigned long vaddr)
 {
 	void *vto, *kaddr;
 	pgoff_t pgoff;
 	long rc;
 	int id;

-	rc = bdev_dax_pgoff(bdev, sector, size, &pgoff);
+	rc = bdev_dax_pgoff(bdev, sector, PAGE_SIZE, &pgoff);
 	if (rc)
 		return rc;

 	id = dax_read_lock();
-	rc = dax_direct_access(dax_dev, pgoff, PHYS_PFN(size), &kaddr, NULL);
+	rc = dax_direct_access(dax_dev, pgoff, PHYS_PFN(PAGE_SIZE), &kaddr, NULL);
 	if (rc < 0) {
 		dax_read_unlock(id);
 		return rc;
@@ -1305,8 +1304,8 @@ static vm_fault_t dax_iomap_pte_fault(struct vm_fault *vmf, pfn_t *pfnp,
 			clear_user_highpage(vmf->cow_page, vaddr);
 			break;
 		case IOMAP_MAPPED:
-			error = copy_user_dax(iomap.bdev, iomap.dax_dev,
-					sector, PAGE_SIZE, vmf->cow_page, vaddr);
+			error = copy_cow_page_dax(iomap.bdev, iomap.dax_dev,
+					sector, vmf->cow_page, vaddr);
 			break;
 		default:
 			WARN_ON_ONCE(1);
@@ -76,8 +76,9 @@ typedef int (*ndctl_fn)(struct nvdimm_bus_descriptor *nd_desc,
 struct device_node;
 struct nvdimm_bus_descriptor {
 	const struct attribute_group **attr_groups;
-	unsigned long bus_dsm_mask;
 	unsigned long cmd_mask;
+	unsigned long dimm_family_mask;
+	unsigned long bus_family_mask;
 	struct module *module;
 	char *provider_name;
 	struct device_node *of_node;
@@ -85,6 +86,7 @@ struct nvdimm_bus_descriptor {
 	int (*flush_probe)(struct nvdimm_bus_descriptor *nd_desc);
 	int (*clear_to_send)(struct nvdimm_bus_descriptor *nd_desc,
 			struct nvdimm *nvdimm, unsigned int cmd, void *data);
+	const struct nvdimm_bus_fw_ops *fw_ops;
 };

 struct nd_cmd_desc {
@@ -199,6 +201,49 @@ struct nvdimm_security_ops {
 	int (*query_overwrite)(struct nvdimm *nvdimm);
 };

+enum nvdimm_fwa_state {
+	NVDIMM_FWA_INVALID,
+	NVDIMM_FWA_IDLE,
+	NVDIMM_FWA_ARMED,
+	NVDIMM_FWA_BUSY,
+	NVDIMM_FWA_ARM_OVERFLOW,
+};
+
+enum nvdimm_fwa_trigger {
+	NVDIMM_FWA_ARM,
+	NVDIMM_FWA_DISARM,
+};
+
+enum nvdimm_fwa_capability {
+	NVDIMM_FWA_CAP_INVALID,
+	NVDIMM_FWA_CAP_NONE,
+	NVDIMM_FWA_CAP_QUIESCE,
+	NVDIMM_FWA_CAP_LIVE,
+};
+
+enum nvdimm_fwa_result {
+	NVDIMM_FWA_RESULT_INVALID,
+	NVDIMM_FWA_RESULT_NONE,
+	NVDIMM_FWA_RESULT_SUCCESS,
+	NVDIMM_FWA_RESULT_NOTSTAGED,
+	NVDIMM_FWA_RESULT_NEEDRESET,
+	NVDIMM_FWA_RESULT_FAIL,
+};
+
+struct nvdimm_bus_fw_ops {
+	enum nvdimm_fwa_state (*activate_state)
+		(struct nvdimm_bus_descriptor *nd_desc);
+	enum nvdimm_fwa_capability (*capability)
+		(struct nvdimm_bus_descriptor *nd_desc);
+	int (*activate)(struct nvdimm_bus_descriptor *nd_desc);
+};
+
+struct nvdimm_fw_ops {
+	enum nvdimm_fwa_state (*activate_state)(struct nvdimm *nvdimm);
+	enum nvdimm_fwa_result (*activate_result)(struct nvdimm *nvdimm);
+	int (*arm)(struct nvdimm *nvdimm, enum nvdimm_fwa_trigger arg);
+};
+
 void badrange_init(struct badrange *badrange);
 int badrange_add(struct badrange *badrange, u64 addr, u64 length);
 void badrange_forget(struct badrange *badrange, phys_addr_t start,
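These ops tables are the hooks a provider fills in to expose firmware activation state to libnvdimm. A minimal provider-side sketch, assuming a hypothetical driver that tracks its own per-DIMM state (struct my_dimm and the my_* names are illustrative, not part of the patch; the ACPI NFIT driver derives this information from its platform DSMs instead):

/* Sketch: a hypothetical NVDIMM provider wiring up per-DIMM firmware
 * activation hooks and handing them to libnvdimm at create time. */
static enum nvdimm_fwa_state my_dimm_fwa_state(struct nvdimm *nvdimm)
{
	struct my_dimm *dimm = nvdimm_provider_data(nvdimm);

	return dimm->staged_firmware ? NVDIMM_FWA_ARMED : NVDIMM_FWA_IDLE;
}

static enum nvdimm_fwa_result my_dimm_fwa_result(struct nvdimm *nvdimm)
{
	struct my_dimm *dimm = nvdimm_provider_data(nvdimm);

	return dimm->last_activate_ok ? NVDIMM_FWA_RESULT_SUCCESS
				      : NVDIMM_FWA_RESULT_NONE;
}

static int my_dimm_fwa_arm(struct nvdimm *nvdimm, enum nvdimm_fwa_trigger arg)
{
	struct my_dimm *dimm = nvdimm_provider_data(nvdimm);

	dimm->staged_firmware = (arg == NVDIMM_FWA_ARM);
	return 0;
}

static const struct nvdimm_fw_ops my_dimm_fw_ops = {
	.activate_state = my_dimm_fwa_state,
	.activate_result = my_dimm_fwa_result,
	.arm = my_dimm_fwa_arm,
};

The table is then passed as the new fw_ops argument to __nvdimm_create(), whose prototype change appears in the next hunk.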
@@ -224,14 +269,15 @@ struct nvdimm *__nvdimm_create(struct nvdimm_bus *nvdimm_bus,
 		void *provider_data, const struct attribute_group **groups,
 		unsigned long flags, unsigned long cmd_mask, int num_flush,
 		struct resource *flush_wpq, const char *dimm_id,
-		const struct nvdimm_security_ops *sec_ops);
+		const struct nvdimm_security_ops *sec_ops,
+		const struct nvdimm_fw_ops *fw_ops);
 static inline struct nvdimm *nvdimm_create(struct nvdimm_bus *nvdimm_bus,
 		void *provider_data, const struct attribute_group **groups,
 		unsigned long flags, unsigned long cmd_mask, int num_flush,
 		struct resource *flush_wpq)
 {
 	return __nvdimm_create(nvdimm_bus, provider_data, groups, flags,
-			cmd_mask, num_flush, flush_wpq, NULL, NULL);
+			cmd_mask, num_flush, flush_wpq, NULL, NULL, NULL);
 }

 const struct nd_cmd_desc *nd_cmd_dimm_desc(int cmd);
@@ -453,6 +453,8 @@ extern bool hibernation_available(void);
 asmlinkage int swsusp_save(void);
 extern struct pbe *restore_pblist;
 int pfn_is_nosave(unsigned long pfn);
+
+int hibernate_quiet_exec(int (*func)(void *data), void *data);
 #else /* CONFIG_HIBERNATION */
 static inline void register_nosave_region(unsigned long b, unsigned long e) {}
 static inline void register_nosave_region_late(unsigned long b, unsigned long e) {}
@@ -464,6 +466,10 @@ static inline void hibernation_set_ops(const struct platform_hibernation_ops *op
 static inline int hibernate(void) { return -ENOSYS; }
 static inline bool system_entering_hibernation(void) { return false; }
 static inline bool hibernation_available(void) { return false; }
+
+static inline int hibernate_quiet_exec(int (*func)(void *data), void *data) {
+	return -ENOTSUPP;
+}
 #endif /* CONFIG_HIBERNATION */

 #ifdef CONFIG_HIBERNATION_SNAPSHOT_DEV
@@ -245,6 +245,11 @@ struct nd_cmd_pkg {
 #define NVDIMM_FAMILY_MSFT 3
 #define NVDIMM_FAMILY_HYPERV 4
 #define NVDIMM_FAMILY_PAPR 5
+#define NVDIMM_FAMILY_MAX NVDIMM_FAMILY_PAPR
+
+#define NVDIMM_BUS_FAMILY_NFIT 0
+#define NVDIMM_BUS_FAMILY_INTEL 1
+#define NVDIMM_BUS_FAMILY_MAX NVDIMM_BUS_FAMILY_INTEL

 #define ND_IOCTL_CALL _IOWR(ND_IOCTL, ND_CMD_CALL,\
 		struct nd_cmd_pkg)
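With bus family indices in the uapi header, a passthrough caller selects the family in the nd_cmd_pkg envelope it sends through ND_IOCTL_CALL on the bus character device. A hedged userspace sketch, assuming updated uapi headers; the command number and payload size are placeholders, since the per-family function numbers live in the provider's own definitions rather than the uapi header:

/* Sketch: issue a bus-family passthrough call against /dev/ndctl0. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/ndctl.h>

int main(void)
{
	size_t out_size = 64;			/* placeholder payload size */
	struct nd_cmd_pkg *pkg = calloc(1, sizeof(*pkg) + out_size);
	int fd, rc;

	if (!pkg)
		return 1;
	pkg->nd_family = NVDIMM_BUS_FAMILY_INTEL;	/* new in this series */
	pkg->nd_command = 1;			/* illustrative function number */
	pkg->nd_size_out = out_size;

	fd = open("/dev/ndctl0", O_RDWR);
	if (fd < 0)
		return 1;
	rc = ioctl(fd, ND_IOCTL_CALL, pkg);
	printf("rc: %d firmware-reported output size: %u\n", rc, pkg->nd_fw_size);
	close(fd);
	free(pkg);
	return 0;
}

A family index outside the advertised bus_family_mask is now rejected by the "libnvdimm: Validate command family indices" change rather than being passed through.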
@@ -795,6 +795,103 @@ int hibernate(void)
 	return error;
 }

+/**
+ * hibernate_quiet_exec - Execute a function with all devices frozen.
+ * @func: Function to execute.
+ * @data: Data pointer to pass to @func.
+ *
+ * Return the @func return value or an error code if it cannot be executed.
+ */
+int hibernate_quiet_exec(int (*func)(void *data), void *data)
+{
+	int error, nr_calls = 0;
+
+	lock_system_sleep();
+
+	if (!hibernate_acquire()) {
+		error = -EBUSY;
+		goto unlock;
+	}
+
+	pm_prepare_console();
+
+	error = __pm_notifier_call_chain(PM_HIBERNATION_PREPARE, -1, &nr_calls);
+	if (error) {
+		nr_calls--;
+		goto exit;
+	}
+
+	error = freeze_processes();
+	if (error)
+		goto exit;
+
+	lock_device_hotplug();
+
+	pm_suspend_clear_flags();
+
+	error = platform_begin(true);
+	if (error)
+		goto thaw;
+
+	error = freeze_kernel_threads();
+	if (error)
+		goto thaw;
+
+	error = dpm_prepare(PMSG_FREEZE);
+	if (error)
+		goto dpm_complete;
+
+	suspend_console();
+
+	error = dpm_suspend(PMSG_FREEZE);
+	if (error)
+		goto dpm_resume;
+
+	error = dpm_suspend_end(PMSG_FREEZE);
+	if (error)
+		goto dpm_resume;
+
+	error = platform_pre_snapshot(true);
+	if (error)
+		goto skip;
+
+	error = func(data);
+
+skip:
+	platform_finish(true);
+
+	dpm_resume_start(PMSG_THAW);
+
+dpm_resume:
+	dpm_resume(PMSG_THAW);
+
+	resume_console();
+
+dpm_complete:
+	dpm_complete(PMSG_THAW);
+
+	thaw_kernel_threads();
+
+thaw:
+	platform_end(true);
+
+	unlock_device_hotplug();
+
+	thaw_processes();
+
+exit:
+	__pm_notifier_call_chain(PM_POST_HIBERNATION, nr_calls, NULL);
+
+	pm_restore_console();
+
+	hibernate_release();
+
+unlock:
+	unlock_system_sleep();
+
+	return error;
+}
+EXPORT_SYMBOL_GPL(hibernate_quiet_exec);
+
 /**
  * software_resume - Resume from a saved hibernation image.
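hibernate_quiet_exec() walks the hibernation freeze sequence down to the point where devices and CPUs are quiesced, runs the callback, and then unwinds without writing an image; that is the state the firmware activation path wants for quiescing DIMM traffic. A hedged sketch of a caller, where my_activate_cb/my_trigger_activate are illustrative names rather than the functions this series adds:

/* Sketch: trigger bus firmware activation with every device frozen.
 * hibernate_quiet_exec() returns the callback's result, or an error
 * such as -EBUSY if the freeze could not be entered. */
#include <linux/libnvdimm.h>
#include <linux/suspend.h>

static int my_activate_cb(void *data)
{
	struct nvdimm_bus_descriptor *nd_desc = data;

	return nd_desc->fw_ops->activate(nd_desc);
}

static int my_trigger_activate(struct nvdimm_bus_descriptor *nd_desc)
{
	return hibernate_quiet_exec(my_activate_cb, nd_desc);
}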
@@ -173,6 +173,9 @@ struct nfit_test_fw {
 	u64 version;
 	u32 size_received;
 	u64 end_time;
+	bool armed;
+	bool missed_activate;
+	unsigned long last_activate;
 };

 struct nfit_test {
@@ -345,7 +348,7 @@ static int nd_intel_test_finish_fw(struct nfit_test *t,
 		__func__, t, nd_cmd, buf_len, idx);

 	if (fw->state == FW_STATE_UPDATED) {
-		/* update already done, need cold boot */
+		/* update already done, need activation */
 		nd_cmd->status = 0x20007;
 		return 0;
 	}
@@ -430,6 +433,7 @@ static int nd_intel_test_finish_query(struct nfit_test *t,
 		}
 		dev_dbg(dev, "%s: transition out verify\n", __func__);
 		fw->state = FW_STATE_UPDATED;
+		fw->missed_activate = false;
 		/* fall through */
 	case FW_STATE_UPDATED:
 		nd_cmd->status = 0;
@@ -1178,6 +1182,134 @@ static int nd_intel_test_cmd_master_secure_erase(struct nfit_test *t,
 	return 0;
 }

+static unsigned long last_activate;
+
+static int nvdimm_bus_intel_fw_activate_businfo(struct nfit_test *t,
+		struct nd_intel_bus_fw_activate_businfo *nd_cmd,
+		unsigned int buf_len)
+{
+	int i, armed = 0;
+	int state;
+	u64 tmo;
+
+	for (i = 0; i < NUM_DCR; i++) {
+		struct nfit_test_fw *fw = &t->fw[i];
+
+		if (fw->armed)
+			armed++;
+	}
+
+	/*
+	 * Emulate 3 second activation max, and 1 second incremental
+	 * quiesce time per dimm requiring multiple activates to get all
+	 * DIMMs updated.
+	 */
+	if (armed)
+		state = ND_INTEL_FWA_ARMED;
+	else if (!last_activate || time_after(jiffies, last_activate + 3 * HZ))
+		state = ND_INTEL_FWA_IDLE;
+	else
+		state = ND_INTEL_FWA_BUSY;
+
+	tmo = armed * USEC_PER_SEC;
+	*nd_cmd = (struct nd_intel_bus_fw_activate_businfo) {
+		.capability = ND_INTEL_BUS_FWA_CAP_FWQUIESCE
+			| ND_INTEL_BUS_FWA_CAP_OSQUIESCE
+			| ND_INTEL_BUS_FWA_CAP_RESET,
+		.state = state,
+		.activate_tmo = tmo,
+		.cpu_quiesce_tmo = tmo,
+		.io_quiesce_tmo = tmo,
+		.max_quiesce_tmo = 3 * USEC_PER_SEC,
+	};
+
+	return 0;
+}
+
+static int nvdimm_bus_intel_fw_activate(struct nfit_test *t,
+		struct nd_intel_bus_fw_activate *nd_cmd,
+		unsigned int buf_len)
+{
+	struct nd_intel_bus_fw_activate_businfo info;
+	u32 status = 0;
+	int i;
+
+	nvdimm_bus_intel_fw_activate_businfo(t, &info, sizeof(info));
+	if (info.state == ND_INTEL_FWA_BUSY)
+		status = ND_INTEL_BUS_FWA_STATUS_BUSY;
+	else if (info.activate_tmo > info.max_quiesce_tmo)
+		status = ND_INTEL_BUS_FWA_STATUS_TMO;
+	else if (info.state == ND_INTEL_FWA_IDLE)
+		status = ND_INTEL_BUS_FWA_STATUS_NOARM;
+
+	dev_dbg(&t->pdev.dev, "status: %d\n", status);
+	nd_cmd->status = status;
+	if (status && status != ND_INTEL_BUS_FWA_STATUS_TMO)
+		return 0;
+
+	last_activate = jiffies;
+	for (i = 0; i < NUM_DCR; i++) {
+		struct nfit_test_fw *fw = &t->fw[i];
+
+		if (!fw->armed)
+			continue;
+		if (fw->state != FW_STATE_UPDATED)
+			fw->missed_activate = true;
+		else
+			fw->state = FW_STATE_NEW;
+		fw->armed = false;
+		fw->last_activate = last_activate;
+	}
+
+	return 0;
+}
+
+static int nd_intel_test_cmd_fw_activate_dimminfo(struct nfit_test *t,
+		struct nd_intel_fw_activate_dimminfo *nd_cmd,
+		unsigned int buf_len, int dimm)
+{
+	struct nd_intel_bus_fw_activate_businfo info;
+	struct nfit_test_fw *fw = &t->fw[dimm];
+	u32 result, state;
+
+	nvdimm_bus_intel_fw_activate_businfo(t, &info, sizeof(info));
+
+	if (info.state == ND_INTEL_FWA_BUSY)
+		state = ND_INTEL_FWA_BUSY;
+	else if (info.state == ND_INTEL_FWA_IDLE)
+		state = ND_INTEL_FWA_IDLE;
+	else if (fw->armed)
+		state = ND_INTEL_FWA_ARMED;
+	else
+		state = ND_INTEL_FWA_IDLE;
+
+	result = ND_INTEL_DIMM_FWA_NONE;
+	if (last_activate && fw->last_activate == last_activate &&
+			state == ND_INTEL_FWA_IDLE) {
+		if (fw->missed_activate)
+			result = ND_INTEL_DIMM_FWA_NOTSTAGED;
+		else
+			result = ND_INTEL_DIMM_FWA_SUCCESS;
+	}
+
+	*nd_cmd = (struct nd_intel_fw_activate_dimminfo) {
+		.result = result,
+		.state = state,
+	};
+
+	return 0;
+}
+
+static int nd_intel_test_cmd_fw_activate_arm(struct nfit_test *t,
+		struct nd_intel_fw_activate_arm *nd_cmd,
+		unsigned int buf_len, int dimm)
+{
+	struct nfit_test_fw *fw = &t->fw[dimm];
+
+	fw->armed = nd_cmd->activate_arm == ND_INTEL_DIMM_FWA_ARM;
+	nd_cmd->status = 0;
+	return 0;
+}
+
 static int get_dimm(struct nfit_mem *nfit_mem, unsigned int func)
 {
@@ -1192,6 +1324,29 @@ static int get_dimm(struct nfit_mem *nfit_mem, unsigned int func)
 	return i;
 }

+static void nfit_ctl_dbg(struct acpi_nfit_desc *acpi_desc,
+		struct nvdimm *nvdimm, unsigned int cmd, void *buf,
+		unsigned int len)
+{
+	struct nfit_test *t = container_of(acpi_desc, typeof(*t), acpi_desc);
+	unsigned int func = cmd;
+	unsigned int family = 0;
+
+	if (cmd == ND_CMD_CALL) {
+		struct nd_cmd_pkg *pkg = buf;
+
+		len = pkg->nd_size_in;
+		family = pkg->nd_family;
+		buf = pkg->nd_payload;
+		func = pkg->nd_command;
+	}
+	dev_dbg(&t->pdev.dev, "%s family: %d cmd: %d: func: %d input length: %d\n",
+			nvdimm ? nvdimm_name(nvdimm) : "bus", family, cmd, func,
+			len);
+	print_hex_dump_debug("nvdimm in ", DUMP_PREFIX_OFFSET, 16, 4,
+			buf, min(len, 256u), true);
+}
+
 static int nfit_test_ctl(struct nvdimm_bus_descriptor *nd_desc,
 		struct nvdimm *nvdimm, unsigned int cmd, void *buf,
 		unsigned int buf_len, int *cmd_rc)
@@ -1205,6 +1360,8 @@ static int nfit_test_ctl(struct nvdimm_bus_descriptor *nd_desc,
 		cmd_rc = &__cmd_rc;
 	*cmd_rc = 0;

+	nfit_ctl_dbg(acpi_desc, nvdimm, cmd, buf, buf_len);
+
 	if (nvdimm) {
 		struct nfit_mem *nfit_mem = nvdimm_provider_data(nvdimm);
 		unsigned long cmd_mask = nvdimm_cmd_mask(nvdimm);
@@ -1224,6 +1381,11 @@ static int nfit_test_ctl(struct nvdimm_bus_descriptor *nd_desc,
 			i = get_dimm(nfit_mem, func);
 			if (i < 0)
 				return i;
+			if (i >= NUM_DCR) {
+				dev_WARN_ONCE(&t->pdev.dev, 1,
+					"ND_CMD_CALL only valid for nfit_test0\n");
+				return -EINVAL;
+			}

 			switch (func) {
 			case NVDIMM_INTEL_GET_SECURITY_STATE:
@@ -1252,11 +1414,11 @@ static int nfit_test_ctl(struct nvdimm_bus_descriptor *nd_desc,
 				break;
 			case NVDIMM_INTEL_OVERWRITE:
 				rc = nd_intel_test_cmd_overwrite(t,
-						buf, buf_len, i - t->dcr_idx);
+						buf, buf_len, i);
 				break;
 			case NVDIMM_INTEL_QUERY_OVERWRITE:
 				rc = nd_intel_test_cmd_query_overwrite(t,
-						buf, buf_len, i - t->dcr_idx);
+						buf, buf_len, i);
 				break;
 			case NVDIMM_INTEL_SET_MASTER_PASSPHRASE:
 				rc = nd_intel_test_cmd_master_set_pass(t,
@@ -1266,54 +1428,59 @@ static int nfit_test_ctl(struct nvdimm_bus_descriptor *nd_desc,
 				rc = nd_intel_test_cmd_master_secure_erase(t,
 						buf, buf_len, i);
 				break;
+			case NVDIMM_INTEL_FW_ACTIVATE_DIMMINFO:
+				rc = nd_intel_test_cmd_fw_activate_dimminfo(
+					t, buf, buf_len, i);
+				break;
+			case NVDIMM_INTEL_FW_ACTIVATE_ARM:
+				rc = nd_intel_test_cmd_fw_activate_arm(
+					t, buf, buf_len, i);
+				break;
 			case ND_INTEL_ENABLE_LSS_STATUS:
 				rc = nd_intel_test_cmd_set_lss_status(t,
 						buf, buf_len);
 				break;
 			case ND_INTEL_FW_GET_INFO:
 				rc = nd_intel_test_get_fw_info(t, buf,
-						buf_len, i - t->dcr_idx);
+						buf_len, i);
 				break;
 			case ND_INTEL_FW_START_UPDATE:
 				rc = nd_intel_test_start_update(t, buf,
-						buf_len, i - t->dcr_idx);
+						buf_len, i);
 				break;
 			case ND_INTEL_FW_SEND_DATA:
 				rc = nd_intel_test_send_data(t, buf,
-						buf_len, i - t->dcr_idx);
+						buf_len, i);
 				break;
 			case ND_INTEL_FW_FINISH_UPDATE:
 				rc = nd_intel_test_finish_fw(t, buf,
-						buf_len, i - t->dcr_idx);
+						buf_len, i);
 				break;
 			case ND_INTEL_FW_FINISH_QUERY:
 				rc = nd_intel_test_finish_query(t, buf,
-						buf_len, i - t->dcr_idx);
+						buf_len, i);
 				break;
 			case ND_INTEL_SMART:
 				rc = nfit_test_cmd_smart(buf, buf_len,
-						&t->smart[i - t->dcr_idx]);
+						&t->smart[i]);
 				break;
 			case ND_INTEL_SMART_THRESHOLD:
 				rc = nfit_test_cmd_smart_threshold(buf,
 						buf_len,
-						&t->smart_threshold[i -
-							t->dcr_idx]);
+						&t->smart_threshold[i]);
 				break;
 			case ND_INTEL_SMART_SET_THRESHOLD:
 				rc = nfit_test_cmd_smart_set_threshold(buf,
 						buf_len,
-						&t->smart_threshold[i -
-							t->dcr_idx],
-						&t->smart[i - t->dcr_idx],
+						&t->smart_threshold[i],
+						&t->smart[i],
 						&t->pdev.dev, t->dimm_dev[i]);
 				break;
 			case ND_INTEL_SMART_INJECT:
 				rc = nfit_test_cmd_smart_inject(buf,
 						buf_len,
-						&t->smart_threshold[i -
-							t->dcr_idx],
-						&t->smart[i - t->dcr_idx],
+						&t->smart_threshold[i],
+						&t->smart[i],
 						&t->pdev.dev, t->dimm_dev[i]);
 				break;
 			default:
@@ -1353,9 +1520,9 @@ static int nfit_test_ctl(struct nvdimm_bus_descriptor *nd_desc,
 		if (!nd_desc)
 			return -ENOTTY;

-		if (cmd == ND_CMD_CALL) {
+		if (cmd == ND_CMD_CALL && call_pkg->nd_family
+				== NVDIMM_BUS_FAMILY_NFIT) {
 			func = call_pkg->nd_command;
-
 			buf_len = call_pkg->nd_size_in + call_pkg->nd_size_out;
 			buf = (void *) call_pkg->nd_payload;

@@ -1379,7 +1546,26 @@ static int nfit_test_ctl(struct nvdimm_bus_descriptor *nd_desc,
 			default:
 				return -ENOTTY;
 			}
+		} else if (cmd == ND_CMD_CALL && call_pkg->nd_family
+				== NVDIMM_BUS_FAMILY_INTEL) {
+			func = call_pkg->nd_command;
+			buf_len = call_pkg->nd_size_in + call_pkg->nd_size_out;
+			buf = (void *) call_pkg->nd_payload;
+
+			switch (func) {
+			case NVDIMM_BUS_INTEL_FW_ACTIVATE_BUSINFO:
+				rc = nvdimm_bus_intel_fw_activate_businfo(t,
+						buf, buf_len);
+				return rc;
+			case NVDIMM_BUS_INTEL_FW_ACTIVATE:
+				rc = nvdimm_bus_intel_fw_activate(t, buf,
+						buf_len);
+				return rc;
+			default:
+				return -ENOTTY;
+			}
+		} else if (cmd == ND_CMD_CALL)
+			return -ENOTTY;

 		if (!nd_desc || !test_bit(cmd, &nd_desc->cmd_mask))
 			return -ENOTTY;
@@ -1805,6 +1991,7 @@ static void nfit_test0_setup(struct nfit_test *t)
 	struct acpi_nfit_flush_address *flush;
 	struct acpi_nfit_capabilities *pcap;
 	unsigned int offset = 0, i;
+	unsigned long *acpi_mask;

 	/*
 	 * spa0 (interleave first half of dimm0 and dimm1, note storage
@@ -2507,10 +2694,10 @@ static void nfit_test0_setup(struct nfit_test *t)
 	set_bit(ND_CMD_ARS_STATUS, &acpi_desc->bus_cmd_force_en);
 	set_bit(ND_CMD_CLEAR_ERROR, &acpi_desc->bus_cmd_force_en);
 	set_bit(ND_CMD_CALL, &acpi_desc->bus_cmd_force_en);
-	set_bit(NFIT_CMD_TRANSLATE_SPA, &acpi_desc->bus_nfit_cmd_force_en);
-	set_bit(NFIT_CMD_ARS_INJECT_SET, &acpi_desc->bus_nfit_cmd_force_en);
-	set_bit(NFIT_CMD_ARS_INJECT_CLEAR, &acpi_desc->bus_nfit_cmd_force_en);
-	set_bit(NFIT_CMD_ARS_INJECT_GET, &acpi_desc->bus_nfit_cmd_force_en);
+	set_bit(NFIT_CMD_TRANSLATE_SPA, &acpi_desc->bus_dsm_mask);
+	set_bit(NFIT_CMD_ARS_INJECT_SET, &acpi_desc->bus_dsm_mask);
+	set_bit(NFIT_CMD_ARS_INJECT_CLEAR, &acpi_desc->bus_dsm_mask);
+	set_bit(NFIT_CMD_ARS_INJECT_GET, &acpi_desc->bus_dsm_mask);
 	set_bit(ND_INTEL_FW_GET_INFO, &acpi_desc->dimm_cmd_force_en);
 	set_bit(ND_INTEL_FW_START_UPDATE, &acpi_desc->dimm_cmd_force_en);
 	set_bit(ND_INTEL_FW_SEND_DATA, &acpi_desc->dimm_cmd_force_en);
@@ -2531,6 +2718,12 @@ static void nfit_test0_setup(struct nfit_test *t)
 			&acpi_desc->dimm_cmd_force_en);
 	set_bit(NVDIMM_INTEL_MASTER_SECURE_ERASE,
 			&acpi_desc->dimm_cmd_force_en);
+	set_bit(NVDIMM_INTEL_FW_ACTIVATE_DIMMINFO, &acpi_desc->dimm_cmd_force_en);
+	set_bit(NVDIMM_INTEL_FW_ACTIVATE_ARM, &acpi_desc->dimm_cmd_force_en);
+
+	acpi_mask = &acpi_desc->family_dsm_mask[NVDIMM_BUS_FAMILY_INTEL];
+	set_bit(NVDIMM_BUS_INTEL_FW_ACTIVATE_BUSINFO, acpi_mask);
+	set_bit(NVDIMM_BUS_INTEL_FW_ACTIVATE, acpi_mask);
 }

 static void nfit_test1_setup(struct nfit_test *t)
@@ -2699,14 +2892,18 @@ static int nfit_ctl_test(struct device *dev)
 	struct acpi_nfit_desc *acpi_desc;
 	const u64 test_val = 0x0123456789abcdefULL;
 	unsigned long mask, cmd_size, offset;
-	union {
+	struct nfit_ctl_test_cmd {
+		struct nd_cmd_pkg pkg;
+		union {
 		struct nd_cmd_get_config_size cfg_size;
 		struct nd_cmd_clear_error clear_err;
 		struct nd_cmd_ars_status ars_stat;
 		struct nd_cmd_ars_cap ars_cap;
+		struct nd_intel_bus_fw_activate_businfo fwa_info;
 		char buf[sizeof(struct nd_cmd_ars_status)
 			+ sizeof(struct nd_ars_record)];
-	} cmds;
+		};
+	} cmd;

 	adev = devm_kzalloc(dev, sizeof(*adev), GFP_KERNEL);
 	if (!adev)
@@ -2731,11 +2928,15 @@ static int nfit_ctl_test(struct device *dev)
 			.module = THIS_MODULE,
 			.provider_name = "ACPI.NFIT",
 			.ndctl = acpi_nfit_ctl,
+			.bus_family_mask = 1UL << NVDIMM_BUS_FAMILY_NFIT
+				| 1UL << NVDIMM_BUS_FAMILY_INTEL,
+		},
 		.bus_dsm_mask = 1UL << NFIT_CMD_TRANSLATE_SPA
 			| 1UL << NFIT_CMD_ARS_INJECT_SET
 			| 1UL << NFIT_CMD_ARS_INJECT_CLEAR
 			| 1UL << NFIT_CMD_ARS_INJECT_GET,
-		},
+		.family_dsm_mask[NVDIMM_BUS_FAMILY_INTEL] =
+			NVDIMM_BUS_INTEL_FW_ACTIVATE_CMDMASK,
 		.dev = &adev->dev,
 	};
@@ -2766,21 +2967,21 @@ static int nfit_ctl_test(struct device *dev)


 	/* basic checkout of a typical 'get config size' command */
-	cmd_size = sizeof(cmds.cfg_size);
-	cmds.cfg_size = (struct nd_cmd_get_config_size) {
+	cmd_size = sizeof(cmd.cfg_size);
+	cmd.cfg_size = (struct nd_cmd_get_config_size) {
 		.status = 0,
 		.config_size = SZ_128K,
 		.max_xfer = SZ_4K,
 	};
-	rc = setup_result(cmds.buf, cmd_size);
+	rc = setup_result(cmd.buf, cmd_size);
 	if (rc)
 		return rc;
 	rc = acpi_nfit_ctl(&acpi_desc->nd_desc, nvdimm, ND_CMD_GET_CONFIG_SIZE,
-			cmds.buf, cmd_size, &cmd_rc);
+			cmd.buf, cmd_size, &cmd_rc);

-	if (rc < 0 || cmd_rc || cmds.cfg_size.status != 0
-			|| cmds.cfg_size.config_size != SZ_128K
-			|| cmds.cfg_size.max_xfer != SZ_4K) {
+	if (rc < 0 || cmd_rc || cmd.cfg_size.status != 0
+			|| cmd.cfg_size.config_size != SZ_128K
+			|| cmd.cfg_size.max_xfer != SZ_4K) {
 		dev_dbg(dev, "%s: failed at: %d rc: %d cmd_rc: %d\n",
 			__func__, __LINE__, rc, cmd_rc);
 		return -EIO;
@@ -2789,14 +2990,14 @@ static int nfit_ctl_test(struct device *dev)

 	/* test ars_status with zero output */
 	cmd_size = offsetof(struct nd_cmd_ars_status, address);
-	cmds.ars_stat = (struct nd_cmd_ars_status) {
+	cmd.ars_stat = (struct nd_cmd_ars_status) {
 		.out_length = 0,
 	};
-	rc = setup_result(cmds.buf, cmd_size);
+	rc = setup_result(cmd.buf, cmd_size);
 	if (rc)
 		return rc;
 	rc = acpi_nfit_ctl(&acpi_desc->nd_desc, NULL, ND_CMD_ARS_STATUS,
-			cmds.buf, cmd_size, &cmd_rc);
+			cmd.buf, cmd_size, &cmd_rc);

 	if (rc < 0 || cmd_rc) {
 		dev_dbg(dev, "%s: failed at: %d rc: %d cmd_rc: %d\n",
@@ -2806,16 +3007,16 @@ static int nfit_ctl_test(struct device *dev)


 	/* test ars_cap with benign extended status */
-	cmd_size = sizeof(cmds.ars_cap);
-	cmds.ars_cap = (struct nd_cmd_ars_cap) {
+	cmd_size = sizeof(cmd.ars_cap);
+	cmd.ars_cap = (struct nd_cmd_ars_cap) {
 		.status = ND_ARS_PERSISTENT << 16,
 	};
 	offset = offsetof(struct nd_cmd_ars_cap, status);
-	rc = setup_result(cmds.buf + offset, cmd_size - offset);
+	rc = setup_result(cmd.buf + offset, cmd_size - offset);
 	if (rc)
 		return rc;
 	rc = acpi_nfit_ctl(&acpi_desc->nd_desc, NULL, ND_CMD_ARS_CAP,
-			cmds.buf, cmd_size, &cmd_rc);
+			cmd.buf, cmd_size, &cmd_rc);

 	if (rc < 0 || cmd_rc) {
 		dev_dbg(dev, "%s: failed at: %d rc: %d cmd_rc: %d\n",
@@ -2825,19 +3026,19 @@ static int nfit_ctl_test(struct device *dev)


 	/* test ars_status with 'status' trimmed from 'out_length' */
-	cmd_size = sizeof(cmds.ars_stat) + sizeof(struct nd_ars_record);
-	cmds.ars_stat = (struct nd_cmd_ars_status) {
+	cmd_size = sizeof(cmd.ars_stat) + sizeof(struct nd_ars_record);
+	cmd.ars_stat = (struct nd_cmd_ars_status) {
 		.out_length = cmd_size - 4,
 	};
-	record = &cmds.ars_stat.records[0];
+	record = &cmd.ars_stat.records[0];
 	*record = (struct nd_ars_record) {
 		.length = test_val,
 	};
-	rc = setup_result(cmds.buf, cmd_size);
+	rc = setup_result(cmd.buf, cmd_size);
 	if (rc)
 		return rc;
 	rc = acpi_nfit_ctl(&acpi_desc->nd_desc, NULL, ND_CMD_ARS_STATUS,
-			cmds.buf, cmd_size, &cmd_rc);
+			cmd.buf, cmd_size, &cmd_rc);

 	if (rc < 0 || cmd_rc || record->length != test_val) {
 		dev_dbg(dev, "%s: failed at: %d rc: %d cmd_rc: %d\n",
@@ -2847,19 +3048,19 @@ static int nfit_ctl_test(struct device *dev)


 	/* test ars_status with 'Output (Size)' including 'status' */
-	cmd_size = sizeof(cmds.ars_stat) + sizeof(struct nd_ars_record);
-	cmds.ars_stat = (struct nd_cmd_ars_status) {
+	cmd_size = sizeof(cmd.ars_stat) + sizeof(struct nd_ars_record);
+	cmd.ars_stat = (struct nd_cmd_ars_status) {
 		.out_length = cmd_size,
 	};
-	record = &cmds.ars_stat.records[0];
+	record = &cmd.ars_stat.records[0];
 	*record = (struct nd_ars_record) {
 		.length = test_val,
 	};
-	rc = setup_result(cmds.buf, cmd_size);
+	rc = setup_result(cmd.buf, cmd_size);
 	if (rc)
 		return rc;
 	rc = acpi_nfit_ctl(&acpi_desc->nd_desc, NULL, ND_CMD_ARS_STATUS,
-			cmds.buf, cmd_size, &cmd_rc);
+			cmd.buf, cmd_size, &cmd_rc);

 	if (rc < 0 || cmd_rc || record->length != test_val) {
 		dev_dbg(dev, "%s: failed at: %d rc: %d cmd_rc: %d\n",
@@ -2869,15 +3070,15 @@ static int nfit_ctl_test(struct device *dev)


 	/* test extended status for get_config_size results in failure */
-	cmd_size = sizeof(cmds.cfg_size);
-	cmds.cfg_size = (struct nd_cmd_get_config_size) {
+	cmd_size = sizeof(cmd.cfg_size);
+	cmd.cfg_size = (struct nd_cmd_get_config_size) {
 		.status = 1 << 16,
 	};
-	rc = setup_result(cmds.buf, cmd_size);
+	rc = setup_result(cmd.buf, cmd_size);
 	if (rc)
 		return rc;
 	rc = acpi_nfit_ctl(&acpi_desc->nd_desc, nvdimm, ND_CMD_GET_CONFIG_SIZE,
-			cmds.buf, cmd_size, &cmd_rc);
+			cmd.buf, cmd_size, &cmd_rc);

 	if (rc < 0 || cmd_rc >= 0) {
 		dev_dbg(dev, "%s: failed at: %d rc: %d cmd_rc: %d\n",
@@ -2886,16 +3087,46 @@ static int nfit_ctl_test(struct device *dev)
 	}

 	/* test clear error */
-	cmd_size = sizeof(cmds.clear_err);
-	cmds.clear_err = (struct nd_cmd_clear_error) {
+	cmd_size = sizeof(cmd.clear_err);
+	cmd.clear_err = (struct nd_cmd_clear_error) {
 		.length = 512,
 		.cleared = 512,
 	};
-	rc = setup_result(cmds.buf, cmd_size);
+	rc = setup_result(cmd.buf, cmd_size);
 	if (rc)
 		return rc;
 	rc = acpi_nfit_ctl(&acpi_desc->nd_desc, NULL, ND_CMD_CLEAR_ERROR,
-			cmds.buf, cmd_size, &cmd_rc);
+			cmd.buf, cmd_size, &cmd_rc);
+	if (rc < 0 || cmd_rc) {
+		dev_dbg(dev, "%s: failed at: %d rc: %d cmd_rc: %d\n",
+			__func__, __LINE__, rc, cmd_rc);
+		return -EIO;
+	}
+
+	/* test firmware activate bus info */
+	cmd_size = sizeof(cmd.fwa_info);
+	cmd = (struct nfit_ctl_test_cmd) {
+		.pkg = {
+			.nd_command = NVDIMM_BUS_INTEL_FW_ACTIVATE_BUSINFO,
+			.nd_family = NVDIMM_BUS_FAMILY_INTEL,
+			.nd_size_out = cmd_size,
+			.nd_fw_size = cmd_size,
+		},
+		.fwa_info = {
+			.state = ND_INTEL_FWA_IDLE,
+			.capability = ND_INTEL_BUS_FWA_CAP_FWQUIESCE
+				| ND_INTEL_BUS_FWA_CAP_OSQUIESCE,
+			.activate_tmo = 1,
+			.cpu_quiesce_tmo = 1,
+			.io_quiesce_tmo = 1,
+			.max_quiesce_tmo = 1,
+		},
+	};
+	rc = setup_result(cmd.buf, cmd_size);
+	if (rc)
+		return rc;
+	rc = acpi_nfit_ctl(&acpi_desc->nd_desc, NULL, ND_CMD_CALL,
+			&cmd, sizeof(cmd.pkg) + cmd_size, &cmd_rc);
 	if (rc < 0 || cmd_rc) {
 		dev_dbg(dev, "%s: failed at: %d rc: %d cmd_rc: %d\n",
 			__func__, __LINE__, rc, cmd_rc);