Commit Graph

782397 Commits

Author SHA1 Message Date
Nathan Fontenot
cd24e457fd powerpc/pseries: Remove prrn_work workqueue
When a PRRN event is received we are already running in a worker
thread. Instead of spawning off another worker thread on the prrn_work
workqueue to handle the PRRN event we can just call the PRRN handler
routine directly.

With this update we can also pass the scope variable for the PRRN
event directly to the handler instead of it being a global variable.
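
A rough sketch of the resulting flow (illustrative only; pseries_devicetree_update()
appears in the trace below, while the wrapper name and the topology refresh call
are assumptions):

  /* Sketch: handle the PRRN event directly in the current worker instead
   * of scheduling another work item; 'scope' is now passed as an argument
   * rather than read from a global. */
  static void handle_prrn_event(s32 scope)
  {
          pseries_devicetree_update(-scope);   /* update the device tree */
          numa_update_cpu_topology(false);     /* then refresh CPU topology */
  }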

This patch fixes the following oops message we are seeing in PRRN testing:

  Oops: Bad kernel stack pointer, sig: 6 [#1]
  SMP NR_CPUS=2048 NUMA pSeries
  Modules linked in: nfsv3 nfs_acl rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver nfs lockd grace sunrpc fscache binfmt_misc reiserfs vfat fat rpadlpar_io(X) rpaphp(X) tcp_diag udp_diag inet_diag unix_diag af_packet_diag netlink_diag af_packet xfs libcrc32c dm_service_time ibmveth(X) ses enclosure scsi_transport_sas rtc_generic btrfs xor raid6_pq sd_mod ibmvscsi(X) scsi_transport_srp ipr(X) libata sg dm_multipath dm_mod scsi_dh_rdac scsi_dh_emc scsi_dh_alua scsi_mod autofs4
  Supported: Yes, External                                                     54
  CPU: 7 PID: 18967 Comm: kworker/u96:0 Tainted: G                 X 4.4.126-94.22-default #1
  Workqueue: pseries hotplug workque pseries_hp_work_fn
  task: c000000775367790 ti: c00000001ebd4000 task.ti: c00000070d140000
  NIP: 0000000000000000 LR: 000000001fb3d050 CTR: 0000000000000000
  REGS: c00000001ebd7d40 TRAP: 0700   Tainted: G                 X  (4.4.126-94.22-default)
  MSR: 8000000102081000 <41,VEC,ME5  CR: 28000002  XER: 20040018   4
  CFAR: 000000001fb3d084 40 419   1                                3
  GPR00: 000000000000000040000000000010007 000000001ffff400 000000041fffe200
  GPR04: 000000000000008050000000000000000 000000001fb15fa8 0000000500000500
  GPR08: 000000000001f40040000000000000001 0000000000000000 000005:5200040002
  GPR12: 00000000000000005c000000007a05400 c0000000000e89f8 000000001ed9f668
  GPR16: 000000001fbeff944000000001fbeff94 000000001fb545e4 0000006000000060
  GPR20: ffffffffffffffff4ffffffffffffffff 0000000000000000 0000000000000000
  GPR24: 00000000000000005400000001fb3c000 0000000000000000 000000001fb1b040
  GPR28: 000000001fb240004000000001fb440d8 0000000000000008 0000000000000000
  NIP [0000000000000000] 5         (null)
  LR [000000001fb3d050] 031fb3d050
  Call Trace:            4
  Instruction dump:      4                                       5:47 12    2
  XXXXXXXX XXXXXXXX XXXXX4XX XXXXXXXX XXXXXXXX XXXXXXXX XXXXXXXX XXXXXXXX
  XXXXXXXX XXXXXXXX XXXXX5XX XXXXXXXX 60000000 60000000 60000000 60000000
  ---[ end trace aa5627b04a7d9d6b ]---                                       3NMI watchdog: BUG: soft lockup - CPU#27 stuck for 23s! [kworker/27:0:13903]
  Modules linked in: nfsv3 nfs_acl rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver nfs lockd grace sunrpc fscache binfmt_misc reiserfs vfat fat rpadlpar_io(X) rpaphp(X) tcp_diag udp_diag inet_diag unix_diag af_packet_diag netlink_diag af_packet xfs libcrc32c dm_service_time ibmveth(X) ses enclosure scsi_transport_sas rtc_generic btrfs xor raid6_pq sd_mod ibmvscsi(X) scsi_transport_srp ipr(X) libata sg dm_multipath dm_mod scsi_dh_rdac scsi_dh_emc scsi_dh_alua scsi_mod autofs4
  Supported: Yes, External
  CPU: 27 PID: 13903 Comm: kworker/27:0 Tainted: G      D          X 4.4.126-94.22-default #1
  Workqueue: events prrn_work_fn
  task: c000000747cfa390 ti: c00000074712c000 task.ti: c00000074712c000
  NIP: c0000000008002a8 LR: c000000000090770 CTR: 000000000032e088
  REGS: c00000074712f7b0 TRAP: 0901   Tainted: G      D          X  (4.4.126-94.22-default)
  MSR: 8000000100009033 <SF,EE,ME,IR,DR,RI,LE>  CR: 22482044  XER: 20040000
  CFAR: c0000000008002c4 SOFTE: 1
  GPR00: c000000000090770 c00000074712fa30 c000000000f09800 c000000000fa1928 6:02
  GPR04: c000000775f5e000 fffffffffffffffe 0000000000000001 c000000000f42db8
  GPR08: 0000000000000001 0000000080000007 0000000000000000 0000000000000000
  GPR12: 8006210083180000 c000000007a14400
  NIP [c0000000008002a8] _raw_spin_lock+0x68/0xd0
  LR [c000000000090770] mobility_rtas_call+0x50/0x100
  Call Trace:            59                                        5
  [c00000074712fa60] [c000000000090770] mobility_rtas_call+0x50/0x100
  [c00000074712faf0] [c000000000090b08] pseries_devicetree_update+0xf8/0x530
  [c00000074712fc20] [c000000000031ba4] prrn_work_fn+0x34/0x50
  [c00000074712fc40] [c0000000000e0390] process_one_work+0x1a0/0x4e0
  [c00000074712fcd0] [c0000000000e0870] worker_thread+0x1a0/0x6105:57       2
  [c00000074712fd80] [c0000000000e8b18] kthread+0x128/0x150
  [c00000074712fe30] [c0000000000096f8] ret_from_kernel_thread+0x5c/0x64
  Instruction dump:
  2c090000 40c20010 7d40192d 40c2fff0 7c2004ac 2fa90000 40de0018 5:540030   3
  e8010010 ebe1fff8 7c0803a6 4e800020 <7c210b78> e92d0000 89290009 792affe3

Signed-off-by: John Allen <jallen@linux.ibm.com>
Signed-off-by: Haren Myneni <haren@us.ibm.com>
Signed-off-by: Nathan Fontenot <nfont@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-09-19 22:08:12 +10:00
Nathan Fontenot
063b8b1251 powerpc/pseries/memory-hotplug: Only update DT once per memory DLPAR request
The updates to powerpc numa and memory hotplug code now use the
in-kernel LMB array instead of the device tree. This change allows the
pseries memory DLPAR code to only update the device tree once after
successfully handling a DLPAR request.

Prior to the in-kernel LMB array, the numa code looked up the affinity
for memory being added in the device tree; the code now looks this up
in the LMB array. This change means the memory hotplug code can just
update the affinity for an LMB in the LMB array instead of updating
the device tree.

This also provides a saving in kernel memory. When updating the
device tree, old properties are never freed since there is no
usecount on properties. This behavior leads to a new copy of the
property being allocated every time an LMB is added or removed (i.e. a
request to add 100 LMBs creates 100 new copies of the property). With
this update only a single new property is created when a DLPAR request
completes successfully.

Signed-off-by: Nathan Fontenot <nfont@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-09-19 22:08:12 +10:00
Nicholas Piggin
6977f95e63 powerpc: avoid -mno-sched-epilog on GCC 4.9 and newer
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Joel Stanley <joel@jms.id.au>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-09-19 22:01:56 +10:00
Nicholas Piggin
2a056f58fd powerpc: consolidate -mno-sched-epilog into FTRACE flags
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Joel Stanley <joel@jms.id.au>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-09-19 22:01:56 +10:00
Nicholas Piggin
f2910f0e68 powerpc: remove old GCC version checks
GCC 4.6 is the minimum supported now.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Joel Stanley <joel@jms.id.au>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-09-19 22:01:56 +10:00
Nicholas Piggin
89ca4e126a powerpc/64s/hash: Add a SLB preload cache
When switching processes, currently all user SLBEs are cleared, and a
few (exec_base, pc, and stack) are preloaded. In trivial testing with
small apps, this tends to miss the heap and low 256MB segments, and it
will also miss commonly accessed segments on large memory workloads.

Add a simple round-robin preload cache that just inserts the last SLB
miss into the head of the cache and preloads those at context switch
time. Every 256 context switches the oldest entry is removed from the
cache, shrinking it so that fewer slbmte instructions are needed if
entries go unused.

Much more could go into this, including into the SLB entry reclaim
side to track some LRU information etc, which would require a study of
large memory workloads. But this is a simple thing we can do now that
is an obvious win for common workloads.

With the full series, process switching speed on the context_switch
benchmark on POWER9/hash (with kernel speculation security measures
disabled) increases from 140K/s to 178K/s (27%).

POWER8 does not change much (within 1%); it's unclear why it does not
see a big gain like POWER9.

Booting to busybox init with 256MB segments has SLB misses go down
from 945 to 69, and with 1T segments from 900 to 21. These could almost all
be eliminated by preloading a bit more carefully with ELF binary
loading.
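
As an illustration of the round-robin idea (a hedged sketch with hypothetical
structure and helper names, not the actual kernel implementation):

  /* Hypothetical per-task preload cache. */
  #define SLB_PRELOAD_NR 16

  struct slb_preload_cache {
          unsigned long esid[SLB_PRELOAD_NR];  /* ESIDs of recent SLB misses */
          unsigned int head;                   /* next slot to overwrite */
          unsigned int nr;                     /* number of valid entries */
  };

  /* Called from the SLB miss path: remember the segment that just missed. */
  static void preload_add(struct slb_preload_cache *c, unsigned long ea)
  {
          c->esid[c->head] = ea >> 28;         /* 256MB segment index, for the sketch */
          c->head = (c->head + 1) % SLB_PRELOAD_NR;
          if (c->nr < SLB_PRELOAD_NR)
                  c->nr++;
  }

  /* Called at context switch: preload the cached segments into the SLB. */
  static void preload_replay(struct slb_preload_cache *c)
  {
          unsigned int i;

          for (i = 0; i < c->nr; i++)
                  slb_allocate_user(c->esid[i] << 28);  /* hypothetical insert helper */
  }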

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-09-19 22:01:56 +10:00
Nicholas Piggin
2e1626744e powerpc/64s/hash: provide arch_setup_exec hooks for hash slice setup
This will be used by the SLB code in the next patch, but for now this
sets the slb_addr_limit to the correct size for 32-bit tasks.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-09-19 22:01:56 +10:00
Nicholas Piggin
e83cbf7fb7 powerpc/64s: xmon do not dump hash fields when using radix mode
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-09-19 22:01:56 +10:00
Nicholas Piggin
655deecf67 powerpc/64s/hash: SLB allocation status bitmaps
Add 32-entry bitmaps to track the allocation status of the first 32
SLB entries, and whether they are user or kernel entries. These are
used to allocate free SLB entries first, before resorting to the round
robin allocator.
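
A minimal sketch of how such bitmaps could drive allocation (illustrative;
the field and helper names are assumptions):

  /* Pick a free SLB slot from a 32-entry bitmap before falling back to
   * the round-robin pointer. */
  struct slb_alloc_state {
          unsigned int used;     /* bit n set => SLB entry n is valid */
          unsigned int kernel;   /* bit n set => entry n maps a kernel segment */
          unsigned int rr;       /* round-robin pointer */
  };

  static unsigned int slb_pick_slot(struct slb_alloc_state *s)
  {
          unsigned int free = ~s->used;

          if (free)
                  return __builtin_ctz(free);   /* lowest numbered free entry */

          /* No free entry: overwrite in round-robin order. */
          return s->rr++ % 32;
  }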

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-09-19 22:01:56 +10:00
Nicholas Piggin
8fed04d0f6 powerpc/64s/hash: remove user SLB data from the paca
User SLB mapping data is copied into the PACA from the mm->context so
it can be accessed by the SLB miss handlers.

After the C conversion, SLB miss handlers now run with relocation on,
and user SLB misses are able to take recursive kernel SLB misses, so
the user SLB mapping data can be removed from the paca and accessed
directly.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-09-19 22:01:46 +10:00
Nicholas Piggin
5e46e29e6a powerpc/64s/hash: convert SLB miss handlers to C
This patch moves SLB miss handlers completely to C, using the standard
exception handler macros to set up the stack and branch to C.

This can be done because the segment containing the kernel stack is
always bolted, so accessing it with relocation on will not cause an
SLB exception.

Arbitrary kernel memory may not be accessed when handling kernel space
SLB misses, so care should be taken there. However user SLB misses can
access any kernel memory, which can be used to move some fields out of
the paca (in later patches).

User SLB misses could quite easily reconcile IRQs and set up a first
class kernel environment and exit via ret_from_except, however that
doesn't seem to be necessary at the moment, so we only do that if a
bad fault is encountered.

[ Credit to Aneesh for bug fixes, error checks, and improvements to bad
  address handling, etc ]

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>

Since RFC:
- Added MSR[RI] handling
- Fixed up a register loss bug exposed by irq tracing (Aneesh)
- Reject misses outside the defined kernel regions (Aneesh)
- Added several more sanity checks and error handling (Aneesh); we may
  look at consolidating these tests and tightening up the code, but for
  a first pass we decided it's better to check carefully.

Since v1:
- Fixed SLB cache corruption (Aneesh)
- Fixed untidy SLBE allocation "leak" in get_vsid error case
- Now survives some stress testing on real hardware

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-09-19 21:59:41 +10:00
Nicholas Piggin
82d8f4c22f powerpc/64s/hash: Use POWER9 SLBIA IH=3 variant in switch_slb
POWER9 introduces SLBIA IH=3, which invalidates all SLB entries and
associated lookaside information that have a class value of 1, which
Linux assigns to user addresses. This matches what switch_slb wants,
and allows a simple fast implementation that avoids the slb_cache
complexity.

As a side-effect, the POWER5 < DD2.1 SLB invalidation workaround is
also avoided on POWER9.

Process context switching rate is improved by about 2.2% for a small
process that hits the SLB cache, which is the best case for the current
code.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-09-19 21:59:44 +10:00
Nicholas Piggin
5141c182d7 powerpc/64s/hash: Use POWER6 SLBIA IH=1 variant in switch_slb
The SLBIA IH=1 hint will remove all non-zero SLBEs, but, on processors
that support the hint (e.g., POWER6 and newer), only invalidate ERAT
entries associated with a class value of 1, which Linux assigns to user
addresses.

This prevents kernel ERAT entries from being invalidated when
context switching (if the thread faulted in more than 8 user SLBEs).

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-09-19 21:59:41 +10:00
Nicholas Piggin
85376e2a17 powerpc/64s/hash: remove the vmalloc segment from the bolted SLB
Remove the vmalloc segment from bolted SLBEs. It is not required to
be bolted, and seems like it was added to help pre-load the SLB on
context switch. However there are now other segments like the vmemmap
segment and non-zero node memory that often take misses after a context
switch, so it is better to solve this in a more general way.

A subsequent change will track free SLB entries and use those rather
than round-robin overwriting valid entries, which makes it far less
likely for kernel SLBEs to be evicted after they are installed.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-09-19 21:59:41 +10:00
Nicholas Piggin
8b92887ced powerpc/64s/hash: move POWER5 < DD2.1 slbie workaround where it is needed
The POWER5 < DD2.1 issue is that slbie needs to be issued more than
once. It came in with this change:

ChangeSet@1.1608, 2004-04-29 07:12:31-07:00, david@gibson.dropbear.id.au
  [PATCH] POWER5 erratum workaround

  Early POWER5 revisions (<DD2.1) have a problem requiring slbie
  instructions to be repeated under some circumstances.  The patch below
  adds a workaround (patch made by Anton Blanchard).

(aka. 3e4520f7605243abf66a7ccd3d2e49e48e8c0483 in the full history tree)

The extra slbie in switch_slb is done even for the case where slbia is
called (slb_flush_and_rebolt). I don't believe that is required
because there are other slb_flush_and_rebolt callers which do not
issue the workaround slbie, which would be broken if it was required.

It also seems to be fine inside the isync with the first slbie, as it
is in the kernel stack switch code.

So move this workaround to where it is required. This is not much of
an optimisation because this is not a fast path, but it makes the code
more understandable and neater.
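
As a sketch of the end result (assuming the existing cpu_has_feature() /
CPU_FTR_ARCH_207S helpers; this is not the literal diff), the extra slbie
becomes conditional at the point where the user entry is invalidated:

  asm volatile("slbie %0" : : "r" (slbie_data));
  /* POWER5 < DD2.1 erratum: slbie may need to be repeated. ISA 2.07
   * (POWER8 and later) CPUs do not need the extra instruction. */
  if (!cpu_has_feature(CPU_FTR_ARCH_207S))
          asm volatile("slbie %0" : : "r" (slbie_data));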

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
[mpe: Retain slbie_data initialisation to avoid compiler warning]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-09-19 21:59:41 +10:00
Nicholas Piggin
505ea82eab powerpc/64s/hash: avoid the POWER5 < DD2.1 slb invalidate workaround on POWER8/9
I only have POWER8/9 to test, so just remove it for those.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-09-19 21:59:41 +10:00
Nicholas Piggin
09b4438db1 powerpc/64s/hash: Fix stab_rr off by one initialization
This causes SLB allocation to start 1 beyond the start of the SLB.
There is no real problem because after it wraps it starts behaving
properly, it's just surprising to see when looking at SLB traces.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-09-19 21:59:41 +10:00
Mahesh Salgaonkar
db7d31ac04 powernv/pseries: consolidate code for mce early handling.
Now that other platforms also implement a real mode MCE handler,
let's consolidate the code by sharing the existing powernv machine check
early code. Rename machine_check_powernv_early to
machine_check_common_early and reuse the code.

Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-09-19 21:59:41 +10:00
Mahesh Salgaonkar
c6d15258cd powerpc/pseries: Dump the SLB contents on SLB MCE errors.
If we get a machine check exception due to SLB errors then dump the
current SLB contents, which will be very helpful in debugging the
root cause of the SLB errors. Introduce a dedicated per-cpu buffer to hold
the faulty SLB entries. In real mode the MCE handler saves the old SLB
contents into this buffer, accessible through the paca, and prints them out
later in virtual mode.

With this patch the console will log SLB contents like below on SLB MCE
errors:

[  507.297236] SLB contents of cpu 0x1
[  507.297237] Last SLB entry inserted at slot 16
[  507.297238] 00 c000000008000000 400ea1b217000500
[  507.297239]   1T  ESID=   c00000  VSID=      ea1b217 LLP:100
[  507.297240] 01 d000000008000000 400d43642f000510
[  507.297242]   1T  ESID=   d00000  VSID=      d43642f LLP:110
[  507.297243] 11 f000000008000000 400a86c85f000500
[  507.297244]   1T  ESID=   f00000  VSID=      a86c85f LLP:100
[  507.297245] 12 00007f0008000000 4008119624000d90
[  507.297246]   1T  ESID=       7f  VSID=      8119624 LLP:110
[  507.297247] 13 0000000018000000 00092885f5150d90
[  507.297247]  256M ESID=        1  VSID=   92885f5150 LLP:110
[  507.297248] 14 0000010008000000 4009e7cb50000d90
[  507.297249]   1T  ESID=        1  VSID=      9e7cb50 LLP:110
[  507.297250] 15 d000000008000000 400d43642f000510
[  507.297251]   1T  ESID=   d00000  VSID=      d43642f LLP:110
[  507.297252] 16 d000000008000000 400d43642f000510
[  507.297253]   1T  ESID=   d00000  VSID=      d43642f LLP:110
[  507.297253] ----------------------------------
[  507.297254] SLB cache ptr value = 3
[  507.297254] Valid SLB cache entries:
[  507.297255] 00 EA[0-35]=    7f000
[  507.297256] 01 EA[0-35]=        1
[  507.297257] 02 EA[0-35]=     1000
[  507.297257] Rest of SLB cache entries:
[  507.297258] 03 EA[0-35]=    7f000
[  507.297258] 04 EA[0-35]=        1
[  507.297259] 05 EA[0-35]=     1000
[  507.297260] 06 EA[0-35]=       12
[  507.297260] 07 EA[0-35]=    7f000

Suggested-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Suggested-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-09-19 21:59:41 +10:00
Mahesh Salgaonkar
8f0b80561f powerpc/pseries: Display machine check error details.
Extract the MCE error details from the RTAS extended log and display
them on the console.

With this patch you should now see mce logs like below:

[  142.371818] Severe Machine check interrupt [Recovered]
[  142.371822]   NIP [d00000000ca301b8]: init_module+0x1b8/0x338 [bork_kernel]
[  142.371822]   Initiator: CPU
[  142.371823]   Error type: SLB [Multihit]
[  142.371824]     Effective address: d00000000ca70000

Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-09-19 21:59:41 +10:00
Mahesh Salgaonkar
a43c159042 powerpc/pseries: Flush SLB contents on SLB MCE errors.
On pseries, as of today the system crashes if we get a machine check
exception due to SLB errors. These are soft errors and can be fixed
by flushing the SLBs, so the kernel can continue to function instead of
crashing. We do this in real mode before turning on the MMU, otherwise
we would run into nested machine checks. This patch now fetches the
RTAS error log in real mode and flushes the SLBs on SLB/ERAT errors.

Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
Signed-off-by: Michal Suchanek <msuchanek@suse.com>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-09-19 21:59:22 +10:00
Mahesh Salgaonkar
04fce21c9d powerpc/pseries: Define MCE error event section.
On pseries, the machine check error details are part of the RTAS extended
event log, passed under the Machine check exception section. This patch adds
the definition of the RTAS MCE event section and related helper
functions.

Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-09-19 21:58:09 +10:00
Breno Leitao
44d947eff1 selftests/powerpc: Do not fail with reschedule
There are cases where the test is not expecting to have the transaction
aborted, but the test process might have been rescheduled, either at the
OS level or by KVM (if it is running in a KVM guest). The reschedule
causes a treclaim/recheckpoint which dooms the transaction, aborting it
as soon as the process is rescheduled back onto the CPU. This might cause
the test to fail, but it is not a failure in essence.

If that is the case, TEXASR[FC] is indicated with either
TM_CAUSE_RESCHEDULE or TM_CAUSE_KVM_RESCHEDULE for KVM interruptions.

In this scenario, ignore these two failure causes and avoid having the
whole test return failure.
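
A hedged sketch of the check (TM_CAUSE_RESCHEDULE and TM_CAUSE_KVM_RESCHEDULE
are the real cause codes; the helper and the TEXASR plumbing are illustrative):

  /* 'texasr' is the TEXASR value captured after the abort; the failure
   * cause lives in the top byte, TEXASR[FC]. */
  static int abort_is_reschedule(unsigned long texasr)
  {
          unsigned long cause = texasr >> 56;

          return cause == TM_CAUSE_RESCHEDULE ||
                 cause == TM_CAUSE_KVM_RESCHEDULE;
  }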

Signed-off-by: Breno Leitao <leitao@debian.org>
Reviewed-by: Gustavo Romero <gromero@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-09-19 21:58:09 +10:00
Breno Leitao
984ecdd68d powerpc/iommu: Avoid derefence before pointer check
The tbl pointer is being dereferenced by IOMMU_PAGE_SIZE() prior to the
check that it is not NULL.

Just move the dereference to after the check, where 'tbl' is guaranteed
not to be NULL.
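
The shape of the fix is the classic one, sketched below (the helper name is
hypothetical; IOMMU_PAGE_SIZE() and struct iommu_table are existing kernel
definitions):

  /* Sketch: check the pointer before computing anything derived from it. */
  static long check_iommu_addr(struct iommu_table *tbl, unsigned long addr)
  {
          unsigned long mask;

          if (!tbl)
                  return -ENODEV;

          mask = IOMMU_PAGE_SIZE(tbl) - 1;     /* safe: tbl is known non-NULL */
          return (addr & mask) ? -EINVAL : 0;  /* e.g. require page alignment */
  }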

Signed-off-by: Breno Leitao <leitao@debian.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-09-19 21:58:09 +10:00
Breno Leitao
8ac9e5bfd8 powerpc/xive: Use xive_cpu->chip_id instead of looking it up again
Function xive_native_get_ipi() might use chip_id without it being
initialized, if the CPU node is not found, as reported by smatch:

  error: uninitialized symbol 'chip_id'

As suggested by Cédric, we can use xc->chip_id instead of consulting
the device tree for chip id, which is safe since xive_prepare_cpu()
should have initialized ->chip_id by the time xive_native_get_ipi() is
called.

Signed-off-by: Breno Leitao <leitao@debian.org>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
[mpe: Tweak change log]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-09-19 21:58:09 +10:00
Christophe Lombard
6f8e45f7eb ocxl: Fix access to the AFU Descriptor Data
The AFU Information DVSEC capability is a means to extract common,
general information about all of the AFUs associated with a Function
independent of the specific functionality that each AFU provides.
Writing to the AFU Index field allows access to the descriptor data
for each AFU.

With the current code, we are not able to access this data
when the index >= 1 because we are writing to the wrong location.
All requests for the data of each AFU point to those of AFU 0,
which could have an impact when using a card with more than one AFU per
function.

This patch fixes the access to the AFU Descriptor Data indexed by the
AFU Info Index field.

Fixes: 5ef3166e8a ("ocxl: Driver code for 'generic' opencapi devices")
Cc: stable <stable@vger.kernel.org>     # 4.16
Signed-off-by: Christophe Lombard <clombard@linux.vnet.ibm.com>

Acked-by: Frederic Barrat <fbarrat@linux.vnet.ibm.com>
Acked-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-09-19 21:58:09 +10:00
Rashmica Gupta
3f7daf3d75 powerpc/memtrace: Remove memory in chunks
When hot-removing memory release_mem_region_adjustable() splits iomem
resources if they are not the exact size of the memory being
hot-deleted. Adding this memory back to the kernel adds a new resource.

Eg a node has memory 0x0 - 0xfffffffff. Hot-removing 1GB from
0xf40000000 results in the single resource 0x0-0xfffffffff being split
into two resources: 0x0-0xf3fffffff and 0xf80000000-0xfffffffff.

When we hot-add the memory back we now have three resources:
0x0-0xf3fffffff, 0xf40000000-0xf7fffffff, and 0xf80000000-0xfffffffff.

This is an issue if we try to remove some memory that overlaps
resources. Eg when trying to remove 2GB at address 0xf40000000,
release_mem_region_adjustable() fails as it expects the chunk of memory
to be within the boundaries of a single resource. We then get the
warning: "Unable to release resource" and attempting to use memtrace
again gives us this error: "bash: echo: write error: Resource
temporarily unavailable"

This patch makes memtrace remove memory in chunks that are always the
same size from an address that is always equal to end_of_memory -
n*size, for some n. So hotremoving and hotadding memory of different
sizes will now not attempt to remove memory that spans multiple
resources.
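
Conceptually (a hedged sketch with a hypothetical per-chunk helper, not the
actual memtrace code):

  /* Remove 'size' bytes as fixed-size chunks, each starting at
   * end_of_memory - n * chunk_size, so a chunk never straddles the iomem
   * resources created by earlier removals and re-adds. */
  static int memtrace_remove_sketch(u64 chunk_size, u64 size)
  {
          u64 end = memblock_end_of_DRAM();
          u64 n, chunks = size / chunk_size;

          for (n = 1; n <= chunks; n++) {
                  if (remove_one_chunk(end - n * chunk_size, chunk_size))
                          return -EBUSY;   /* remove_one_chunk() is hypothetical */
          }
          return 0;
  }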

Signed-off-by: Rashmica Gupta <rashmica.g@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-09-19 21:58:02 +10:00
Michael Neuling
51c3c62b58 powerpc: Avoid code patching freed init sections
This stops us from doing code patching in init sections after they've
been freed.

In this chain:
  kvm_guest_init() ->
    kvm_use_magic_page() ->
      fault_in_pages_readable() ->
	 __get_user() ->
	   __get_user_nocheck() ->
	     barrier_nospec();

We have a code patching location at barrier_nospec() and
kvm_guest_init() is an init function. This whole chain gets inlined,
so when we free the init section (hence kvm_guest_init()), this code
goes away and hence should no longer be patched.

We have seen this as userspace memory corruption when using a memory
checker while doing partition migration testing on powervm (this
starts the code patching post migration via
/sys/kernel/mobility/migration). In theory, it could also happen when
using /sys/kernel/debug/powerpc/barrier_nospec.
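
The guard amounts to something like the sketch below (init_section_contains()
is an existing generic helper; the init_mem_is_free flag and the wrapper are
illustrative):

  /* Don't patch addresses in init text once the init sections are freed. */
  static bool patch_target_is_valid(unsigned int *addr)
  {
          if (init_mem_is_free && init_section_contains(addr, 4))
                  return false;   /* init code is gone, skip the patch */
          return true;
  }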

Cc: stable@vger.kernel.org # 4.13+
Signed-off-by: Michael Neuling <mikey@neuling.org>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-09-18 22:42:54 +10:00
Anton Blanchard
be54c1216f powerpc/64: Remove static branch hints from memset()
Static branch hints override dynamic branch prediction on recent
POWER CPUs. We should only use them when we are overwhelmingly
sure of the direction.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-09-17 21:17:25 +10:00
Laurent Dufour
ba2dd8a26b powerpc/pseries/mm: call H_BLOCK_REMOVE
This hypervisor call allows removing up to 8 PTEs with only one call to
tlbie.

The virtual pages must all be within the same naturally aligned 8-page
virtual address block and have the same page and segment size encodings.
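
A rough sketch of the invocation (assuming the generic plpar_hcall9() wrapper;
the parameter layout shown is illustrative, not the exact hcall ABI):

  /* One H_BLOCK_REMOVE hcall invalidating up to 8 PTEs within the same
   * naturally aligned 8-page block. */
  static long block_remove_sketch(unsigned long blkno, unsigned long *slots,
                                  int count)
  {
          unsigned long retbuf[PLPAR_HCALL9_BUFSIZE];
          unsigned long param[9] = { blkno };
          int i;

          for (i = 0; i < count && i < 8; i++)
                  param[i + 1] = slots[i];

          return plpar_hcall9(H_BLOCK_REMOVE, retbuf,
                              param[0], param[1], param[2], param[3],
                              param[4], param[5], param[6], param[7],
                              param[8]);
  }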

Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-09-17 21:17:25 +10:00
Laurent Dufour
0effa488dc powerpc/pseries/mm: factorize PTE slot computation
This part of the code will also be called when dealing with H_BLOCK_REMOVE.

Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-09-17 21:17:25 +10:00
Laurent Dufour
5600fbe340 powerpc/pseries/mm: Introducing FW_FEATURE_BLOCK_REMOVE
This feature tells whether the hcall H_BLOCK_REMOVE is available.
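
Usage would look roughly like this (firmware_has_feature() is the existing
helper; both flush routines here are placeholders):

  /* Only take the bulk-invalidate path when firmware advertises it. */
  static void flush_range_sketch(unsigned long vpn, int npages, int psize,
                                 int ssize)
  {
          if (firmware_has_feature(FW_FEATURE_BLOCK_REMOVE))
                  do_block_remove(vpn, npages, psize, ssize);
          else
                  do_single_remove(vpn, npages, psize, ssize);
  }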

Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-09-17 21:17:25 +10:00
Breno Leitao
96695563ce powerpc/tm: Fix HTM documentation
This patch simply fixes part of the documentation of the HTM code.

It fixes references to old fields that were renamed in commit
000ec280e3 ("powerpc: tm: Rename transct_(*) to ck(\1)_state")

It also better documents the flow after commit eb5c3f1c86 ("powerpc:
Always save/restore checkpointed regs during treclaim/trecheckpoint"),
where tm_recheckpoint can recheckpoint what is in ck{fp,vr}_state
blindly.

Signed-off-by: Breno Leitao <leitao@debian.org>
Acked-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-09-17 21:17:25 +10:00
Breno Leitao
693b31b2fc powerpc/selftests: Wait all threads to join
The tm-tmspr test might exit before all threads stop executing, because it
just waits for the very last thread created to join before proceeding/exiting.

This patch makes sure that all threads that were created are joined before
proceeding/exiting.

It also guarantees that the number of threads being created is equal
to thread_num.
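
The fix is essentially the standard create-all-then-join-all pattern, sketched
here (not the exact test code):

  #include <pthread.h>

  /* Create exactly 'thread_num' threads and join every one of them before
   * proceeding; 'worker_fn' stands in for the test body. */
  static int run_all_threads(int thread_num, void *(*worker_fn)(void *))
  {
          pthread_t threads[thread_num];
          int i, created = 0;

          for (i = 0; i < thread_num; i++) {
                  if (pthread_create(&threads[i], NULL, worker_fn, NULL))
                          break;
                  created++;
          }

          for (i = 0; i < created; i++)
                  pthread_join(threads[i], NULL);

          return created == thread_num ? 0 : -1;
  }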

Signed-off-by: Breno Leitao <leitao@debian.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-09-17 21:17:25 +10:00
Joel Stanley
b0dc0f8618 powerpc/powernv: Don't select the cpufreq governors
Deciding which governors should be built into the kernel can be left to
users to configure.

Fixes: 81f359027a ("cpufreq: powernv: Select CPUFreq related Kconfig options for powernv")
Signed-off-by: Joel Stanley <joel@jms.id.au>
[mpe: Update powernv/ppc64 defconfigs to enable them by default]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-09-17 21:16:26 +10:00
Michael Neuling
f14040bca8 KVM: PPC: Book3S HV: Fix guest r11 corruption with POWER9 TM workarounds
When we come into the softpatch handler (0x1500), we use r11 to store
the HSRR0 for later use by the denorm handler.

We also use the softpatch handler for the TM workarounds for
POWER9. Unfortunately, in kvmppc_interrupt_hv we later store r11 out
to the vcpu assuming it's still what we got from userspace.

This causes r11 to be corrupted in the VCPU and hence when we restore
the guest, we get a corrupted r11. We've seen this when running TM
tests inside guests on P9.

This fixes the problem by only touching r11 in the denorm case.

Fixes: 4bb3c7a020 ("KVM: PPC: Book3S HV: Work around transactional memory bugs in POWER9")
Cc: <stable@vger.kernel.org> # 4.17+
Tested-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Reviewed-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-09-17 16:23:28 +10:00
Alan Modra
56d20861c0 powerpc/vdso: Correct call frame information
Call Frame Information is used by gdb for back-traces and inserting
breakpoints on function return for the "finish" command.  This failed
when inside __kernel_clock_gettime.  More concerning than difficulty
debugging is that CFI is also used by stack frame unwinding code to
implement exceptions.  If you have an app that needs to handle
asynchronous exceptions for some reason, and you are unlucky enough to
get one inside the VDSO time functions, your app will crash.

What's wrong:  There is control flow in __kernel_clock_gettime that
reaches label 99 without saving lr in r12.  CFI info however is
interpreted by the unwinder without reference to control flow: It's a
simple matter of "Execute all the CFI opcodes up to the current
address".  That means the unwinder thinks r12 contains the return
address at label 99.  Disabuse it of that notion by resetting CFI for
the return address at label 99.

Note that the ".cfi_restore lr" could have gone anywhere from the
"mtlr r12" a few instructions earlier to the instruction at label 99.
I put the CFI as late as possible, because in general that's best
practice (and if possible grouped with other CFI in order to reduce
the number of CFI opcodes executed when unwinding).  Using r12 as the
return address is perfectly fine after the "mtlr r12" since r12 on
that code path still contains the return address.

__get_datapage also has a CFI error.  That function temporarily saves
lr in r0, and reflects that fact with ".cfi_register lr,r0".  A later
use of r0 means the CFI at that point isn't correct, as r0 no longer
contains the return address.  Fix that too.

Signed-off-by: Alan Modra <amodra@gmail.com>
Tested-by: Reza Arbab <arbab@linux.ibm.com>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
2018-09-14 13:47:31 +10:00
Michael Neuling
dd9a8c5a87 powerpc/tm: Fix HFSCR bit for no suspend case
Currently on P9N DD2.1 we end up taking infinite TM facility
unavailable exceptions on the first TM usage by userspace.

In the special case of TM no suspend (P9N DD2.1), Linux is told TM is
off via CPU dt-ftrs but told to (partially) use it via
OPAL_REINIT_CPUS_TM_SUSPEND_DISABLED. So HFSCR[TM] will be off from
dt-ftrs but we need to turn it on for the no suspend case.

This patch fixes this by enabling HFSCR TM in this case.
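
The fix boils down to setting the TM bit in the facility register during CPU
setup, roughly (SPRN_HFSCR and HFSCR_TM are existing definitions; the exact
placement is not shown):

  /* Force TM on in HFSCR for the no-suspend (P9N DD2.1) case, even though
   * the CPU features said TM is off. */
  mtspr(SPRN_HFSCR, mfspr(SPRN_HFSCR) | HFSCR_TM);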

Cc: stable@vger.kernel.org # 4.15+
Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
2018-09-14 13:47:31 +10:00
Linus Torvalds
11da3a7f84 Linux 4.19-rc3 2018-09-09 17:26:43 -07:00
Linus Torvalds
9a5682765a Merge branch 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull x86 fixes from Thomas Gleixner:
 "A set of fixes for x86:

   - Prevent multiplication result truncation on 32bit. Introduced with
     the early timestamp rework.

   - Ensure microcode revision storage to be consistent under all
     circumstances

   - Prevent write tearing of PTEs

   - Prevent confusion of user and kernel registers when dumping fatal
     signals verbosely

   - Make an error return value in a failure path of the vector
     allocation negative. Returning EINVAL might make the caller assume
     success and cause further wreckage.

   - A trivial kernel doc warning fix"

* 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/mm: Use WRITE_ONCE() when setting PTEs
  x86/apic/vector: Make error return value negative
  x86/process: Don't mix user/kernel regs in 64bit __show_regs()
  x86/tsc: Prevent result truncation on 32bit
  x86: Fix kernel-doc atomic.h warnings
  x86/microcode: Update the new microcode revision unconditionally
  x86/microcode: Make sure boot_cpu_data.microcode is up-to-date
2018-09-09 07:05:15 -07:00
Linus Torvalds
3567994a05 Merge branch 'timers-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull timekeeping fixes from Thomas Gleixner:
 "Two fixes for timekeeping:

   - Revert to the previous kthread based update, which is unfortunately
     required due to lock ordering issues. The removal caused boot
     failures on old Core2 machines. Add a proper comment why the thread
     needs to stay to prevent accidental removal in the future.

   - Fix a silly typo in a function declaration"

* 'timers-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  clocksource: Revert "Remove kthread"
  timekeeping: Fix declaration of read_persistent_wall_and_boot_offset()
2018-09-09 06:55:27 -07:00
Linus Torvalds
225ad3cfec Merge branch 'irq-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull irqchip fix from Thomas Gleixner:
 "A single fix to prevent allocating excessive memory in the GIC/ITS
  driver.

  While the subject of the patch might suggest otherwise this is a real
  fix as some SoCs exceed the memory allocation limits and fail to boot"

* 'irq-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  irqchip/gic-v3-its: Cap lpi_id_bits to reduce memory footprint
2018-09-09 06:49:29 -07:00
Linus Torvalds
e0a0d05848 Merge branch 'smp-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull cpu hotplug fixes from Thomas Gleixner:
 "Two fixes for the hotplug state machine code:

   - Move the misplaced smb() in the hotplug thread function to the
     proper place, otherwise a half-updated control struct could be
     observed

   - Prevent state corruption on error rollback, which causes the state
     to advance by one and as a consequence skip it in the bringup
     sequence"

* 'smp-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  cpu/hotplug: Prevent state corruption on error rollback
  cpu/hotplug: Adjust misplaced smb() in cpuhp_thread_fun()
2018-09-09 06:48:06 -07:00
Linus Torvalds
3243a89dcb Fix things so the choice of whether or not to trust RDRAND to
initialize the CRNG is configurable via the boot option
 random.trust_cpu={on,off}
 -----BEGIN PGP SIGNATURE-----
 
 iQEzBAABCAAdFiEEK2m5VNv+CHkogTfJ8vlZVpUNgaMFAluVEQAACgkQ8vlZVpUN
 gaN4vAgAqQQHYBTlHSYTyh9eEyOOo6gSTnu9mgk6iwejUceoPDcwYiFptZvdpQxj
 moNTz31hy2tFHqt8aiNA2CgSMLI6cilLhz9AzeA6UuQe/EGhZeQHtnvKNIct8Zbg
 97+b2WipCgspO0hzm8NLCjcvSgu892fBLc1TVl8Z+GxLhTCTAgkrMqLpo2iSR/Xe
 +wv2NhT5gAnXFUuHzayiG/wCwSpWNt1cc1DJHVLMFv2yznHL/nagUywO4IeYqaJk
 ZeXie9GsMZDsqFMOjCPS98U3/7c6y2FoYtm/O4NRUpQh9T8QP4NPylP3NDlhIxss
 ZTu6x9xXKnLBfhHu5qk6LuYMJNW/lQ==
 =XP8t
 -----END PGP SIGNATURE-----

Merge tag 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/random

Pull random driver fix from Ted Ts'o:
 "Fix things so the choice of whether or not to trust RDRAND to
  initialize the CRNG is configurable via the boot option
  random.trust_cpu={on,off}"

* tag 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/random:
  random: make CPU trust a boot parameter
2018-09-09 05:54:05 -07:00
Linus Torvalds
1d22577703 Kbuild fixes for v4.19
- make setlocalversion more robust about -dirty check
 
  - loosen the pkg-config requirement for Kconfig
 
  - change missing depmod to a warning from an error
 
  - warn modules_install when System.map is missing
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v1
 
 iQIcBAABAgAGBQJblMNtAAoJED2LAQed4NsG618P/jJCMiSSqsuf9lLIGM+M+9kv
 ALiUPgx0pSx60PREu/oxMNLhmdCxGhYmn7PIDupWt7Wj0/Qq7IJsCe/c91UcDG+m
 ZTumArWstGXD+Cwfe7nOVIuV2V8/ntdBUNKky2zH4WstQ+BH/kjl4tV1f0NxR1WZ
 7vVGSpMjoOiVuhjloa02OFmpv/0KdTn+ChGV7R8nc2AqgTUY7s0X3cY3NLScsAxr
 OpI+4zmgi/PWBtfhA2VPWZWshKzmFlK4UZ5ZrRqChUFaYDTGoN7Lncmz4njI7sxm
 N9QrWNdkFhtj7rA+7ZKhYE1AeqbU9+K3XKw538fbG2hha/KfP1xWJ+m0hD4KrW7S
 dqYmTs+ntdF/f7c1A/ZAbQEo574o4TcTKQ2utJ5QfpbNTqVoVywvXuevI6mGLfDS
 DLRLfXBnP9THbEQNHD0HL0f9zLpTK0uVn6yT6gS2LmgEfXl5f3STFIytUQpxRi7A
 ujjaT9wEJIP41yICQa/bs7GS6DfIr0Ax+Pf7vr7mpo2Yv6FwRQ6XYBYZrAmjxSPQ
 Jk9h1nsrqLgUQs4OVikDDRfwy5Lz//+VwuKH54dQqMqd7Z2v6G0nIlJsNZT+azEV
 DZTE74MWhLvyZRGrKqy5fWR/+YVTh6wD4vAPBhtyy6sxlxvmSEwpAmE4Md8WG71R
 Fh6+u2dpY1SnLxHQ4R1J
 =BQVE
 -----END PGP SIGNATURE-----

Merge tag 'kbuild-fixes-v4.19' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild

Pull Kbuild fixes from Masahiro Yamada:

 - make setlocalversion more robust about -dirty check

 - loosen the pkg-config requirement for Kconfig

 - change missing depmod to a warning from an error

 - warn modules_install when System.map is missing

* tag 'kbuild-fixes-v4.19' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild:
  kbuild: modules_install: warn when missing System.map file
  kbuild: make missing $DEPMOD a Warning instead of an Error
  kconfig: do not require pkg-config on make {menu,n}config
  kconfig: remove a spurious self-assignment
  scripts/setlocalversion: git: Make -dirty check more robust
2018-09-09 05:42:11 -07:00
Randy Dunlap
f0b0d88a82 kbuild: modules_install: warn when missing System.map file
If there is no System.map file for "make modules_install",
scripts/depmod.sh will silently exit with success, having done
nothing.  Since this is an unexpected situation, change it to
report a Warning for the missing file.  The behavior is not
changed except for the Warning message.

The (previous) silent success and new Warning can be reproduced
by:
$ make mrproper; make defconfig
$ make modules; make modules_install

and since System.map is produced by "make vmlinux", the steps
above omit producing the System.map file.

Reported-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2018-09-09 09:14:07 +09:00
Linus Torvalds
f8f65382c9 KVM fixes for 4.19-rc3
ARM:
  - Fix a VFP corruption in 32-bit guest
  - Add missing cache invalidation for CoW pages
  - Two small cleanups
 
 s390:
  - Fallout from the hugetlbfs support: pfmf interpretation and locking
  - VSIE: fix keywrapping for nested guests
 
 PPC:
  - Fix a bug where pages might not get marked dirty, causing
    guest memory corruption on migration,
  - Fix a bug causing reads from guest memory to use the wrong guest
    real address for very large HPT guests (>256G of memory), leading to
    failures in instruction emulation.
 
 x86:
  - Fix out of bound access from malicious pv ipi hypercalls (introduced
    in rc1)
  - Fix delivery of pending interrupts when entering a nested guest,
    preventing arbitrarily late injection
  - Sanitize kvm_stat output after destroying a guest
  - Fix infinite loop when emulating a nested guest page fault
    and improve the surrounding emulation code
  - Two minor cleanups
 -----BEGIN PGP SIGNATURE-----
 
 iQEcBAABCAAGBQJbk5gAAAoJEED/6hsPKofoS0UH/1clCzg/8x3jhpDcKKp6tDm7
 9XHOOQ6XmydT0HXYJNqZepGNqU99ip+2u4x8E9LCT5MTvTMZ1BcNM6PmenjJVULY
 GMJtwZhjqoklrOcNkXGqIye4Ec+I0pBuMmt0AN0N85CcHO8VUBpMzsdxgJLuxcRm
 UT6OZnCLyJsock6BqkZmqVsJj/gemFnI9MpudnrU8cCFk60roXmQWJ66fMIFfKjt
 q0R61t8nmbapQKE8pjqBNgbCsuotVOtU1zgMkeM5LkaYEfc65ZPdgt3sdpyG8Guq
 WA7Vt6HEvmNrcQxHFX5P0GxTVM9lOVCUx1bKXE4+57CMZOYl/8hDaTudlcacutg=
 =FyuN
 -----END PGP SIGNATURE-----

Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm

Pull KVM fixes from Radim Krčmář:
 "ARM:
   - Fix a VFP corruption in 32-bit guest
   - Add missing cache invalidation for CoW pages
   - Two small cleanups

  s390:
   - Fallout from the hugetlbfs support: pfmf interpretation and locking
   - VSIE: fix keywrapping for nested guests

  PPC:
   - Fix a bug where pages might not get marked dirty, causing guest
     memory corruption on migration
   - Fix a bug causing reads from guest memory to use the wrong guest
     real address for very large HPT guests (>256G of memory), leading
     to failures in instruction emulation.

  x86:
   - Fix out of bound access from malicious pv ipi hypercalls
     (introduced in rc1)
   - Fix delivery of pending interrupts when entering a nested guest,
     preventing arbitrarily late injection
   - Sanitize kvm_stat output after destroying a guest
   - Fix infinite loop when emulating a nested guest page fault and
     improve the surrounding emulation code
   - Two minor cleanups"

* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (28 commits)
  KVM: LAPIC: Fix pv ipis out-of-bounds access
  KVM: nVMX: Fix loss of pending IRQ/NMI before entering L2
  arm64: KVM: Remove pgd_lock
  KVM: Remove obsolete kvm_unmap_hva notifier backend
  arm64: KVM: Only force FPEXC32_EL2.EN if trapping FPSIMD
  KVM: arm/arm64: Clean dcache to PoC when changing PTE due to CoW
  KVM: s390: Properly lock mm context allow_gmap_hpage_1m setting
  KVM: s390: vsie: copy wrapping keys to right place
  KVM: s390: Fix pfmf and conditional skey emulation
  tools/kvm_stat: re-animate display of dead guests
  tools/kvm_stat: indicate dead guests as such
  tools/kvm_stat: handle guest removals more gracefully
  tools/kvm_stat: don't reset stats when setting PID filter for debugfs
  tools/kvm_stat: fix updates for dead guests
  tools/kvm_stat: fix handling of invalid paths in debugfs provider
  tools/kvm_stat: fix python3 issues
  KVM: x86: Unexport x86_emulate_instruction()
  KVM: x86: Rename emulate_instruction() to kvm_emulate_instruction()
  KVM: x86: Do not re-{try,execute} after failed emulation in L2
  KVM: x86: Default to not allowing emulation retry in kvm_mmu_page_fault
  ...
2018-09-08 15:52:45 -07:00
Linus Torvalds
0f3aa48ad4 ARM: SoC fixes
A few more fixes that have trickled in:
  - MMC bus width fixup for some Allwinner platforms
  - Fix for NULL deref in ti-aemif when no platform data is passed in
  - Fix div by 0 in SCMI code
  - Add a missing module alias in a new RPi driver
 -----BEGIN PGP SIGNATURE-----
 
 iQJDBAABCAAtFiEElf+HevZ4QCAJmMQ+jBrnPN6EHHcFAluUAp0PHG9sb2ZAbGl4
 b20ubmV0AAoJEIwa5zzehBx3+6YP/2T9NuOUTjssbVBho92lF9dV58Y5xOgDv9wX
 mFT7gePXovTPQrgrpDi4RWrv0wAkjMa3grJfL2RGZXSZtsgkyHstb3mXf1O6sbnF
 Ry1yc4ByJ0+JKJRq2tBxhQmLpBVFNXiav4vhIdPNZRdtZid7WzZaqF0JrCj6iyNf
 CDhiGFRAZC9NcaCdOvI0aHFVC47Cp/Uacbh3PzZmdRWJJ2rCGO9X4vwQoMai/1cq
 vVuiOBOs2ArXQQvvDoVixb3sCcdblCsDoS57lArJ5jKrHFm8iu6Z2+6UGhi2QEhc
 9PKp5tySctWVqitOn0Ueixq+nKCXF3/dVAqjMVViSfC7G0Pt2XIAeqZU+2Ou3Zkj
 nFcHqTZAXfSs6I1hnXqJYQ9Me3JzwQ+pRFJY8/+tbq2eGv7eZzUuzUppr13eF62s
 NeBzJiGiI7ab9sGJknhmoXVDyuB7ctuZXA8JgO/kZvL8dfuWcF3GNocs2p9916JD
 uWGwnfXiTLMhbxKkYrjaOClaVyx2bf996M3Z4NqxBQ9XGNXyh+V/6bzUh9DGPSL0
 +9W7YcRFT08v4I1Zh7/P5zXVAOyqj3awWeD6gpg7PAsmKPdN/f17EEqk6KH7rOVZ
 Vvw3/w+Ef9u4onGpbpE/IyCco75vXrv1GtkHMX7VlMjLe0eAv5Cpw7UwLDO2tVnu
 pEJFkk45
 =oZbn
 -----END PGP SIGNATURE-----

Merge tag 'armsoc-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc

Pull ARM SoC fixes from Olof Johansson:
 "A few more fixes who have trickled in:

   - MMC bus width fixup for some Allwinner platforms

   - Fix for NULL deref in ti-aemif when no platform data is passed in

   - Fix div by 0 in SCMI code

   - Add a missing module alias in a new RPi driver"

* tag 'armsoc-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc:
  memory: ti-aemif: fix a potential NULL-pointer dereference
  firmware: arm_scmi: fix divide by zero when sustained_perf_level is zero
  hwmon: rpi: add module alias to raspberrypi-hwmon
  arm64: allwinner: dts: h6: fix Pine H64 MMC bus width
2018-09-08 15:38:57 -07:00
Olof Johansson
a132bb9041 Allwinner fixes for 4.19
Just one fix for H6 mmc on the Pine H64: the mmc bus width was missing
 from the device tree. This was added in 4.19-rc1.
 -----BEGIN PGP SIGNATURE-----
 
 iQJCBAABCgAsFiEE2nN1m/hhnkhOWjtHOJpUIZwPJDAFAluQmz0OHHdlbnNAY3Np
 ZS5vcmcACgkQOJpUIZwPJDD01RAApxHO0v7/7y9w/8pGGpjpTYpliF9lndaQYD3o
 +Xc/Y7bcy2+Iy4Lz0TbkOObjIVUoLsQcpGKvttHa/gIsjbgd9xBpxd5X2PVBRmWx
 /ERA5HdMG4RvznLD3P7X0JOAL/3w1ad/4DarOHOibqYk3KqX+iG6kphIRx326INt
 SSqPZNNub/LXmHSUnyprQ+ccfKs87uiy9dT1LrSTxXGjh9tdXXmkGmDCOSX+oCKm
 EXeFIK1uTmyGyE8OXa2NbCktwNylw6c4XwcaWLIPQeJTEW6oVh95IkewBphi+nFw
 rU82W2aqCGqP2EYHJwzD7zx53V7cGAJVkb/u3ENXSXgE/kyTdmoFukxWRb7upfEb
 9bjgQUMQ+6RG1f5lDYIHSVNXdk81AshMc1Y7qKG5EoCfJUIcG0gyyQYpO+lKji7V
 nvTeiA0882a/PMYYkGU7vWGD7oIuPHEWEmnSZDWUNsqcKXaX5b3km/BsoLfTii9a
 45MDQ9Wo2B26PL6zflN78BrDfuX+UgmX1bbxY0b+rOal4CKuz+VqwEnQIumu1SYE
 9GaMHFKGMh2JCQ/U8o4AGdomEUjX79dgZbwz7W4KBnaS7K4iKrQfxcKLFXcXLtI9
 EaA4nNsHeIe6ByE5z4FNVUPHEcLkfqlpqdFBRdd/xt+MfDYQaorh73NQfGvN4s0x
 3pGu1fI=
 =WbHO
 -----END PGP SIGNATURE-----

Merge tag 'sunxi-fixes-for-4.19' of https://git.kernel.org/pub/scm/linux/kernel/git/sunxi/linux into fixes

Allwinner fixes for 4.19

Just one fix for H6 mmc on the Pine H64: the mmc bus width was missing
from the device tree. This was added in 4.19-rc1.

* tag 'sunxi-fixes-for-4.19' of https://git.kernel.org/pub/scm/linux/kernel/git/sunxi/linux:
  arm64: allwinner: dts: h6: fix Pine H64 MMC bus width

Signed-off-by: Olof Johansson <olof@lixom.net>
2018-09-08 10:04:37 -07:00
Nadav Amit
9bc4f28af7 x86/mm: Use WRITE_ONCE() when setting PTEs
When page-table entries are set, the compiler might optimize their
assignment by using multiple instructions to set the PTE. This might
turn into a security hazard if the user somehow manages to use the
interim PTE. L1TF does not make our lives easier, making even an interim
non-present PTE a security hazard.

Using WRITE_ONCE() to set PTEs and friends should prevent this potential
security hazard.

I skimmed the differences in the binary with and without this patch. The
differences are (obviously) greater when CONFIG_PARAVIRT=n as more
code optimizations are possible. For better and worse, the impact on the
binary with this patch is pretty small. Skimming the code did not cause
anything to jump out as a security hazard, but it seems that at least
move_soft_dirty_pte() caused set_pte_at() to use multiple writes.
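
The idea reduces to this kind of change (sketch; the wrapper name is made up,
but WRITE_ONCE() is the real macro):

  /* Publish the PTE with a single, non-tearable store. */
  static inline void set_pte_sketch(pte_t *ptep, pte_t pte)
  {
          WRITE_ONCE(*ptep, pte);   /* instead of: *ptep = pte; */
  }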

Signed-off-by: Nadav Amit <namit@vmware.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Sean Christopherson <sean.j.christopherson@intel.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/20180902181451.80520-1-namit@vmware.com
2018-09-08 12:30:36 +02:00