Merge tag 'drm-misc-next-2019-05-24' of git://anongit.freedesktop.org/drm/drm-misc into drm-next

drm-misc-next for v5.3, try #2:

UAPI Changes:
- Add HDR source metadata property.
- Make drm.h compile on GNU/kFreeBSD by including stdint.h.
- Clarify how the userspace reviewer has to review new kernel uAPI.
- Clarify that for new uAPI, merging to drm-next or drm-misc-next should be enough.

Cross-subsystem Changes:
- video/hdmi: Add unpack function for DRM infoframes.
- Device tree bindings:
  * Update a property for Mali Midgard GPUs.
  * Update a property for the STM32 DSI panel.
  * Add support for the FriendlyELEC HD702E 800x1280 panel.
  * Add support for the Evervision VGG804821 800x480 5.0" WVGA TFT panel.
  * Add support for the EDT ET035012DM6 3.5" 320x240 QVGA 24-bit RGB TFT.
  * Add support for the Three Five Corp TFC S9700RTWV43TR-01B 800x480 panel
    with resistive touch, found on TI's AM335X-EVM.
  * Add support for the EDT ETM0430G0DH6 480x272 panel.
- Add OSD101T2587-53TS driver with DT bindings.
- Add Samsung S6E63M0 panel driver with DT bindings.
- Add VXT VL050-8048NT-C01 800x480 panel with DT bindings.
- dma-buf:
  * Make the mmap callback actually optional.
  * Documentation updates.
  * Fix a debugfs refcount imbalance.
  * Remove the unused sync_dump function.
- Fix device tree bindings in drm-misc-next after a botched merge.

Core Changes:
- Add support for HDR infoframes and related EDID parsing.
- Remove prime sg_table caching, now done inside dma-buf.
- Add shiny new drm_gem_vram helpers for simple VRAM drivers, with some fixes
  to the new API on top.
- Small fix to job cleanup without a timeout handler.
- Documentation fixes for drm_fourcc.
- Replace lookups of drm_format with struct drm_format_info; remove functions
  made obsolete by this conversion.
- Remove double includes in bridge/panel.c and some drivers.
- Remove the drmP.h include from drm/edid and drm/dp.
- Fix a NULL pointer deref in drm_fb_helper_hotplug_event().
- Remove most members from drm_fb_helper_crtc; only mode_set is kept.
- Remove a race between the fb helpers and userspace; only restore the mode
  when userspace is not master.
- Move legacy setup from drm_file.c to drm_legacy_misc.c.
- Rework scheduler job destruction.
- drm/bus was removed; drop it from the TODO.
- Add __drm_atomic_helper_crtc_reset() for subclassing crtc_state, and convert
  some drivers to use it (the conversion is not complete yet).
- Bump the vblank timeout wait to 100 ms for atomic.
- Docbook fix for drm_hdmi_infoframe_set_hdr_metadata.

Driver Changes:
- sun4i: Use DRM_GEM_CMA_VMAP_DRIVER_OPS instead of defining it manually.
- v3d: Small cleanups, support for compute shaders, reservation/synchronization
  fixes, job management refactoring, and fixes to the MMU and debugfs.
- lima: Fix a NULL pointer in the irq handler on startup; set a default timeout
  for scheduled jobs.
- stm/ltdc: Assorted fixes and FB modifier support.
- amdgpu: Avoid a HW reset if the guilty job was already signaled.
- virtio: Add seqno to fences, add trace events, use correct flags for fence
  allocation.
- Convert AST, bochs, mgag200, vboxvideo and hisilicon to the new drm_gem_vram
  API.
- sun6i_mipi_dsi: Support DSI GENERIC_SHORT_WRITE_2 transfers.
- bochs: Small fix to use PTR_RET_OR_ZERO, and driver unload fixes.
- gma500: Header fixes.
- cirrus: Remove unused files.
- mediatek: Fix a compiler warning after merging the HDR series.
- vc4: Rework binner bo handling.

Signed-off-by: Dave Airlie <airlied@redhat.com>
From: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/052875a5-27ba-3832-60c2-193d950afdff@linux.intel.com
commit 88cd7a2c1b
@@ -6,6 +6,22 @@ Display bindings for EDT Display Technology Corp. Displays which are
compatible with the simple-panel binding, which is specified in
simple-panel.txt

3,5" QVGA TFT Panels
--------------------
+-----------------+---------------------+-------------------------------------+
| Identifier      | compatible          | description                         |
+=================+=====================+=====================================+
| ET035012DM6     | edt,et035012dm6     | 3.5" QVGA TFT LCD panel             |
+-----------------+---------------------+-------------------------------------+

4,3" WVGA TFT Panels
--------------------

+-----------------+---------------------+-------------------------------------+
| Identifier      | compatible          | description                         |
+=================+=====================+=====================================+
| ETM0430G0DH6    | edt,etm0430g0dh6    | 480x272 TFT Display                 |
+-----------------+---------------------+-------------------------------------+

5,7" WVGA TFT Panels
--------------------

@@ -0,0 +1,12 @@
Evervision Electronics Co. Ltd. VGG804821 5.0" WVGA TFT LCD Panel

Required properties:
- compatible: should be "evervision,vgg804821"
- power-supply: See simple-panel.txt

Optional properties:
- backlight: See simple-panel.txt
- enable-gpios: See simple-panel.txt

This binding is compatible with the simple-panel binding, which is specified
in simple-panel.txt in this directory.

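For reference, a minimal devicetree usage sketch for this binding; the node placement, GPIO line and the backlight/regulator phandles are illustrative and not taken from the patch:

    panel {
        compatible = "evervision,vgg804821";
        power-supply = <&panel_vcc>;            /* placeholder regulator */
        backlight = <&backlight>;               /* placeholder backlight node */
        enable-gpios = <&gpio3 4 GPIO_ACTIVE_HIGH>; /* placeholder GPIO */
    };
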
@@ -0,0 +1,32 @@
FriendlyELEC HD702E 800x1280 LCD panel

The HD702E is an eDP LCD panel developed by FriendlyELEC, with a
resolution of 800x1280. It has a built-in Goodix GT9271 capacitive
touchscreen, with backlight adjustable via PWM.

Required properties:
- compatible: should be "friendlyarm,hd702e"
- power-supply: regulator to provide the supply voltage

Optional properties:
- backlight: phandle of the backlight device attached to the panel

Optional nodes:
- Video port for LCD panel input.

This binding is compatible with the simple-panel binding, which is specified
in simple-panel.txt in this directory.

Example:

    panel {
        compatible = "friendlyarm,hd702e", "simple-panel";
        backlight = <&backlight>;
        power-supply = <&vcc3v3_sys>;

        port {
            panel_in_edp: endpoint {
                remote-endpoint = <&edp_out_panel>;
            };
        };
    };

@@ -0,0 +1,11 @@
One Stop Displays OSD101T2045-53TS 10.1" 1920x1200 panel

Required properties:
- compatible: should be "osddisplays,osd101t2045-53ts"
- power-supply: as specified in the base binding

Optional properties:
- backlight: as specified in the base binding

This binding is compatible with the simple-panel binding, which is specified
in simple-panel.txt in this directory.

@@ -0,0 +1,14 @@
One Stop Displays OSD101T2587-53TS 10.1" 1920x1200 panel

The panel is similar to OSD101T2045-53TS, but it needs an additional
MIPI_DSI_TURN_ON_PERIPHERAL message from the host.

Required properties:
- compatible: should be "osddisplays,osd101t2587-53ts"
- power-supply: as specified in the base binding

Optional properties:
- backlight: as specified in the base binding

This binding is compatible with the simple-panel binding, which is specified
in simple-panel.txt in this directory.

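Since the binding notes that the host must send a MIPI_DSI_TURN_ON_PERIPHERAL message, the panel node normally lives under a DSI host; a hedged sketch showing only the properties documented above (the host node and phandles are placeholders, not from the patch):

    panel {
        compatible = "osddisplays,osd101t2587-53ts";
        power-supply = <&vcc_3v3>;      /* placeholder regulator */
        backlight = <&backlight>;       /* placeholder backlight */
    };
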
@@ -0,0 +1,33 @@
Samsung s6e63m0 AMOLED LCD panel

Required properties:
- compatible: "samsung,s6e63m0"
- reset-gpios: GPIO spec for reset pin
- vdd3-supply: VDD regulator
- vci-supply: VCI regulator

The panel must obey the rules for SPI slave devices specified in document [1].

The device node can contain one 'port' child node with one child
'endpoint' node, according to the bindings defined in [2]. This
node should describe the panel's video bus.

[1]: Documentation/devicetree/bindings/spi/spi-bus.txt
[2]: Documentation/devicetree/bindings/media/video-interfaces.txt

Example:

    s6e63m0: display@0 {
        compatible = "samsung,s6e63m0";
        reg = <0>;
        reset-gpios = <&mp05 5 1>;
        vdd3-supply = <&ldo12_reg>;
        vci-supply = <&ldo11_reg>;
        spi-max-frequency = <1200000>;

        port {
            lcd_ep: endpoint {
                remote-endpoint = <&fimd_ep>;
            };
        };
    };

@@ -0,0 +1,15 @@
TFC S9700RTWV43TR-01B 7" Three Five Corp 800x480 LCD panel with
resistive touch

The panel is found on TI AM335x-evm.

Required properties:
- compatible: should be "tfc,s9700rtwv43tr-01b"
- power-supply: See panel-common.txt

Optional properties:
- enable-gpios: GPIO pin to enable or disable the panel, if there is one
- backlight: phandle of the backlight device attached to the panel

This binding is compatible with the simple-panel binding, which is specified
in simple-panel.txt in this directory.

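A minimal usage sketch of this binding; the enable GPIO line and the backlight/regulator phandles are placeholders, not taken from the patch:

    panel {
        compatible = "tfc,s9700rtwv43tr-01b";
        power-supply = <&vdd_panel>;            /* placeholder regulator */
        backlight = <&lcd_bl>;                  /* placeholder backlight */
        enable-gpios = <&gpio0 2 GPIO_ACTIVE_HIGH>; /* placeholder GPIO */
    };
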
@@ -0,0 +1,12 @@
VXT 800x480 color TFT LCD panel

Required properties:
- compatible: should be "vxt,vl050-8048nt-c01"
- power-supply: as specified in the base binding

Optional properties:
- backlight: as specified in the base binding
- enable-gpios: as specified in the base binding

This binding is compatible with the simple-panel binding, which is specified
in simple-panel.txt in this directory.

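And a corresponding sketch for the VXT panel, again with illustrative phandles only:

    panel {
        compatible = "vxt,vl050-8048nt-c01";
        power-supply = <&lcd_3v3>;              /* placeholder regulator */
        backlight = <&backlight>;               /* placeholder backlight */
        enable-gpios = <&gpio1 7 GPIO_ACTIVE_HIGH>; /* placeholder GPIO */
    };
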
@@ -40,6 +40,8 @@ Mandatory nodes specific to STM32 DSI:
- panel or bridge node: A node containing the panel or bridge description as
  documented in [6].
- port: panel or bridge port node, connected to the DSI output port (port@1).
Optional properties:
- phy-dsi-supply: phandle of the regulator that provides the supply voltage.

Note: You can find more documentation in the following references
[1] Documentation/devicetree/bindings/clock/clock-bindings.txt

@@ -101,6 +103,7 @@ Example 2: DSI panel
clock-names = "pclk", "ref";
resets = <&rcc STM32F4_APB2_RESET(DSI)>;
reset-names = "apb";
phy-dsi-supply = <&reg18>;

ports {
#address-cells = <1>;

@@ -15,6 +15,7 @@ Required properties:
+ "arm,mali-t860"
+ "arm,mali-t880"
* which must be preceded by one of the following vendor specifics:
+ "allwinner,sun50i-h6-mali"
+ "amlogic,meson-gxm-mali"
+ "rockchip,rk3288-mali"
+ "rockchip,rk3399-mali"

@@ -31,21 +32,36 @@ Optional properties:

- clocks : Phandle to clock for the Mali Midgard device.

- clock-names : Specify the names of the clocks specified in clocks
  when multiple clocks are present.
  * core: clock driving the GPU itself (When only one clock is present,
    assume it's this clock.)
  * bus: bus clock for the GPU

- mali-supply : Phandle to regulator for the Mali device. Refer to
  Documentation/devicetree/bindings/regulator/regulator.txt for details.

- operating-points-v2 : Refer to Documentation/devicetree/bindings/opp/opp.txt
  for details.

- #cooling-cells: Refer to Documentation/devicetree/bindings/thermal/thermal.txt
  for details.

- resets : Phandle of the GPU reset line.

Vendor-specific bindings
------------------------

The Mali GPU is integrated very differently from one SoC to
another. In order to accomodate those differences, you have the option
another. In order to accommodate those differences, you have the option
to specify one more vendor-specific compatible, among:

- "allwinner,sun50i-h6-mali"
  Required properties:
  - clocks : phandles to core and bus clocks
  - clock-names : must contain "core" and "bus"
  - resets: phandle to GPU reset line

- "amlogic,meson-gxm-mali"
  Required properties:
  - resets : Should contain phandles of :

@@ -65,6 +81,7 @@ gpu@ffa30000 {
mali-supply = <&vdd_gpu>;
operating-points-v2 = <&gpu_opp_table>;
power-domains = <&power RK3288_PD_GPU>;
#cooling-cells = <2>;
};

gpu_opp_table: opp_table0 {

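To illustrate the new "allwinner,sun50i-h6-mali" vendor requirements documented above, a hedged node sketch: the ARM core compatible, unit address, clock and reset specifiers are assumptions for illustration, and base properties such as reg and interrupts are omitted.

    gpu: gpu@1800000 {
        compatible = "allwinner,sun50i-h6-mali", "arm,mali-t720"; /* core compatible assumed */
        /* reg, interrupts and other base Midgard properties omitted */
        clocks = <&ccu CLK_GPU>, <&ccu CLK_BUS_GPU>;   /* placeholder clock specifiers */
        clock-names = "core", "bus";
        resets = <&ccu RST_BUS_GPU>;                   /* placeholder reset specifier */
    };
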
@@ -287,6 +287,8 @@ patternProperties:
    description: Everest Semiconductor Co. Ltd.
  "^everspin,.*":
    description: Everspin Technologies, Inc.
  "^evervision,.*":
    description: Evervision Electronics Co. Ltd.
  "^exar,.*":
    description: Exar Corporation
  "^excito,.*":

@@ -849,6 +851,8 @@ patternProperties:
    description: Shenzhen Techstar Electronics Co., Ltd.
  "^terasic,.*":
    description: Terasic Inc.
  "^tfc,.*":
    description: Three Five Corp
  "^thine,.*":
    description: THine Electronics, Inc.
  "^ti,.*":

@@ -923,6 +927,8 @@ patternProperties:
    description: Voipac Technologies s.r.o.
  "^vot,.*":
    description: Vision Optical Technology Co., Ltd.
  "^vxt,.*":
    description: VXT Ltd
  "^wd,.*":
    description: Western Digital Corp.
  "^wetek,.*":

@@ -79,7 +79,6 @@ count for the TTM, which will call your initialization function.

See the radeon_ttm.c file for an example of usage.


The Graphics Execution Manager (GEM)
====================================

@@ -380,6 +379,39 @@ GEM CMA Helper Functions Reference
.. kernel-doc:: drivers/gpu/drm/drm_gem_cma_helper.c
   :export:

VRAM Helper Function Reference
==============================

.. kernel-doc:: drivers/gpu/drm/drm_vram_helper_common.c
   :doc: overview

.. kernel-doc:: include/drm/drm_gem_vram_helper.h
   :internal:

GEM VRAM Helper Functions Reference
-----------------------------------

.. kernel-doc:: drivers/gpu/drm/drm_gem_vram_helper.c
   :doc: overview

.. kernel-doc:: include/drm/drm_gem_vram_helper.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_gem_vram_helper.c
   :export:

VRAM MM Helper Functions Reference
----------------------------------

.. kernel-doc:: drivers/gpu/drm/drm_vram_mm_helper.c
   :doc: overview

.. kernel-doc:: include/drm/drm_vram_mm_helper.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_vram_mm_helper.c
   :export:

VMA Offset Manager
==================

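The helper entry points referenced above are easiest to see in use. The sketch below mirrors how the converted drivers later in this series (ast, for instance) allocate, pin and map a VRAM-backed GEM object; it only uses calls that appear in this series, error paths are trimmed, and it assumes the driver has already set up dev->vram_mm through the VRAM MM helpers.

#include <drm/drm_gem_vram_helper.h>
#include <drm/drm_vram_mm_helper.h>

/* Sketch: allocate a VRAM-backed GEM object, pin it for scanout and
 * map it for CPU access with the new drm_gem_vram helpers.
 */
static int example_setup_scanout(struct drm_device *dev, size_t size)
{
	struct drm_gem_vram_object *gbo;
	s64 gpu_addr;
	void *vaddr;
	int ret;

	/* pg_align = 0, non-interleaved; matches the ast conversion below */
	gbo = drm_gem_vram_create(dev, &dev->vram_mm->bdev, size, 0, false);
	if (IS_ERR(gbo))
		return PTR_ERR(gbo);

	ret = drm_gem_vram_pin(gbo, DRM_GEM_VRAM_PL_FLAG_VRAM);
	if (ret)
		return ret;

	gpu_addr = drm_gem_vram_offset(gbo);	/* VRAM offset for the CRTC registers */
	if (gpu_addr < 0) {
		drm_gem_vram_unpin(gbo);
		return (int)gpu_addr;
	}

	vaddr = drm_gem_vram_kmap(gbo, true, NULL);	/* CPU mapping, e.g. for fbdev updates */
	if (IS_ERR(vaddr)) {
		drm_gem_vram_unpin(gbo);
		return PTR_ERR(vaddr);
	}

	/* ... program gpu_addr into the scanout engine, write through vaddr ... */

	drm_gem_vram_kunmap(gbo);
	drm_gem_vram_unpin(gbo);
	return 0;
}
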
@@ -85,16 +85,18 @@ leads to a few additional requirements:
- The userspace side must be fully reviewed and tested to the standards of that
  userspace project. For e.g. mesa this means piglit testcases and review on the
  mailing list. This is again to ensure that the new interface actually gets the
  job done.
  job done. The userspace-side reviewer should also provide at least an
  Acked-by on the kernel uAPI patch indicating that they've looked at how the
  kernel side is implementing the new feature being used.

- The userspace patches must be against the canonical upstream, not some vendor
  fork. This is to make sure that no one cheats on the review and testing
  requirements by doing a quick fork.

- The kernel patch can only be merged after all the above requirements are met,
  but it **must** be merged **before** the userspace patches land. uAPI always flows
  from the kernel, doing things the other way round risks divergence of the uAPI
  definitions and header files.
  but it **must** be merged to either drm-next or drm-misc-next **before** the
  userspace patches land. uAPI always flows from the kernel, doing things the
  other way round risks divergence of the uAPI definitions and header files.

These are fairly steep requirements, but have grown out from years of shared
pain and experience with uAPI added hastily, and almost always regretted about

@@ -10,25 +10,6 @@ graphics subsystem useful as newbie projects. Or for slow rainy days.
Subsystem-wide refactorings
===========================

De-midlayer drivers
-------------------

With the recent ``drm_bus`` cleanup patches for 3.17 it is no longer required
to have a ``drm_bus`` structure set up. Drivers can directly set up the
``drm_device`` structure instead of relying on bus methods in ``drm_usb.c``
and ``drm_pci.c``. The goal is to get rid of the driver's ``->load`` /
``->unload`` callbacks and open-code the load/unload sequence properly, using
the new two-stage ``drm_device`` setup/teardown.

Once all existing drivers are converted we can also remove those bus support
files for USB and platform devices.

All you need is a GPU for a non-converted driver (currently almost all of
them, but also all the virtual ones used by KVM, so everyone qualifies).

Contact: Daniel Vetter, Thierry Reding, respective driver maintainers


Remove custom dumb_map_offset implementations
---------------------------------------------

@@ -300,6 +281,14 @@ it to use drm_mode_hsync() instead.

Contact: Sean Paul

drm_fb_helper tasks
-------------------

- drm_fb_helper_restore_fbdev_mode_unlocked() should call restore_fbdev_mode()
  not the _force variant so it can bail out if there is a master. But first
  these igt tests need to be fixed: kms_fbcon_fbt@psr and
  kms_fbcon_fbt@psr-suspend.

Core refactorings
=================

@@ -5413,6 +5413,7 @@ T: git git://anongit.freedesktop.org/drm/drm-misc

DRM PANEL DRIVERS
M: Thierry Reding <thierry.reding@gmail.com>
R: Sam Ravnborg <sam@ravnborg.org>
L: dri-devel@lists.freedesktop.org
T: git git://anongit.freedesktop.org/drm/drm-misc
S: Maintained

@@ -5441,7 +5442,6 @@ F: Documentation/gpu/xen-front.rst
DRM TTM SUBSYSTEM
M: Christian Koenig <christian.koenig@amd.com>
M: Huang Rui <ray.huang@amd.com>
M: Junwei Zhang <Jerry.Zhang@amd.com>
T: git git://people.freedesktop.org/~agd5f/linux
S: Maintained
L: dri-devel@lists.freedesktop.org

@@ -90,6 +90,10 @@ static int dma_buf_mmap_internal(struct file *file, struct vm_area_struct *vma)

dmabuf = file->private_data;

/* check if buffer supports mmap */
if (!dmabuf->ops->mmap)
return -EINVAL;

/* check for overflowing the buffer's size */
if (vma->vm_pgoff + vma_pages(vma) >
dmabuf->size >> PAGE_SHIFT)

@@ -404,8 +408,7 @@ struct dma_buf *dma_buf_export(const struct dma_buf_export_info *exp_info)
|| !exp_info->ops
|| !exp_info->ops->map_dma_buf
|| !exp_info->ops->unmap_dma_buf
|| !exp_info->ops->release
|| !exp_info->ops->mmap)) {
|| !exp_info->ops->release)) {
return ERR_PTR(-EINVAL);
}

@@ -573,6 +576,7 @@ struct dma_buf_attachment *dma_buf_attach(struct dma_buf *dmabuf,
list_add(&attach->node, &dmabuf->attachments);

mutex_unlock(&dmabuf->lock);

return attach;

err_attach:

@@ -595,6 +599,9 @@ void dma_buf_detach(struct dma_buf *dmabuf, struct dma_buf_attachment *attach)
if (WARN_ON(!dmabuf || !attach))
return;

if (attach->sgt)
dmabuf->ops->unmap_dma_buf(attach, attach->sgt, attach->dir);

mutex_lock(&dmabuf->lock);
list_del(&attach->node);
if (dmabuf->ops->detach)

@@ -630,10 +637,27 @@ struct sg_table *dma_buf_map_attachment(struct dma_buf_attachment *attach,
if (WARN_ON(!attach || !attach->dmabuf))
return ERR_PTR(-EINVAL);

if (attach->sgt) {
/*
 * Two mappings with different directions for the same
 * attachment are not allowed.
 */
if (attach->dir != direction &&
attach->dir != DMA_BIDIRECTIONAL)
return ERR_PTR(-EBUSY);

return attach->sgt;
}

sg_table = attach->dmabuf->ops->map_dma_buf(attach, direction);
if (!sg_table)
sg_table = ERR_PTR(-ENOMEM);

if (!IS_ERR(sg_table) && attach->dmabuf->ops->cache_sgt_mapping) {
attach->sgt = sg_table;
attach->dir = direction;
}

return sg_table;
}
EXPORT_SYMBOL_GPL(dma_buf_map_attachment);

@@ -657,8 +681,10 @@ void dma_buf_unmap_attachment(struct dma_buf_attachment *attach,
if (WARN_ON(!attach || !attach->dmabuf || !sg_table))
return;

attach->dmabuf->ops->unmap_dma_buf(attach, sg_table,
direction);
if (attach->sgt == sg_table)
return;

attach->dmabuf->ops->unmap_dma_buf(attach, sg_table, direction);
}
EXPORT_SYMBOL_GPL(dma_buf_unmap_attachment);

@@ -906,6 +932,10 @@ int dma_buf_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma,
if (WARN_ON(!dmabuf || !vma))
return -EINVAL;

/* check if buffer supports mmap */
if (!dmabuf->ops->mmap)
return -EINVAL;

/* check for offset overflow */
if (pgoff + vma_pages(vma) < pgoff)
return -EOVERFLOW;

@@ -1068,6 +1098,7 @@ static int dma_buf_debug_show(struct seq_file *s, void *unused)
fence->ops->get_driver_name(fence),
fence->ops->get_timeline_name(fence),
dma_fence_is_signaled(fence) ? "" : "un");
dma_fence_put(fence);
}
rcu_read_unlock();

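Taken together, the export-time check above now only insists on map_dma_buf, unmap_dma_buf and release (mmap has become optional), and dma_buf_map_attachment() reuses a cached sg_table when the exporter sets cache_sgt_mapping. A hedged exporter-side sketch; the callback bodies are stubs and the names are illustrative:

#include <linux/dma-buf.h>

/* Illustrative exporter ops: .mmap is left out (now optional), and
 * .cache_sgt_mapping asks dma-buf to keep and reuse the sg_table
 * across repeated map calls on the same attachment.
 */
static struct sg_table *example_map(struct dma_buf_attachment *attach,
				    enum dma_data_direction dir)
{
	/* ... build and DMA-map an sg_table for the backing storage ... */
	return NULL;	/* stub; a NULL return is turned into -ENOMEM */
}

static void example_unmap(struct dma_buf_attachment *attach,
			  struct sg_table *sgt, enum dma_data_direction dir)
{
	/* ... DMA-unmap and free sgt ... */
}

static void example_release(struct dma_buf *dmabuf)
{
	/* ... drop the exporter's reference on the backing storage ... */
}

static const struct dma_buf_ops example_dmabuf_ops = {
	.cache_sgt_mapping	= true,		/* opt in to attachment sgt caching */
	.map_dma_buf		= example_map,
	.unmap_dma_buf		= example_unmap,
	.release		= example_release,
	/* .mmap intentionally omitted; it is optional after this series */
};
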
|
@ -197,29 +197,3 @@ static __init int sync_debugfs_init(void)
|
|||
return 0;
|
||||
}
|
||||
late_initcall(sync_debugfs_init);
|
||||
|
||||
#define DUMP_CHUNK 256
|
||||
static char sync_dump_buf[64 * 1024];
|
||||
void sync_dump(void)
|
||||
{
|
||||
struct seq_file s = {
|
||||
.buf = sync_dump_buf,
|
||||
.size = sizeof(sync_dump_buf) - 1,
|
||||
};
|
||||
int i;
|
||||
|
||||
sync_info_debugfs_show(&s, NULL);
|
||||
|
||||
for (i = 0; i < s.count; i += DUMP_CHUNK) {
|
||||
if ((s.count - i) > DUMP_CHUNK) {
|
||||
char c = s.buf[i + DUMP_CHUNK];
|
||||
|
||||
s.buf[i + DUMP_CHUNK] = 0;
|
||||
pr_cont("%s", s.buf + i);
|
||||
s.buf[i + DUMP_CHUNK] = c;
|
||||
} else {
|
||||
s.buf[s.count] = 0;
|
||||
pr_cont("%s", s.buf + i);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
|
|
@@ -68,6 +68,5 @@ void sync_timeline_debug_add(struct sync_timeline *obj);
void sync_timeline_debug_remove(struct sync_timeline *obj);
void sync_file_debug_add(struct sync_file *fence);
void sync_file_debug_remove(struct sync_file *fence);
void sync_dump(void);

#endif /* _LINUX_SYNC_H */

@@ -161,6 +161,13 @@ config DRM_TTM
	  GPU memory types. Will be enabled automatically if a device driver
	  uses it.

config DRM_VRAM_HELPER
	tristate
	depends on DRM
	select DRM_TTM
	help
	  Helpers for VRAM memory management

config DRM_GEM_CMA_HELPER
	bool
	depends on DRM

@@ -32,6 +32,11 @@ drm-$(CONFIG_AGP) += drm_agpsupport.o
drm-$(CONFIG_DEBUG_FS) += drm_debugfs.o drm_debugfs_crc.o
drm-$(CONFIG_DRM_LOAD_EDID_FIRMWARE) += drm_edid_load.o

drm_vram_helper-y := drm_gem_vram_helper.o \
		     drm_vram_helper_common.o \
		     drm_vram_mm_helper.o
obj-$(CONFIG_DRM_VRAM_HELPER) += drm_vram_helper.o

drm_kms_helper-y := drm_crtc_helper.o drm_dp_helper.o drm_dsc.o drm_probe_helper.o \
		drm_plane_helper.o drm_dp_mst_topology.o drm_atomic_helper.o \
		drm_kms_helper_common.o drm_dp_dual_mode_helper.o \

@@ -3341,8 +3341,6 @@ static int amdgpu_device_pre_asic_reset(struct amdgpu_device *adev,
if (!ring || !ring->sched.thread)
continue;

drm_sched_stop(&ring->sched);

/* after all hw jobs are reset, hw fence is meaningless, so force_completion */
amdgpu_fence_driver_force_completion(ring);
}

@@ -3350,8 +3348,7 @@ static int amdgpu_device_pre_asic_reset(struct amdgpu_device *adev,
if(job)
drm_sched_increase_karma(&job->base);


/* Don't suspend on bare metal if we are not going to HW reset the ASIC */
if (!amdgpu_sriov_vf(adev)) {

if (!need_full_reset)

@@ -3489,38 +3486,21 @@ static int amdgpu_do_asic_reset(struct amdgpu_hive_info *hive,
return r;
}

static void amdgpu_device_post_asic_reset(struct amdgpu_device *adev,
struct amdgpu_job *job)
static bool amdgpu_device_lock_adev(struct amdgpu_device *adev, bool trylock)
{
int i;
if (trylock) {
if (!mutex_trylock(&adev->lock_reset))
return false;
} else
mutex_lock(&adev->lock_reset);

for (i = 0; i < AMDGPU_MAX_RINGS; ++i) {
struct amdgpu_ring *ring = adev->rings[i];

if (!ring || !ring->sched.thread)
continue;

if (!adev->asic_reset_res)
drm_sched_resubmit_jobs(&ring->sched);

drm_sched_start(&ring->sched, !adev->asic_reset_res);
}

if (!amdgpu_device_has_dc_support(adev)) {
drm_helper_resume_force_mode(adev->ddev);
}

adev->asic_reset_res = 0;
}

static void amdgpu_device_lock_adev(struct amdgpu_device *adev)
{
mutex_lock(&adev->lock_reset);
atomic_inc(&adev->gpu_reset_counter);
adev->in_gpu_reset = 1;
/* Block kfd: SRIOV would do it separately */
if (!amdgpu_sriov_vf(adev))
amdgpu_amdkfd_pre_reset(adev);

return true;
}

static void amdgpu_device_unlock_adev(struct amdgpu_device *adev)

@@ -3548,40 +3528,42 @@ static void amdgpu_device_unlock_adev(struct amdgpu_device *adev)
int amdgpu_device_gpu_recover(struct amdgpu_device *adev,
struct amdgpu_job *job)
{
int r;
struct amdgpu_hive_info *hive = NULL;
bool need_full_reset = false;
struct amdgpu_device *tmp_adev = NULL;
struct list_head device_list, *device_list_handle = NULL;
bool need_full_reset, job_signaled;
struct amdgpu_hive_info *hive = NULL;
struct amdgpu_device *tmp_adev = NULL;
int i, r = 0;

need_full_reset = job_signaled = false;
INIT_LIST_HEAD(&device_list);

dev_info(adev->dev, "GPU reset begin!\n");

hive = amdgpu_get_xgmi_hive(adev, false);

/*
 * In case of XGMI hive disallow concurrent resets to be triggered
 * by different nodes. No point also since the one node already executing
 * reset will also reset all the other nodes in the hive.
 * Here we trylock to avoid chain of resets executing from
 * either trigger by jobs on different adevs in XGMI hive or jobs on
 * different schedulers for same device while this TO handler is running.
 * We always reset all schedulers for device and all devices for XGMI
 * hive so that should take care of them too.
 */
hive = amdgpu_get_xgmi_hive(adev, 0);
if (hive && adev->gmc.xgmi.num_physical_nodes > 1 &&
!mutex_trylock(&hive->reset_lock))

if (hive && !mutex_trylock(&hive->reset_lock)) {
DRM_INFO("Bailing on TDR for s_job:%llx, hive: %llx as another already in progress",
job->base.id, hive->hive_id);
return 0;
}

/* Start with adev pre asic reset first for soft reset check.*/
amdgpu_device_lock_adev(adev);
r = amdgpu_device_pre_asic_reset(adev,
job,
&need_full_reset);
if (r) {
/*TODO Should we stop ?*/
DRM_ERROR("GPU pre asic reset failed with err, %d for drm dev, %s ",
r, adev->ddev->unique);
adev->asic_reset_res = r;
if (!amdgpu_device_lock_adev(adev, !hive)) {
DRM_INFO("Bailing on TDR for s_job:%llx, as another already in progress",
job->base.id);
return 0;
}

/* Build list of devices to reset */
if (need_full_reset && adev->gmc.xgmi.num_physical_nodes > 1) {
if (adev->gmc.xgmi.num_physical_nodes > 1) {
if (!hive) {
amdgpu_device_unlock_adev(adev);
return -ENODEV;

@@ -3598,13 +3580,56 @@ int amdgpu_device_gpu_recover(struct amdgpu_device *adev,
device_list_handle = &device_list;
}

/* block all schedulers and reset given job's ring */
list_for_each_entry(tmp_adev, device_list_handle, gmc.xgmi.head) {
for (i = 0; i < AMDGPU_MAX_RINGS; ++i) {
struct amdgpu_ring *ring = tmp_adev->rings[i];

if (!ring || !ring->sched.thread)
continue;

drm_sched_stop(&ring->sched, &job->base);
}
}


/*
 * Must check guilty signal here since after this point all old
 * HW fences are force signaled.
 *
 * job->base holds a reference to parent fence
 */
if (job && job->base.s_fence->parent &&
dma_fence_is_signaled(job->base.s_fence->parent))
job_signaled = true;

if (!amdgpu_device_ip_need_full_reset(adev))
device_list_handle = &device_list;

if (job_signaled) {
dev_info(adev->dev, "Guilty job already signaled, skipping HW reset");
goto skip_hw_reset;
}


/* Guilty job will be freed after this*/
r = amdgpu_device_pre_asic_reset(adev,
job,
&need_full_reset);
if (r) {
/*TODO Should we stop ?*/
DRM_ERROR("GPU pre asic reset failed with err, %d for drm dev, %s ",
r, adev->ddev->unique);
adev->asic_reset_res = r;
}

retry:	/* Rest of adevs pre asic reset from XGMI hive. */
list_for_each_entry(tmp_adev, device_list_handle, gmc.xgmi.head) {

if (tmp_adev == adev)
continue;

amdgpu_device_lock_adev(tmp_adev);
amdgpu_device_lock_adev(tmp_adev, false);
r = amdgpu_device_pre_asic_reset(tmp_adev,
NULL,
&need_full_reset);

@@ -3628,9 +3653,28 @@ int amdgpu_device_gpu_recover(struct amdgpu_device *adev,
goto retry;
}

skip_hw_reset:

/* Post ASIC reset for all devs .*/
list_for_each_entry(tmp_adev, device_list_handle, gmc.xgmi.head) {
amdgpu_device_post_asic_reset(tmp_adev, tmp_adev == adev ? job : NULL);
for (i = 0; i < AMDGPU_MAX_RINGS; ++i) {
struct amdgpu_ring *ring = tmp_adev->rings[i];

if (!ring || !ring->sched.thread)
continue;

/* No point to resubmit jobs if we didn't HW reset*/
if (!tmp_adev->asic_reset_res && !job_signaled)
drm_sched_resubmit_jobs(&ring->sched);

drm_sched_start(&ring->sched, !tmp_adev->asic_reset_res);
}

if (!amdgpu_device_has_dc_support(tmp_adev) && !job_signaled) {
drm_helper_resume_force_mode(tmp_adev->ddev);
}

tmp_adev->asic_reset_res = 0;

if (r) {
/* bad news, how to tell it to userspace ? */

@@ -3643,7 +3687,7 @@ int amdgpu_device_gpu_recover(struct amdgpu_device *adev,
amdgpu_device_unlock_adev(tmp_adev);
}

if (hive && adev->gmc.xgmi.num_physical_nodes > 1)
if (hive)
mutex_unlock(&hive->reset_lock);

if (r)

@@ -121,6 +121,7 @@ static int amdgpufb_create_pinned_object(struct amdgpu_fbdev *rfbdev,
struct drm_mode_fb_cmd2 *mode_cmd,
struct drm_gem_object **gobj_p)
{
const struct drm_format_info *info;
struct amdgpu_device *adev = rfbdev->adev;
struct drm_gem_object *gobj = NULL;
struct amdgpu_bo *abo = NULL;

@@ -131,7 +132,8 @@ static int amdgpufb_create_pinned_object(struct amdgpu_fbdev *rfbdev,
int height = mode_cmd->height;
u32 cpp;

cpp = drm_format_plane_cpp(mode_cmd->pixel_format, 0);
info = drm_get_format_info(adev->ddev, mode_cmd);
cpp = info->cpp[0];

/* need to align pitch with crtc limits */
mode_cmd->pitches[0] = amdgpu_align_pitch(adev, mode_cmd->width, cpp,

@@ -463,23 +463,6 @@ static struct drm_crtc_state *malidp_crtc_duplicate_state(struct drm_crtc *crtc)
return &state->base;
}

static void malidp_crtc_reset(struct drm_crtc *crtc)
{
struct malidp_crtc_state *state = NULL;

if (crtc->state) {
state = to_malidp_crtc_state(crtc->state);
__drm_atomic_helper_crtc_destroy_state(crtc->state);
}

kfree(state);
state = kzalloc(sizeof(*state), GFP_KERNEL);
if (state) {
crtc->state = &state->base;
crtc->state->crtc = crtc;
}
}

static void malidp_crtc_destroy_state(struct drm_crtc *crtc,
struct drm_crtc_state *state)
{

@@ -493,6 +476,17 @@ static void malidp_crtc_destroy_state(struct drm_crtc *crtc,
kfree(mali_state);
}

static void malidp_crtc_reset(struct drm_crtc *crtc)
{
struct malidp_crtc_state *state =
kzalloc(sizeof(*state), GFP_KERNEL);

if (crtc->state)
malidp_crtc_destroy_state(crtc, crtc->state);

__drm_atomic_helper_crtc_reset(crtc, &state->base);
}

static int malidp_crtc_enable_vblank(struct drm_crtc *crtc)
{
struct malidp_drm *malidp = crtc_to_malidp_device(crtc);

@@ -382,7 +382,8 @@ static void malidp500_modeset(struct malidp_hw_device *hwdev, struct videomode *

int malidp_format_get_bpp(u32 fmt)
{
int bpp = drm_format_plane_cpp(fmt, 0) * 8;
const struct drm_format_info *info = drm_format_info(fmt);
int bpp = info->cpp[0] * 8;

if (bpp == 0) {
switch (fmt) {

@@ -158,7 +158,7 @@ malidp_mw_encoder_atomic_check(struct drm_encoder *encoder,
return -EINVAL;
}

n_planes = drm_format_num_planes(fb->format->format);
n_planes = fb->format->num_planes;
for (i = 0; i < n_planes; i++) {
struct drm_gem_cma_object *obj = drm_fb_cma_get_gem_obj(fb, i);
/* memory write buffers are never rotated */

@@ -227,14 +227,13 @@ bool malidp_format_mod_supported(struct drm_device *drm,

if (modifier & AFBC_SPLIT) {
if (!info->is_yuv) {
if (drm_format_plane_cpp(format, 0) <= 2) {
if (info->cpp[0] <= 2) {
DRM_DEBUG_KMS("RGB formats <= 16bpp are not supported with SPLIT\n");
return false;
}
}

if ((drm_format_horz_chroma_subsampling(format) != 1) ||
(drm_format_vert_chroma_subsampling(format) != 1)) {
if ((info->hsub != 1) || (info->vsub != 1)) {
if (!(format == DRM_FORMAT_YUV420_10BIT &&
(map->features & MALIDP_DEVICE_AFBC_YUV_420_10_SUPPORT_SPLIT))) {
DRM_DEBUG_KMS("Formats which are sub-sampled should never be split\n");

@@ -244,8 +243,7 @@ bool malidp_format_mod_supported(struct drm_device *drm,
}

if (modifier & AFBC_CBR) {
if ((drm_format_horz_chroma_subsampling(format) == 1) ||
(drm_format_vert_chroma_subsampling(format) == 1)) {
if ((info->hsub == 1) || (info->vsub == 1)) {
DRM_DEBUG_KMS("Formats which are not sub-sampled should not have CBR set\n");
return false;
}

@@ -87,6 +87,7 @@ struct armada_framebuffer *armada_framebuffer_create(struct drm_device *dev,
struct drm_framebuffer *armada_fb_create(struct drm_device *dev,
struct drm_file *dfile, const struct drm_mode_fb_cmd2 *mode)
{
const struct drm_format_info *info = drm_get_format_info(dev, mode);
struct armada_gem_object *obj;
struct armada_framebuffer *dfb;
int ret;

@@ -97,7 +98,7 @@ struct drm_framebuffer *armada_fb_create(struct drm_device *dev,
mode->pitches[2]);

/* We can only handle a single plane at the moment */
if (drm_format_num_planes(mode->pixel_format) > 1 &&
if (info->num_planes > 1 &&
(mode->handles[0] != mode->handles[1] ||
mode->handles[0] != mode->handles[2])) {
ret = -EINVAL;

@@ -2,9 +2,8 @@
config DRM_AST
	tristate "AST server chips"
	depends on DRM && PCI && MMU
	select DRM_TTM
	select DRM_KMS_HELPER
	select DRM_TTM
	select DRM_VRAM_HELPER
	help
	 Say yes for experimental AST GPU driver. Do not enable
	 this driver without having a working -modesetting,

@@ -205,13 +205,7 @@ static struct pci_driver ast_pci_driver = {

static const struct file_operations ast_fops = {
.owner = THIS_MODULE,
.open = drm_open,
.release = drm_release,
.unlocked_ioctl = drm_ioctl,
.mmap = ast_mmap,
.poll = drm_poll,
.compat_ioctl = drm_compat_ioctl,
.read = drm_read,
DRM_VRAM_MM_FILE_OPERATIONS
};

static struct drm_driver driver = {

@@ -228,10 +222,7 @@ static struct drm_driver driver = {
.minor = DRIVER_MINOR,
.patchlevel = DRIVER_PATCHLEVEL,

.gem_free_object_unlocked = ast_gem_free_object,
.dumb_create = ast_dumb_create,
.dumb_map_offset = ast_dumb_mmap_offset,

DRM_GEM_VRAM_DRIVER
};

static int __init ast_init(void)

@@ -31,13 +31,10 @@
#include <drm/drm_encoder.h>
#include <drm/drm_fb_helper.h>

#include <drm/ttm/ttm_bo_api.h>
#include <drm/ttm/ttm_bo_driver.h>
#include <drm/ttm/ttm_placement.h>
#include <drm/ttm/ttm_memory.h>
#include <drm/ttm/ttm_module.h>

#include <drm/drm_gem.h>
#include <drm/drm_gem_vram_helper.h>

#include <drm/drm_vram_mm_helper.h>

#include <linux/i2c.h>
#include <linux/i2c-algo-bit.h>

@@ -103,10 +100,6 @@ struct ast_private {

int fb_mtrr;

struct {
struct ttm_bo_device bdev;
} ttm;

struct drm_gem_object *cursor_cache;
uint64_t cursor_cache_gpu_addr;
/* Acces to this cache is protected by the crtc->mutex of the only crtc

@@ -263,7 +256,6 @@ struct ast_fbdev {
struct ast_framebuffer afb;
void *sysram;
int size;
struct ttm_bo_kmap_obj mapping;
int x1, y1, x2, y2; /* dirty rect */
spinlock_t dirty_lock;
};

@@ -321,73 +313,16 @@ void ast_fbdev_fini(struct drm_device *dev);
void ast_fbdev_set_suspend(struct drm_device *dev, int state);
void ast_fbdev_set_base(struct ast_private *ast, unsigned long gpu_addr);

struct ast_bo {
struct ttm_buffer_object bo;
struct ttm_placement placement;
struct ttm_bo_kmap_obj kmap;
struct drm_gem_object gem;
struct ttm_place placements[3];
int pin_count;
};
#define gem_to_ast_bo(gobj) container_of((gobj), struct ast_bo, gem)

static inline struct ast_bo *
ast_bo(struct ttm_buffer_object *bo)
{
return container_of(bo, struct ast_bo, bo);
}


#define to_ast_obj(x) container_of(x, struct ast_gem_object, base)

#define AST_MM_ALIGN_SHIFT 4
#define AST_MM_ALIGN_MASK ((1 << AST_MM_ALIGN_SHIFT) - 1)

extern int ast_dumb_create(struct drm_file *file,
struct drm_device *dev,
struct drm_mode_create_dumb *args);

extern void ast_gem_free_object(struct drm_gem_object *obj);
extern int ast_dumb_mmap_offset(struct drm_file *file,
struct drm_device *dev,
uint32_t handle,
uint64_t *offset);

int ast_mm_init(struct ast_private *ast);
void ast_mm_fini(struct ast_private *ast);

int ast_bo_create(struct drm_device *dev, int size, int align,
uint32_t flags, struct ast_bo **pastbo);

int ast_gem_create(struct drm_device *dev,
u32 size, bool iskernel,
struct drm_gem_object **obj);

int ast_bo_pin(struct ast_bo *bo, u32 pl_flag, u64 *gpu_addr);
int ast_bo_unpin(struct ast_bo *bo);

static inline int ast_bo_reserve(struct ast_bo *bo, bool no_wait)
{
int ret;

ret = ttm_bo_reserve(&bo->bo, true, no_wait, NULL);
if (ret) {
if (ret != -ERESTARTSYS && ret != -EBUSY)
DRM_ERROR("reserve failed %p\n", bo);
return ret;
}
return 0;
}

static inline void ast_bo_unreserve(struct ast_bo *bo)
{
ttm_bo_unreserve(&bo->bo);
}

void ast_ttm_placement(struct ast_bo *bo, int domain);
int ast_bo_push_sysram(struct ast_bo *bo);
int ast_mmap(struct file *filp, struct vm_area_struct *vma);

/* ast post */
void ast_enable_vga(struct drm_device *dev);
void ast_enable_mmio(struct drm_device *dev);

@@ -49,25 +49,25 @@ static void ast_dirty_update(struct ast_fbdev *afbdev,
{
int i;
struct drm_gem_object *obj;
struct ast_bo *bo;
struct drm_gem_vram_object *gbo;
int src_offset, dst_offset;
int bpp = afbdev->afb.base.format->cpp[0];
int ret = -EBUSY;
u8 *dst;
bool unmap = false;
bool store_for_later = false;
int x2, y2;
unsigned long flags;

obj = afbdev->afb.obj;
bo = gem_to_ast_bo(obj);
gbo = drm_gem_vram_of_gem(obj);

/*
 * try and reserve the BO, if we fail with busy
 * then the BO is being moved and we should
 * store up the damage until later.
/* Try to lock the BO. If we fail with -EBUSY then
 * the BO is being moved and we should store up the
 * damage until later.
 */
if (drm_can_sleep())
ret = ast_bo_reserve(bo, true);
ret = drm_gem_vram_lock(gbo, true);
if (ret) {
if (ret != -EBUSY)
return;

@@ -101,25 +101,32 @@ static void ast_dirty_update(struct ast_fbdev *afbdev,
afbdev->x2 = afbdev->y2 = 0;
spin_unlock_irqrestore(&afbdev->dirty_lock, flags);

if (!bo->kmap.virtual) {
ret = ttm_bo_kmap(&bo->bo, 0, bo->bo.num_pages, &bo->kmap);
if (ret) {
dst = drm_gem_vram_kmap(gbo, false, NULL);
if (IS_ERR(dst)) {
DRM_ERROR("failed to kmap fb updates\n");
goto out;
} else if (!dst) {
dst = drm_gem_vram_kmap(gbo, true, NULL);
if (IS_ERR(dst)) {
DRM_ERROR("failed to kmap fb updates\n");
ast_bo_unreserve(bo);
return;
goto out;
}
unmap = true;
}

for (i = y; i <= y2; i++) {
/* assume equal stride for now */
src_offset = dst_offset = i * afbdev->afb.base.pitches[0] + (x * bpp);
memcpy_toio(bo->kmap.virtual + src_offset, afbdev->sysram + src_offset, (x2 - x + 1) * bpp);

src_offset = dst_offset =
i * afbdev->afb.base.pitches[0] + (x * bpp);
memcpy_toio(dst + dst_offset, afbdev->sysram + src_offset,
(x2 - x + 1) * bpp);
}
if (unmap)
ttm_bo_kunmap(&bo->kmap);

ast_bo_unreserve(bo);
if (unmap)
drm_gem_vram_kunmap(gbo);

out:
drm_gem_vram_unlock(gbo);
}

static void ast_fillrect(struct fb_info *info,

@@ -593,7 +593,7 @@ int ast_gem_create(struct drm_device *dev,
u32 size, bool iskernel,
struct drm_gem_object **obj)
{
struct ast_bo *astbo;
struct drm_gem_vram_object *gbo;
int ret;

*obj = NULL;

@@ -602,80 +602,13 @@ int ast_gem_create(struct drm_device *dev,
if (size == 0)
return -EINVAL;

ret = ast_bo_create(dev, size, 0, 0, &astbo);
if (ret) {
gbo = drm_gem_vram_create(dev, &dev->vram_mm->bdev, size, 0, false);
if (IS_ERR(gbo)) {
ret = PTR_ERR(gbo);
if (ret != -ERESTARTSYS)
DRM_ERROR("failed to allocate GEM object\n");
return ret;
}
*obj = &astbo->gem;
*obj = &gbo->gem;
return 0;
}

int ast_dumb_create(struct drm_file *file,
struct drm_device *dev,
struct drm_mode_create_dumb *args)
{
int ret;
struct drm_gem_object *gobj;
u32 handle;

args->pitch = args->width * ((args->bpp + 7) / 8);
args->size = args->pitch * args->height;

ret = ast_gem_create(dev, args->size, false,
&gobj);
if (ret)
return ret;

ret = drm_gem_handle_create(file, gobj, &handle);
drm_gem_object_put_unlocked(gobj);
if (ret)
return ret;

args->handle = handle;
return 0;
}

static void ast_bo_unref(struct ast_bo **bo)
{
if ((*bo) == NULL)
return;
ttm_bo_put(&((*bo)->bo));
*bo = NULL;
}

void ast_gem_free_object(struct drm_gem_object *obj)
{
struct ast_bo *ast_bo = gem_to_ast_bo(obj);

ast_bo_unref(&ast_bo);
}


static inline u64 ast_bo_mmap_offset(struct ast_bo *bo)
{
return drm_vma_node_offset_addr(&bo->bo.vma_node);
}
int
ast_dumb_mmap_offset(struct drm_file *file,
struct drm_device *dev,
uint32_t handle,
uint64_t *offset)
{
struct drm_gem_object *obj;
struct ast_bo *bo;

obj = drm_gem_object_lookup(file, handle);
if (obj == NULL)
return -ENOENT;

bo = gem_to_ast_bo(obj);
*offset = ast_bo_mmap_offset(bo);

drm_gem_object_put_unlocked(obj);

return 0;

}

@@ -521,7 +521,6 @@ static void ast_crtc_dpms(struct drm_crtc *crtc, int mode)
}
}

/* ast is different - we will force move buffers out of VRAM */
static int ast_crtc_do_set_base(struct drm_crtc *crtc,
struct drm_framebuffer *fb,
int x, int y, int atomic)

@@ -529,50 +528,54 @@ static int ast_crtc_do_set_base(struct drm_crtc *crtc,
struct ast_private *ast = crtc->dev->dev_private;
struct drm_gem_object *obj;
struct ast_framebuffer *ast_fb;
struct ast_bo *bo;
struct drm_gem_vram_object *gbo;
int ret;
u64 gpu_addr;
s64 gpu_addr;
void *base;

/* push the previous fb to system ram */
if (!atomic && fb) {
ast_fb = to_ast_framebuffer(fb);
obj = ast_fb->obj;
bo = gem_to_ast_bo(obj);
ret = ast_bo_reserve(bo, false);
if (ret)
return ret;
ast_bo_push_sysram(bo);
ast_bo_unreserve(bo);
gbo = drm_gem_vram_of_gem(obj);

/* unmap if console */
if (&ast->fbdev->afb == ast_fb)
drm_gem_vram_kunmap(gbo);
drm_gem_vram_unpin(gbo);
}

ast_fb = to_ast_framebuffer(crtc->primary->fb);
obj = ast_fb->obj;
bo = gem_to_ast_bo(obj);
gbo = drm_gem_vram_of_gem(obj);

ret = ast_bo_reserve(bo, false);
ret = drm_gem_vram_pin(gbo, DRM_GEM_VRAM_PL_FLAG_VRAM);
if (ret)
return ret;

ret = ast_bo_pin(bo, TTM_PL_FLAG_VRAM, &gpu_addr);
if (ret) {
ast_bo_unreserve(bo);
return ret;
gpu_addr = drm_gem_vram_offset(gbo);
if (gpu_addr < 0) {
ret = (int)gpu_addr;
goto err_drm_gem_vram_unpin;
}

if (&ast->fbdev->afb == ast_fb) {
/* if pushing console in kmap it */
ret = ttm_bo_kmap(&bo->bo, 0, bo->bo.num_pages, &bo->kmap);
if (ret)
base = drm_gem_vram_kmap(gbo, true, NULL);
if (IS_ERR(base)) {
ret = PTR_ERR(base);
DRM_ERROR("failed to kmap fbcon\n");
else
} else {
ast_fbdev_set_base(ast, gpu_addr);
}
}
ast_bo_unreserve(bo);

ast_set_offset_reg(crtc);
ast_set_start_address_crt1(crtc, (u32)gpu_addr);

return 0;

err_drm_gem_vram_unpin:
drm_gem_vram_unpin(gbo);
return ret;
}

static int ast_crtc_mode_set_base(struct drm_crtc *crtc, int x, int y,

@@ -618,21 +621,18 @@ static int ast_crtc_mode_set(struct drm_crtc *crtc,

static void ast_crtc_disable(struct drm_crtc *crtc)
{
int ret;

DRM_DEBUG_KMS("\n");
ast_crtc_dpms(crtc, DRM_MODE_DPMS_OFF);
if (crtc->primary->fb) {
struct ast_private *ast = crtc->dev->dev_private;
struct ast_framebuffer *ast_fb = to_ast_framebuffer(crtc->primary->fb);
struct drm_gem_object *obj = ast_fb->obj;
struct ast_bo *bo = gem_to_ast_bo(obj);
struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(obj);

ret = ast_bo_reserve(bo, false);
if (ret)
return;

ast_bo_push_sysram(bo);
ast_bo_unreserve(bo);
/* unmap if console */
if (&ast->fbdev->afb == ast_fb)
drm_gem_vram_kunmap(gbo);
drm_gem_vram_unpin(gbo);
}
crtc->primary->fb = NULL;
}

@@ -918,28 +918,32 @@ static int ast_cursor_init(struct drm_device *dev)
int size;
int ret;
struct drm_gem_object *obj;
struct ast_bo *bo;
uint64_t gpu_addr;
struct drm_gem_vram_object *gbo;
s64 gpu_addr;
void *base;

size = (AST_HWC_SIZE + AST_HWC_SIGNATURE_SIZE) * AST_DEFAULT_HWC_NUM;

ret = ast_gem_create(dev, size, true, &obj);
if (ret)
return ret;
bo = gem_to_ast_bo(obj);
ret = ast_bo_reserve(bo, false);
if (unlikely(ret != 0))
goto fail;

ret = ast_bo_pin(bo, TTM_PL_FLAG_VRAM, &gpu_addr);
ast_bo_unreserve(bo);
gbo = drm_gem_vram_of_gem(obj);
ret = drm_gem_vram_pin(gbo, DRM_GEM_VRAM_PL_FLAG_VRAM);
if (ret)
goto fail;
gpu_addr = drm_gem_vram_offset(gbo);
if (gpu_addr < 0) {
drm_gem_vram_unpin(gbo);
ret = (int)gpu_addr;
goto fail;
}

/* kmap the object */
ret = ttm_bo_kmap(&bo->bo, 0, bo->bo.num_pages, &ast->cache_kmap);
if (ret)
base = drm_gem_vram_kmap_at(gbo, true, NULL, &ast->cache_kmap);
if (IS_ERR(base)) {
ret = PTR_ERR(base);
goto fail;
}

ast->cursor_cache = obj;
ast->cursor_cache_gpu_addr = gpu_addr;

@@ -952,7 +956,9 @@ static int ast_cursor_init(struct drm_device *dev)
static void ast_cursor_fini(struct drm_device *dev)
{
struct ast_private *ast = dev->dev_private;
ttm_bo_kunmap(&ast->cache_kmap);
struct drm_gem_vram_object *gbo =
drm_gem_vram_of_gem(ast->cursor_cache);
drm_gem_vram_kunmap_at(gbo, &ast->cache_kmap);
drm_gem_object_put_unlocked(ast->cursor_cache);
}

@@ -1173,8 +1179,8 @@ static int ast_cursor_set(struct drm_crtc *crtc,
struct ast_private *ast = crtc->dev->dev_private;
struct ast_crtc *ast_crtc = to_ast_crtc(crtc);
struct drm_gem_object *obj;
struct ast_bo *bo;
uint64_t gpu_addr;
struct drm_gem_vram_object *gbo;
s64 gpu_addr;
u32 csum;
int ret;
struct ttm_bo_kmap_obj uobj_map;

@@ -1193,19 +1199,27 @@ static int ast_cursor_set(struct drm_crtc *crtc,
DRM_ERROR("Cannot find cursor object %x for crtc\n", handle);
return -ENOENT;
}
bo = gem_to_ast_bo(obj);
gbo = drm_gem_vram_of_gem(obj);

ret = ast_bo_reserve(bo, false);
ret = drm_gem_vram_lock(gbo, false);
if (ret)
goto fail;

ret = ttm_bo_kmap(&bo->bo, 0, bo->bo.num_pages, &uobj_map);

src = ttm_kmap_obj_virtual(&uobj_map, &src_isiomem);
dst = ttm_kmap_obj_virtual(&ast->cache_kmap, &dst_isiomem);

memset(&uobj_map, 0, sizeof(uobj_map));
src = drm_gem_vram_kmap_at(gbo, true, &src_isiomem, &uobj_map);
if (IS_ERR(src)) {
ret = PTR_ERR(src);
goto fail_unlock;
}
if (src_isiomem == true)
DRM_ERROR("src cursor bo should be in main memory\n");

dst = drm_gem_vram_kmap_at(drm_gem_vram_of_gem(ast->cursor_cache),
false, &dst_isiomem, &ast->cache_kmap);
if (IS_ERR(dst)) {
ret = PTR_ERR(dst);
goto fail_unlock;
}
if (dst_isiomem == false)
DRM_ERROR("dst bo should be in VRAM\n");

@@ -1214,11 +1228,14 @@ static int ast_cursor_set(struct drm_crtc *crtc,
/* do data transfer to cursor cache */
csum = copy_cursor_image(src, dst, width, height);

drm_gem_vram_kunmap_at(gbo, &uobj_map);
drm_gem_vram_unlock(gbo);

/* write checksum + signature */
ttm_bo_kunmap(&uobj_map);
ast_bo_unreserve(bo);
{
u8 *dst = (u8 *)ast->cache_kmap.virtual + (AST_HWC_SIZE + AST_HWC_SIGNATURE_SIZE)*ast->next_cursor + AST_HWC_SIZE;
u8 *dst = drm_gem_vram_kmap_at(drm_gem_vram_of_gem(ast->cursor_cache),
false, NULL, &ast->cache_kmap);
dst += (AST_HWC_SIZE + AST_HWC_SIGNATURE_SIZE)*ast->next_cursor + AST_HWC_SIZE;
writel(csum, dst);
writel(width, dst + AST_HWC_SIGNATURE_SizeX);
writel(height, dst + AST_HWC_SIGNATURE_SizeY);

@@ -1244,6 +1261,9 @@ static int ast_cursor_set(struct drm_crtc *crtc,

drm_gem_object_put_unlocked(obj);
return 0;

fail_unlock:
drm_gem_vram_unlock(gbo);
fail:
drm_gem_object_put_unlocked(obj);
return ret;

@@ -1257,7 +1277,9 @@ static int ast_cursor_move(struct drm_crtc *crtc,
int x_offset, y_offset;
u8 *sig;

sig = (u8 *)ast->cache_kmap.virtual + (AST_HWC_SIZE + AST_HWC_SIGNATURE_SIZE)*ast->next_cursor + AST_HWC_SIZE;
sig = drm_gem_vram_kmap_at(drm_gem_vram_of_gem(ast->cursor_cache),
false, NULL, &ast->cache_kmap);
sig += (AST_HWC_SIZE + AST_HWC_SIGNATURE_SIZE)*ast->next_cursor + AST_HWC_SIZE;
writel(x, sig + AST_HWC_SIGNATURE_X);
writel(y, sig + AST_HWC_SIGNATURE_Y);

@@ -26,168 +26,21 @@
 * Authors: Dave Airlie <airlied@redhat.com>
 */
#include <drm/drmP.h>
#include <drm/ttm/ttm_page_alloc.h>

#include "ast_drv.h"

static inline struct ast_private *
ast_bdev(struct ttm_bo_device *bd)
{
return container_of(bd, struct ast_private, ttm.bdev);
}

static void ast_bo_ttm_destroy(struct ttm_buffer_object *tbo)
{
struct ast_bo *bo;

bo = container_of(tbo, struct ast_bo, bo);

drm_gem_object_release(&bo->gem);
kfree(bo);
}

static bool ast_ttm_bo_is_ast_bo(struct ttm_buffer_object *bo)
{
if (bo->destroy == &ast_bo_ttm_destroy)
return true;
return false;
}

static int
ast_bo_init_mem_type(struct ttm_bo_device *bdev, uint32_t type,
struct ttm_mem_type_manager *man)
{
switch (type) {
case TTM_PL_SYSTEM:
man->flags = TTM_MEMTYPE_FLAG_MAPPABLE;
man->available_caching = TTM_PL_MASK_CACHING;
man->default_caching = TTM_PL_FLAG_CACHED;
break;
case TTM_PL_VRAM:
man->func = &ttm_bo_manager_func;
man->flags = TTM_MEMTYPE_FLAG_FIXED |
TTM_MEMTYPE_FLAG_MAPPABLE;
man->available_caching = TTM_PL_FLAG_UNCACHED |
TTM_PL_FLAG_WC;
man->default_caching = TTM_PL_FLAG_WC;
break;
default:
DRM_ERROR("Unsupported memory type %u\n", (unsigned)type);
return -EINVAL;
}
return 0;
}

static void
ast_bo_evict_flags(struct ttm_buffer_object *bo, struct ttm_placement *pl)
{
struct ast_bo *astbo = ast_bo(bo);

if (!ast_ttm_bo_is_ast_bo(bo))
return;

ast_ttm_placement(astbo, TTM_PL_FLAG_SYSTEM);
*pl = astbo->placement;
}

static int ast_bo_verify_access(struct ttm_buffer_object *bo, struct file *filp)
{
struct ast_bo *astbo = ast_bo(bo);

return drm_vma_node_verify_access(&astbo->gem.vma_node,
filp->private_data);
}

static int ast_ttm_io_mem_reserve(struct ttm_bo_device *bdev,
struct ttm_mem_reg *mem)
{
struct ttm_mem_type_manager *man = &bdev->man[mem->mem_type];
struct ast_private *ast = ast_bdev(bdev);

mem->bus.addr = NULL;
mem->bus.offset = 0;
mem->bus.size = mem->num_pages << PAGE_SHIFT;
mem->bus.base = 0;
mem->bus.is_iomem = false;
if (!(man->flags & TTM_MEMTYPE_FLAG_MAPPABLE))
return -EINVAL;
switch (mem->mem_type) {
case TTM_PL_SYSTEM:
/* system memory */
return 0;
case TTM_PL_VRAM:
mem->bus.offset = mem->start << PAGE_SHIFT;
mem->bus.base = pci_resource_start(ast->dev->pdev, 0);
mem->bus.is_iomem = true;
break;
default:
return -EINVAL;
break;
}
return 0;
}

static void ast_ttm_io_mem_free(struct ttm_bo_device *bdev, struct ttm_mem_reg *mem)
{
}

static void ast_ttm_backend_destroy(struct ttm_tt *tt)
{
ttm_tt_fini(tt);
kfree(tt);
}

static struct ttm_backend_func ast_tt_backend_func = {
.destroy = &ast_ttm_backend_destroy,
};


static struct ttm_tt *ast_ttm_tt_create(struct ttm_buffer_object *bo,
uint32_t page_flags)
{
struct ttm_tt *tt;

tt = kzalloc(sizeof(struct ttm_tt), GFP_KERNEL);
if (tt == NULL)
return NULL;
tt->func = &ast_tt_backend_func;
if (ttm_tt_init(tt, bo, page_flags)) {
kfree(tt);
return NULL;
}
return tt;
}

struct ttm_bo_driver ast_bo_driver = {
.ttm_tt_create = ast_ttm_tt_create,
.init_mem_type = ast_bo_init_mem_type,
.eviction_valuable = ttm_bo_eviction_valuable,
.evict_flags = ast_bo_evict_flags,
.move = NULL,
.verify_access = ast_bo_verify_access,
.io_mem_reserve = &ast_ttm_io_mem_reserve,
.io_mem_free = &ast_ttm_io_mem_free,
};

int ast_mm_init(struct ast_private *ast)
{
struct drm_vram_mm *vmm;
int ret;
struct drm_device *dev = ast->dev;
struct ttm_bo_device *bdev = &ast->ttm.bdev;

ret = ttm_bo_device_init(&ast->ttm.bdev,
|
||||
&ast_bo_driver,
|
||||
dev->anon_inode->i_mapping,
|
||||
true);
|
||||
if (ret) {
|
||||
DRM_ERROR("Error initialising bo driver; %d\n", ret);
|
||||
return ret;
|
||||
}
|
||||
|
||||
ret = ttm_bo_init_mm(bdev, TTM_PL_VRAM,
|
||||
ast->vram_size >> PAGE_SHIFT);
|
||||
if (ret) {
|
||||
DRM_ERROR("Failed ttm VRAM init: %d\n", ret);
|
||||
vmm = drm_vram_helper_alloc_mm(
|
||||
dev, pci_resource_start(dev->pdev, 0),
|
||||
ast->vram_size, &drm_gem_vram_mm_funcs);
|
||||
if (IS_ERR(vmm)) {
|
||||
ret = PTR_ERR(vmm);
|
||||
DRM_ERROR("Error initializing VRAM MM; %d\n", ret);
|
||||
return ret;
|
||||
}
|
||||
|
||||
|
@@ -203,148 +56,9 @@ void ast_mm_fini(struct ast_private *ast)
|
|||
{
|
||||
struct drm_device *dev = ast->dev;
|
||||
|
||||
ttm_bo_device_release(&ast->ttm.bdev);
|
||||
drm_vram_helper_release_mm(dev);
|
||||
|
||||
arch_phys_wc_del(ast->fb_mtrr);
|
||||
arch_io_free_memtype_wc(pci_resource_start(dev->pdev, 0),
|
||||
pci_resource_len(dev->pdev, 0));
|
||||
}
|
||||
|
||||
void ast_ttm_placement(struct ast_bo *bo, int domain)
|
||||
{
|
||||
u32 c = 0;
|
||||
unsigned i;
|
||||
|
||||
bo->placement.placement = bo->placements;
|
||||
bo->placement.busy_placement = bo->placements;
|
||||
if (domain & TTM_PL_FLAG_VRAM)
|
||||
bo->placements[c++].flags = TTM_PL_FLAG_WC | TTM_PL_FLAG_UNCACHED | TTM_PL_FLAG_VRAM;
|
||||
if (domain & TTM_PL_FLAG_SYSTEM)
|
||||
bo->placements[c++].flags = TTM_PL_FLAG_CACHED | TTM_PL_FLAG_SYSTEM;
|
||||
if (!c)
|
||||
bo->placements[c++].flags = TTM_PL_FLAG_CACHED | TTM_PL_FLAG_SYSTEM;
|
||||
bo->placement.num_placement = c;
|
||||
bo->placement.num_busy_placement = c;
|
||||
for (i = 0; i < c; ++i) {
|
||||
bo->placements[i].fpfn = 0;
|
||||
bo->placements[i].lpfn = 0;
|
||||
}
|
||||
}
|
||||
|
||||
int ast_bo_create(struct drm_device *dev, int size, int align,
|
||||
uint32_t flags, struct ast_bo **pastbo)
|
||||
{
|
||||
struct ast_private *ast = dev->dev_private;
|
||||
struct ast_bo *astbo;
|
||||
size_t acc_size;
|
||||
int ret;
|
||||
|
||||
astbo = kzalloc(sizeof(struct ast_bo), GFP_KERNEL);
|
||||
if (!astbo)
|
||||
return -ENOMEM;
|
||||
|
||||
ret = drm_gem_object_init(dev, &astbo->gem, size);
|
||||
if (ret)
|
||||
goto error;
|
||||
|
||||
astbo->bo.bdev = &ast->ttm.bdev;
|
||||
|
||||
ast_ttm_placement(astbo, TTM_PL_FLAG_VRAM | TTM_PL_FLAG_SYSTEM);
|
||||
|
||||
acc_size = ttm_bo_dma_acc_size(&ast->ttm.bdev, size,
|
||||
sizeof(struct ast_bo));
|
||||
|
||||
ret = ttm_bo_init(&ast->ttm.bdev, &astbo->bo, size,
|
||||
ttm_bo_type_device, &astbo->placement,
|
||||
align >> PAGE_SHIFT, false, acc_size,
|
||||
NULL, NULL, ast_bo_ttm_destroy);
|
||||
if (ret)
|
||||
goto error;
|
||||
|
||||
*pastbo = astbo;
|
||||
return 0;
|
||||
error:
|
||||
kfree(astbo);
|
||||
return ret;
|
||||
}
|
||||
|
||||
static inline u64 ast_bo_gpu_offset(struct ast_bo *bo)
|
||||
{
|
||||
return bo->bo.offset;
|
||||
}
|
||||
|
||||
int ast_bo_pin(struct ast_bo *bo, u32 pl_flag, u64 *gpu_addr)
|
||||
{
|
||||
struct ttm_operation_ctx ctx = { false, false };
|
||||
int i, ret;
|
||||
|
||||
if (bo->pin_count) {
|
||||
bo->pin_count++;
|
||||
if (gpu_addr)
|
||||
*gpu_addr = ast_bo_gpu_offset(bo);
|
||||
}
|
||||
|
||||
ast_ttm_placement(bo, pl_flag);
|
||||
for (i = 0; i < bo->placement.num_placement; i++)
|
||||
bo->placements[i].flags |= TTM_PL_FLAG_NO_EVICT;
|
||||
ret = ttm_bo_validate(&bo->bo, &bo->placement, &ctx);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
bo->pin_count = 1;
|
||||
if (gpu_addr)
|
||||
*gpu_addr = ast_bo_gpu_offset(bo);
|
||||
return 0;
|
||||
}
|
||||
|
||||
int ast_bo_unpin(struct ast_bo *bo)
|
||||
{
|
||||
struct ttm_operation_ctx ctx = { false, false };
|
||||
int i;
|
||||
if (!bo->pin_count) {
|
||||
DRM_ERROR("unpin bad %p\n", bo);
|
||||
return 0;
|
||||
}
|
||||
bo->pin_count--;
|
||||
if (bo->pin_count)
|
||||
return 0;
|
||||
|
||||
for (i = 0; i < bo->placement.num_placement ; i++)
|
||||
bo->placements[i].flags &= ~TTM_PL_FLAG_NO_EVICT;
|
||||
return ttm_bo_validate(&bo->bo, &bo->placement, &ctx);
|
||||
}
|
||||
|
||||
int ast_bo_push_sysram(struct ast_bo *bo)
|
||||
{
|
||||
struct ttm_operation_ctx ctx = { false, false };
|
||||
int i, ret;
|
||||
if (!bo->pin_count) {
|
||||
DRM_ERROR("unpin bad %p\n", bo);
|
||||
return 0;
|
||||
}
|
||||
bo->pin_count--;
|
||||
if (bo->pin_count)
|
||||
return 0;
|
||||
|
||||
if (bo->kmap.virtual)
|
||||
ttm_bo_kunmap(&bo->kmap);
|
||||
|
||||
ast_ttm_placement(bo, TTM_PL_FLAG_SYSTEM);
|
||||
for (i = 0; i < bo->placement.num_placement ; i++)
|
||||
bo->placements[i].flags |= TTM_PL_FLAG_NO_EVICT;
|
||||
|
||||
ret = ttm_bo_validate(&bo->bo, &bo->placement, &ctx);
|
||||
if (ret) {
|
||||
DRM_ERROR("pushing to VRAM failed\n");
|
||||
return ret;
|
||||
}
|
||||
return 0;
|
||||
}
|
||||
|
||||
int ast_mmap(struct file *filp, struct vm_area_struct *vma)
|
||||
{
|
||||
struct drm_file *file_priv = filp->private_data;
|
||||
struct ast_private *ast = file_priv->minor->dev->dev_private;
|
||||
|
||||
return ttm_bo_mmap(filp, vma, &ast->ttm.bdev);
|
||||
}
|
||||
|
|
|
@@ -603,8 +603,6 @@ static int atmel_hlcdc_plane_atomic_check(struct drm_plane *p,
	const struct drm_display_mode *mode;
	struct drm_crtc_state *crtc_state;
	unsigned int tmp;
	int hsub = 1;
	int vsub = 1;
	int ret;
	int i;

@@ -642,13 +640,10 @@ static int atmel_hlcdc_plane_atomic_check(struct drm_plane *p,
	if (state->nplanes > ATMEL_HLCDC_LAYER_MAX_PLANES)
		return -EINVAL;

	hsub = drm_format_horz_chroma_subsampling(fb->format->format);
	vsub = drm_format_vert_chroma_subsampling(fb->format->format);

	for (i = 0; i < state->nplanes; i++) {
		unsigned int offset = 0;
		int xdiv = i ? hsub : 1;
		int ydiv = i ? vsub : 1;
		int xdiv = i ? fb->format->hsub : 1;
		int ydiv = i ? fb->format->vsub : 1;

		state->bpp[i] = fb->format->cpp[i];
		if (!state->bpp[i])
@@ -3,7 +3,7 @@ config DRM_BOCHS
	tristate "DRM Support for bochs dispi vga interface (qemu stdvga)"
	depends on DRM && PCI && MMU
	select DRM_KMS_HELPER
	select DRM_TTM
	select DRM_VRAM_HELPER
	help
	  Choose this option for qemu.
	  If M is selected the module will be called bochs-drm.
@@ -10,9 +10,9 @@
#include <drm/drm_simple_kms_helper.h>

#include <drm/drm_gem.h>
#include <drm/drm_gem_vram_helper.h>

#include <drm/ttm/ttm_bo_driver.h>
#include <drm/ttm/ttm_page_alloc.h>
#include <drm/drm_vram_mm_helper.h>

/* ---------------------------------------------------------------------- */

@@ -73,38 +73,8 @@ struct bochs_device {
	struct drm_device *dev;
	struct drm_simple_display_pipe pipe;
	struct drm_connector connector;

	/* ttm */
	struct {
		struct ttm_bo_device bdev;
		bool initialized;
	} ttm;
};

struct bochs_bo {
	struct ttm_buffer_object bo;
	struct ttm_placement placement;
	struct ttm_bo_kmap_obj kmap;
	struct drm_gem_object gem;
	struct ttm_place placements[3];
	int pin_count;
};

static inline struct bochs_bo *bochs_bo(struct ttm_buffer_object *bo)
{
	return container_of(bo, struct bochs_bo, bo);
}

static inline struct bochs_bo *gem_to_bochs_bo(struct drm_gem_object *gem)
{
	return container_of(gem, struct bochs_bo, gem);
}

static inline u64 bochs_bo_mmap_offset(struct bochs_bo *bo)
{
	return drm_vma_node_offset_addr(&bo->bo.vma_node);
}

/* ---------------------------------------------------------------------- */

/* bochs_hw.c */

@@ -122,26 +92,6 @@ int bochs_hw_load_edid(struct bochs_device *bochs);
/* bochs_mm.c */
int bochs_mm_init(struct bochs_device *bochs);
void bochs_mm_fini(struct bochs_device *bochs);
int bochs_mmap(struct file *filp, struct vm_area_struct *vma);

int bochs_gem_create(struct drm_device *dev, u32 size, bool iskernel,
		     struct drm_gem_object **obj);
int bochs_gem_init_object(struct drm_gem_object *obj);
void bochs_gem_free_object(struct drm_gem_object *obj);
int bochs_dumb_create(struct drm_file *file, struct drm_device *dev,
		      struct drm_mode_create_dumb *args);
int bochs_dumb_mmap_offset(struct drm_file *file, struct drm_device *dev,
			   uint32_t handle, uint64_t *offset);

int bochs_bo_pin(struct bochs_bo *bo, u32 pl_flag);
int bochs_bo_unpin(struct bochs_bo *bo);

int bochs_gem_prime_pin(struct drm_gem_object *obj);
void bochs_gem_prime_unpin(struct drm_gem_object *obj);
void *bochs_gem_prime_vmap(struct drm_gem_object *obj);
void bochs_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
int bochs_gem_prime_mmap(struct drm_gem_object *obj,
			 struct vm_area_struct *vma);

/* bochs_kms.c */
int bochs_kms_init(struct bochs_device *bochs);
@@ -10,6 +10,7 @@
#include <linux/slab.h>
#include <drm/drm_fb_helper.h>
#include <drm/drm_probe_helper.h>
#include <drm/drm_atomic_helper.h>

#include "bochs.h"

@@ -63,14 +64,7 @@ static int bochs_load(struct drm_device *dev)

static const struct file_operations bochs_fops = {
	.owner = THIS_MODULE,
	.open = drm_open,
	.release = drm_release,
	.unlocked_ioctl = drm_ioctl,
	.compat_ioctl = drm_compat_ioctl,
	.poll = drm_poll,
	.read = drm_read,
	.llseek = no_llseek,
	.mmap = bochs_mmap,
	DRM_VRAM_MM_FILE_OPERATIONS
};

static struct drm_driver bochs_driver = {

@@ -82,17 +76,8 @@ static struct drm_driver bochs_driver = {
	.date = "20130925",
	.major = 1,
	.minor = 0,
	.gem_free_object_unlocked = bochs_gem_free_object,
	.dumb_create = bochs_dumb_create,
	.dumb_map_offset = bochs_dumb_mmap_offset,

	.gem_prime_export = drm_gem_prime_export,
	.gem_prime_import = drm_gem_prime_import,
	.gem_prime_pin = bochs_gem_prime_pin,
	.gem_prime_unpin = bochs_gem_prime_unpin,
	.gem_prime_vmap = bochs_gem_prime_vmap,
	.gem_prime_vunmap = bochs_gem_prime_vunmap,
	.gem_prime_mmap = bochs_gem_prime_mmap,
	DRM_GEM_VRAM_DRIVER,
	DRM_GEM_VRAM_DRIVER_PRIME,
};

/* ---------------------------------------------------------------------- */

@@ -174,6 +159,7 @@ static void bochs_pci_remove(struct pci_dev *pdev)
{
	struct drm_device *dev = pci_get_drvdata(pdev);

	drm_atomic_helper_shutdown(dev);
	drm_dev_unregister(dev);
	bochs_unload(dev);
	drm_dev_put(dev);
@@ -30,16 +30,16 @@ static const uint32_t bochs_formats[] = {
static void bochs_plane_update(struct bochs_device *bochs,
			       struct drm_plane_state *state)
{
	struct bochs_bo *bo;
	struct drm_gem_vram_object *gbo;

	if (!state->fb || !bochs->stride)
		return;

	bo = gem_to_bochs_bo(state->fb->obj[0]);
	gbo = drm_gem_vram_of_gem(state->fb->obj[0]);
	bochs_hw_setbase(bochs,
			 state->crtc_x,
			 state->crtc_y,
			 bo->bo.offset);
			 gbo->bo.offset);
	bochs_hw_setformat(bochs, state->fb->format);
}

@@ -72,23 +72,23 @@ static void bochs_pipe_update(struct drm_simple_display_pipe *pipe,
static int bochs_pipe_prepare_fb(struct drm_simple_display_pipe *pipe,
				 struct drm_plane_state *new_state)
{
	struct bochs_bo *bo;
	struct drm_gem_vram_object *gbo;

	if (!new_state->fb)
		return 0;
	bo = gem_to_bochs_bo(new_state->fb->obj[0]);
	return bochs_bo_pin(bo, TTM_PL_FLAG_VRAM);
	gbo = drm_gem_vram_of_gem(new_state->fb->obj[0]);
	return drm_gem_vram_pin(gbo, DRM_GEM_VRAM_PL_FLAG_VRAM);
}

static void bochs_pipe_cleanup_fb(struct drm_simple_display_pipe *pipe,
				  struct drm_plane_state *old_state)
{
	struct bochs_bo *bo;
	struct drm_gem_vram_object *gbo;

	if (!old_state->fb)
		return;
	bo = gem_to_bochs_bo(old_state->fb->obj[0]);
	bochs_bo_unpin(bo);
	gbo = drm_gem_vram_of_gem(old_state->fb->obj[0]);
	drm_gem_vram_unpin(gbo);
}

static const struct drm_simple_display_pipe_funcs bochs_pipe_funcs = {
@@ -7,435 +7,22 @@
|
|||
|
||||
#include "bochs.h"
|
||||
|
||||
static void bochs_ttm_placement(struct bochs_bo *bo, int domain);
|
||||
|
||||
/* ---------------------------------------------------------------------- */
|
||||
|
||||
static inline struct bochs_device *bochs_bdev(struct ttm_bo_device *bd)
|
||||
{
|
||||
return container_of(bd, struct bochs_device, ttm.bdev);
|
||||
}
|
||||
|
||||
static void bochs_bo_ttm_destroy(struct ttm_buffer_object *tbo)
|
||||
{
|
||||
struct bochs_bo *bo;
|
||||
|
||||
bo = container_of(tbo, struct bochs_bo, bo);
|
||||
drm_gem_object_release(&bo->gem);
|
||||
kfree(bo);
|
||||
}
|
||||
|
||||
static bool bochs_ttm_bo_is_bochs_bo(struct ttm_buffer_object *bo)
|
||||
{
|
||||
if (bo->destroy == &bochs_bo_ttm_destroy)
|
||||
return true;
|
||||
return false;
|
||||
}
|
||||
|
||||
static int bochs_bo_init_mem_type(struct ttm_bo_device *bdev, uint32_t type,
|
||||
struct ttm_mem_type_manager *man)
|
||||
{
|
||||
switch (type) {
|
||||
case TTM_PL_SYSTEM:
|
||||
man->flags = TTM_MEMTYPE_FLAG_MAPPABLE;
|
||||
man->available_caching = TTM_PL_MASK_CACHING;
|
||||
man->default_caching = TTM_PL_FLAG_CACHED;
|
||||
break;
|
||||
case TTM_PL_VRAM:
|
||||
man->func = &ttm_bo_manager_func;
|
||||
man->flags = TTM_MEMTYPE_FLAG_FIXED |
|
||||
TTM_MEMTYPE_FLAG_MAPPABLE;
|
||||
man->available_caching = TTM_PL_FLAG_UNCACHED |
|
||||
TTM_PL_FLAG_WC;
|
||||
man->default_caching = TTM_PL_FLAG_WC;
|
||||
break;
|
||||
default:
|
||||
DRM_ERROR("Unsupported memory type %u\n", (unsigned)type);
|
||||
return -EINVAL;
|
||||
}
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void
|
||||
bochs_bo_evict_flags(struct ttm_buffer_object *bo, struct ttm_placement *pl)
|
||||
{
|
||||
struct bochs_bo *bochsbo = bochs_bo(bo);
|
||||
|
||||
if (!bochs_ttm_bo_is_bochs_bo(bo))
|
||||
return;
|
||||
|
||||
bochs_ttm_placement(bochsbo, TTM_PL_FLAG_SYSTEM);
|
||||
*pl = bochsbo->placement;
|
||||
}
|
||||
|
||||
static int bochs_bo_verify_access(struct ttm_buffer_object *bo,
|
||||
struct file *filp)
|
||||
{
|
||||
struct bochs_bo *bochsbo = bochs_bo(bo);
|
||||
|
||||
return drm_vma_node_verify_access(&bochsbo->gem.vma_node,
|
||||
filp->private_data);
|
||||
}
|
||||
|
||||
static int bochs_ttm_io_mem_reserve(struct ttm_bo_device *bdev,
|
||||
struct ttm_mem_reg *mem)
|
||||
{
|
||||
struct ttm_mem_type_manager *man = &bdev->man[mem->mem_type];
|
||||
struct bochs_device *bochs = bochs_bdev(bdev);
|
||||
|
||||
mem->bus.addr = NULL;
|
||||
mem->bus.offset = 0;
|
||||
mem->bus.size = mem->num_pages << PAGE_SHIFT;
|
||||
mem->bus.base = 0;
|
||||
mem->bus.is_iomem = false;
|
||||
if (!(man->flags & TTM_MEMTYPE_FLAG_MAPPABLE))
|
||||
return -EINVAL;
|
||||
switch (mem->mem_type) {
|
||||
case TTM_PL_SYSTEM:
|
||||
/* system memory */
|
||||
return 0;
|
||||
case TTM_PL_VRAM:
|
||||
mem->bus.offset = mem->start << PAGE_SHIFT;
|
||||
mem->bus.base = bochs->fb_base;
|
||||
mem->bus.is_iomem = true;
|
||||
break;
|
||||
default:
|
||||
return -EINVAL;
|
||||
break;
|
||||
}
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void bochs_ttm_io_mem_free(struct ttm_bo_device *bdev,
|
||||
struct ttm_mem_reg *mem)
|
||||
{
|
||||
}
|
||||
|
||||
static void bochs_ttm_backend_destroy(struct ttm_tt *tt)
|
||||
{
|
||||
ttm_tt_fini(tt);
|
||||
kfree(tt);
|
||||
}
|
||||
|
||||
static struct ttm_backend_func bochs_tt_backend_func = {
|
||||
.destroy = &bochs_ttm_backend_destroy,
|
||||
};
|
||||
|
||||
static struct ttm_tt *bochs_ttm_tt_create(struct ttm_buffer_object *bo,
|
||||
uint32_t page_flags)
|
||||
{
|
||||
struct ttm_tt *tt;
|
||||
|
||||
tt = kzalloc(sizeof(struct ttm_tt), GFP_KERNEL);
|
||||
if (tt == NULL)
|
||||
return NULL;
|
||||
tt->func = &bochs_tt_backend_func;
|
||||
if (ttm_tt_init(tt, bo, page_flags)) {
|
||||
kfree(tt);
|
||||
return NULL;
|
||||
}
|
||||
return tt;
|
||||
}
|
||||
|
||||
static struct ttm_bo_driver bochs_bo_driver = {
|
||||
.ttm_tt_create = bochs_ttm_tt_create,
|
||||
.init_mem_type = bochs_bo_init_mem_type,
|
||||
.eviction_valuable = ttm_bo_eviction_valuable,
|
||||
.evict_flags = bochs_bo_evict_flags,
|
||||
.move = NULL,
|
||||
.verify_access = bochs_bo_verify_access,
|
||||
.io_mem_reserve = &bochs_ttm_io_mem_reserve,
|
||||
.io_mem_free = &bochs_ttm_io_mem_free,
|
||||
};
|
||||
|
||||
int bochs_mm_init(struct bochs_device *bochs)
|
||||
{
|
||||
struct ttm_bo_device *bdev = &bochs->ttm.bdev;
|
||||
int ret;
|
||||
struct drm_vram_mm *vmm;
|
||||
|
||||
ret = ttm_bo_device_init(&bochs->ttm.bdev,
|
||||
&bochs_bo_driver,
|
||||
bochs->dev->anon_inode->i_mapping,
|
||||
true);
|
||||
if (ret) {
|
||||
DRM_ERROR("Error initialising bo driver; %d\n", ret);
|
||||
return ret;
|
||||
}
|
||||
|
||||
ret = ttm_bo_init_mm(bdev, TTM_PL_VRAM,
|
||||
bochs->fb_size >> PAGE_SHIFT);
|
||||
if (ret) {
|
||||
DRM_ERROR("Failed ttm VRAM init: %d\n", ret);
|
||||
return ret;
|
||||
}
|
||||
|
||||
bochs->ttm.initialized = true;
|
||||
return 0;
|
||||
vmm = drm_vram_helper_alloc_mm(bochs->dev, bochs->fb_base,
|
||||
bochs->fb_size,
|
||||
&drm_gem_vram_mm_funcs);
|
||||
return PTR_ERR_OR_ZERO(vmm);
|
||||
}
|
||||
|
||||
void bochs_mm_fini(struct bochs_device *bochs)
|
||||
{
|
||||
if (!bochs->ttm.initialized)
|
||||
if (!bochs->dev->vram_mm)
|
||||
return;
|
||||
|
||||
ttm_bo_device_release(&bochs->ttm.bdev);
|
||||
bochs->ttm.initialized = false;
|
||||
}
|
||||
|
||||
static void bochs_ttm_placement(struct bochs_bo *bo, int domain)
|
||||
{
|
||||
unsigned i;
|
||||
u32 c = 0;
|
||||
bo->placement.placement = bo->placements;
|
||||
bo->placement.busy_placement = bo->placements;
|
||||
if (domain & TTM_PL_FLAG_VRAM) {
|
||||
bo->placements[c++].flags = TTM_PL_FLAG_WC
|
||||
| TTM_PL_FLAG_UNCACHED
|
||||
| TTM_PL_FLAG_VRAM;
|
||||
}
|
||||
if (domain & TTM_PL_FLAG_SYSTEM) {
|
||||
bo->placements[c++].flags = TTM_PL_MASK_CACHING
|
||||
| TTM_PL_FLAG_SYSTEM;
|
||||
}
|
||||
if (!c) {
|
||||
bo->placements[c++].flags = TTM_PL_MASK_CACHING
|
||||
| TTM_PL_FLAG_SYSTEM;
|
||||
}
|
||||
for (i = 0; i < c; ++i) {
|
||||
bo->placements[i].fpfn = 0;
|
||||
bo->placements[i].lpfn = 0;
|
||||
}
|
||||
bo->placement.num_placement = c;
|
||||
bo->placement.num_busy_placement = c;
|
||||
}
|
||||
|
||||
int bochs_bo_pin(struct bochs_bo *bo, u32 pl_flag)
|
||||
{
|
||||
struct ttm_operation_ctx ctx = { false, false };
|
||||
int i, ret;
|
||||
|
||||
if (bo->pin_count) {
|
||||
bo->pin_count++;
|
||||
return 0;
|
||||
}
|
||||
|
||||
bochs_ttm_placement(bo, pl_flag);
|
||||
for (i = 0; i < bo->placement.num_placement; i++)
|
||||
bo->placements[i].flags |= TTM_PL_FLAG_NO_EVICT;
|
||||
ret = ttm_bo_reserve(&bo->bo, true, false, NULL);
|
||||
if (ret)
|
||||
return ret;
|
||||
ret = ttm_bo_validate(&bo->bo, &bo->placement, &ctx);
|
||||
ttm_bo_unreserve(&bo->bo);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
bo->pin_count = 1;
|
||||
return 0;
|
||||
}
|
||||
|
||||
int bochs_bo_unpin(struct bochs_bo *bo)
|
||||
{
|
||||
struct ttm_operation_ctx ctx = { false, false };
|
||||
int i, ret;
|
||||
|
||||
if (!bo->pin_count) {
|
||||
DRM_ERROR("unpin bad %p\n", bo);
|
||||
return 0;
|
||||
}
|
||||
bo->pin_count--;
|
||||
|
||||
if (bo->pin_count)
|
||||
return 0;
|
||||
|
||||
for (i = 0; i < bo->placement.num_placement; i++)
|
||||
bo->placements[i].flags &= ~TTM_PL_FLAG_NO_EVICT;
|
||||
ret = ttm_bo_reserve(&bo->bo, true, false, NULL);
|
||||
if (ret)
|
||||
return ret;
|
||||
ret = ttm_bo_validate(&bo->bo, &bo->placement, &ctx);
|
||||
ttm_bo_unreserve(&bo->bo);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
int bochs_mmap(struct file *filp, struct vm_area_struct *vma)
|
||||
{
|
||||
struct drm_file *file_priv = filp->private_data;
|
||||
struct bochs_device *bochs = file_priv->minor->dev->dev_private;
|
||||
|
||||
return ttm_bo_mmap(filp, vma, &bochs->ttm.bdev);
|
||||
}
|
||||
|
||||
/* ---------------------------------------------------------------------- */
|
||||
|
||||
static int bochs_bo_create(struct drm_device *dev, int size, int align,
|
||||
uint32_t flags, struct bochs_bo **pbochsbo)
|
||||
{
|
||||
struct bochs_device *bochs = dev->dev_private;
|
||||
struct bochs_bo *bochsbo;
|
||||
size_t acc_size;
|
||||
int ret;
|
||||
|
||||
bochsbo = kzalloc(sizeof(struct bochs_bo), GFP_KERNEL);
|
||||
if (!bochsbo)
|
||||
return -ENOMEM;
|
||||
|
||||
ret = drm_gem_object_init(dev, &bochsbo->gem, size);
|
||||
if (ret) {
|
||||
kfree(bochsbo);
|
||||
return ret;
|
||||
}
|
||||
|
||||
bochsbo->bo.bdev = &bochs->ttm.bdev;
|
||||
bochsbo->bo.bdev->dev_mapping = dev->anon_inode->i_mapping;
|
||||
|
||||
bochs_ttm_placement(bochsbo, TTM_PL_FLAG_VRAM | TTM_PL_FLAG_SYSTEM);
|
||||
|
||||
acc_size = ttm_bo_dma_acc_size(&bochs->ttm.bdev, size,
|
||||
sizeof(struct bochs_bo));
|
||||
|
||||
ret = ttm_bo_init(&bochs->ttm.bdev, &bochsbo->bo, size,
|
||||
ttm_bo_type_device, &bochsbo->placement,
|
||||
align >> PAGE_SHIFT, false, acc_size,
|
||||
NULL, NULL, bochs_bo_ttm_destroy);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
*pbochsbo = bochsbo;
|
||||
return 0;
|
||||
}
|
||||
|
||||
int bochs_gem_create(struct drm_device *dev, u32 size, bool iskernel,
|
||||
struct drm_gem_object **obj)
|
||||
{
|
||||
struct bochs_bo *bochsbo;
|
||||
int ret;
|
||||
|
||||
*obj = NULL;
|
||||
|
||||
size = PAGE_ALIGN(size);
|
||||
if (size == 0)
|
||||
return -EINVAL;
|
||||
|
||||
ret = bochs_bo_create(dev, size, 0, 0, &bochsbo);
|
||||
if (ret) {
|
||||
if (ret != -ERESTARTSYS)
|
||||
DRM_ERROR("failed to allocate GEM object\n");
|
||||
return ret;
|
||||
}
|
||||
*obj = &bochsbo->gem;
|
||||
return 0;
|
||||
}
|
||||
|
||||
int bochs_dumb_create(struct drm_file *file, struct drm_device *dev,
|
||||
struct drm_mode_create_dumb *args)
|
||||
{
|
||||
struct drm_gem_object *gobj;
|
||||
u32 handle;
|
||||
int ret;
|
||||
|
||||
args->pitch = args->width * ((args->bpp + 7) / 8);
|
||||
args->size = args->pitch * args->height;
|
||||
|
||||
ret = bochs_gem_create(dev, args->size, false,
|
||||
&gobj);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
ret = drm_gem_handle_create(file, gobj, &handle);
|
||||
drm_gem_object_put_unlocked(gobj);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
args->handle = handle;
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void bochs_bo_unref(struct bochs_bo **bo)
|
||||
{
|
||||
struct ttm_buffer_object *tbo;
|
||||
|
||||
if ((*bo) == NULL)
|
||||
return;
|
||||
|
||||
tbo = &((*bo)->bo);
|
||||
ttm_bo_put(tbo);
|
||||
*bo = NULL;
|
||||
}
|
||||
|
||||
void bochs_gem_free_object(struct drm_gem_object *obj)
|
||||
{
|
||||
struct bochs_bo *bochs_bo = gem_to_bochs_bo(obj);
|
||||
|
||||
bochs_bo_unref(&bochs_bo);
|
||||
}
|
||||
|
||||
int bochs_dumb_mmap_offset(struct drm_file *file, struct drm_device *dev,
|
||||
uint32_t handle, uint64_t *offset)
|
||||
{
|
||||
struct drm_gem_object *obj;
|
||||
struct bochs_bo *bo;
|
||||
|
||||
obj = drm_gem_object_lookup(file, handle);
|
||||
if (obj == NULL)
|
||||
return -ENOENT;
|
||||
|
||||
bo = gem_to_bochs_bo(obj);
|
||||
*offset = bochs_bo_mmap_offset(bo);
|
||||
|
||||
drm_gem_object_put_unlocked(obj);
|
||||
return 0;
|
||||
}
|
||||
|
||||
/* ---------------------------------------------------------------------- */
|
||||
|
||||
int bochs_gem_prime_pin(struct drm_gem_object *obj)
|
||||
{
|
||||
struct bochs_bo *bo = gem_to_bochs_bo(obj);
|
||||
|
||||
return bochs_bo_pin(bo, TTM_PL_FLAG_VRAM);
|
||||
}
|
||||
|
||||
void bochs_gem_prime_unpin(struct drm_gem_object *obj)
|
||||
{
|
||||
struct bochs_bo *bo = gem_to_bochs_bo(obj);
|
||||
|
||||
bochs_bo_unpin(bo);
|
||||
}
|
||||
|
||||
void *bochs_gem_prime_vmap(struct drm_gem_object *obj)
|
||||
{
|
||||
struct bochs_bo *bo = gem_to_bochs_bo(obj);
|
||||
bool is_iomem;
|
||||
int ret;
|
||||
|
||||
ret = bochs_bo_pin(bo, TTM_PL_FLAG_VRAM);
|
||||
if (ret)
|
||||
return NULL;
|
||||
ret = ttm_bo_kmap(&bo->bo, 0, bo->bo.num_pages, &bo->kmap);
|
||||
if (ret) {
|
||||
bochs_bo_unpin(bo);
|
||||
return NULL;
|
||||
}
|
||||
return ttm_kmap_obj_virtual(&bo->kmap, &is_iomem);
|
||||
}
|
||||
|
||||
void bochs_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
|
||||
{
|
||||
struct bochs_bo *bo = gem_to_bochs_bo(obj);
|
||||
|
||||
ttm_bo_kunmap(&bo->kmap);
|
||||
bochs_bo_unpin(bo);
|
||||
}
|
||||
|
||||
int bochs_gem_prime_mmap(struct drm_gem_object *obj,
|
||||
struct vm_area_struct *vma)
|
||||
{
|
||||
struct bochs_bo *bo = gem_to_bochs_bo(obj);
|
||||
|
||||
bo->gem.vma_node.vm_node.start = bo->bo.vma_node.vm_node.start;
|
||||
return drm_gem_prime_mmap(obj, vma);
|
||||
drm_vram_helper_release_mm(bochs->dev);
|
||||
}
|
||||
|
|
|
@@ -9,13 +9,12 @@
 */

#include <drm/drmP.h>
#include <drm/drm_panel.h>
#include <drm/drm_atomic_helper.h>
#include <drm/drm_connector.h>
#include <drm/drm_encoder.h>
#include <drm/drm_modeset_helper_vtables.h>
#include <drm/drm_probe_helper.h>
#include <drm/drm_panel.h>
#include <drm/drm_probe_helper.h>

struct panel_bridge {
	struct drm_bridge bridge;
@@ -1,250 +0,0 @@
|
|||
/*
|
||||
* Copyright 2012 Red Hat
|
||||
*
|
||||
* This file is subject to the terms and conditions of the GNU General
|
||||
* Public License version 2. See the file COPYING in the main
|
||||
* directory of this archive for more details.
|
||||
*
|
||||
* Authors: Matthew Garrett
|
||||
* Dave Airlie
|
||||
*/
|
||||
#ifndef __CIRRUS_DRV_H__
|
||||
#define __CIRRUS_DRV_H__
|
||||
|
||||
#include <video/vga.h>
|
||||
|
||||
#include <drm/drm_encoder.h>
|
||||
#include <drm/drm_fb_helper.h>
|
||||
|
||||
#include <drm/ttm/ttm_bo_api.h>
|
||||
#include <drm/ttm/ttm_bo_driver.h>
|
||||
#include <drm/ttm/ttm_placement.h>
|
||||
#include <drm/ttm/ttm_memory.h>
|
||||
#include <drm/ttm/ttm_module.h>
|
||||
|
||||
#include <drm/drm_gem.h>
|
||||
|
||||
#define DRIVER_AUTHOR "Matthew Garrett"
|
||||
|
||||
#define DRIVER_NAME "cirrus"
|
||||
#define DRIVER_DESC "qemu Cirrus emulation"
|
||||
#define DRIVER_DATE "20110418"
|
||||
|
||||
#define DRIVER_MAJOR 1
|
||||
#define DRIVER_MINOR 0
|
||||
#define DRIVER_PATCHLEVEL 0
|
||||
|
||||
#define CIRRUSFB_CONN_LIMIT 1
|
||||
|
||||
#define RREG8(reg) ioread8(((void __iomem *)cdev->rmmio) + (reg))
|
||||
#define WREG8(reg, v) iowrite8(v, ((void __iomem *)cdev->rmmio) + (reg))
|
||||
#define RREG32(reg) ioread32(((void __iomem *)cdev->rmmio) + (reg))
|
||||
#define WREG32(reg, v) iowrite32(v, ((void __iomem *)cdev->rmmio) + (reg))
|
||||
|
||||
#define SEQ_INDEX 4
|
||||
#define SEQ_DATA 5
|
||||
|
||||
#define WREG_SEQ(reg, v) \
|
||||
do { \
|
||||
WREG8(SEQ_INDEX, reg); \
|
||||
WREG8(SEQ_DATA, v); \
|
||||
} while (0) \
|
||||
|
||||
#define CRT_INDEX 0x14
|
||||
#define CRT_DATA 0x15
|
||||
|
||||
#define WREG_CRT(reg, v) \
|
||||
do { \
|
||||
WREG8(CRT_INDEX, reg); \
|
||||
WREG8(CRT_DATA, v); \
|
||||
} while (0) \
|
||||
|
||||
#define GFX_INDEX 0xe
|
||||
#define GFX_DATA 0xf
|
||||
|
||||
#define WREG_GFX(reg, v) \
|
||||
do { \
|
||||
WREG8(GFX_INDEX, reg); \
|
||||
WREG8(GFX_DATA, v); \
|
||||
} while (0) \
|
||||
|
||||
/*
|
||||
* Cirrus has a "hidden" DAC register that can be accessed by writing to
|
||||
* the pixel mask register to reset the state, then reading from the register
|
||||
* four times. The next write will then pass to the DAC
|
||||
*/
|
||||
#define VGA_DAC_MASK 0x6
|
||||
|
||||
#define WREG_HDR(v) \
|
||||
do { \
|
||||
RREG8(VGA_DAC_MASK); \
|
||||
RREG8(VGA_DAC_MASK); \
|
||||
RREG8(VGA_DAC_MASK); \
|
||||
RREG8(VGA_DAC_MASK); \
|
||||
WREG8(VGA_DAC_MASK, v); \
|
||||
} while (0) \
|
||||
|
||||
|
||||
#define CIRRUS_MAX_FB_HEIGHT 4096
|
||||
#define CIRRUS_MAX_FB_WIDTH 4096
|
||||
|
||||
#define CIRRUS_DPMS_CLEARED (-1)
|
||||
|
||||
#define to_cirrus_crtc(x) container_of(x, struct cirrus_crtc, base)
|
||||
#define to_cirrus_encoder(x) container_of(x, struct cirrus_encoder, base)
|
||||
|
||||
struct cirrus_crtc {
|
||||
struct drm_crtc base;
|
||||
int last_dpms;
|
||||
bool enabled;
|
||||
};
|
||||
|
||||
struct cirrus_fbdev;
|
||||
struct cirrus_mode_info {
|
||||
struct cirrus_crtc *crtc;
|
||||
/* pointer to fbdev info structure */
|
||||
struct cirrus_fbdev *gfbdev;
|
||||
};
|
||||
|
||||
struct cirrus_encoder {
|
||||
struct drm_encoder base;
|
||||
int last_dpms;
|
||||
};
|
||||
|
||||
struct cirrus_connector {
|
||||
struct drm_connector base;
|
||||
};
|
||||
|
||||
struct cirrus_mc {
|
||||
resource_size_t vram_size;
|
||||
resource_size_t vram_base;
|
||||
};
|
||||
|
||||
struct cirrus_device {
|
||||
struct drm_device *dev;
|
||||
unsigned long flags;
|
||||
|
||||
resource_size_t rmmio_base;
|
||||
resource_size_t rmmio_size;
|
||||
void __iomem *rmmio;
|
||||
|
||||
struct cirrus_mc mc;
|
||||
struct cirrus_mode_info mode_info;
|
||||
|
||||
int num_crtc;
|
||||
int fb_mtrr;
|
||||
|
||||
struct {
|
||||
struct ttm_bo_device bdev;
|
||||
} ttm;
|
||||
bool mm_inited;
|
||||
};
|
||||
|
||||
|
||||
struct cirrus_fbdev {
|
||||
struct drm_fb_helper helper; /* must be first */
|
||||
struct drm_framebuffer *gfb;
|
||||
void *sysram;
|
||||
int size;
|
||||
int x1, y1, x2, y2; /* dirty rect */
|
||||
spinlock_t dirty_lock;
|
||||
};
|
||||
|
||||
struct cirrus_bo {
|
||||
struct ttm_buffer_object bo;
|
||||
struct ttm_placement placement;
|
||||
struct ttm_bo_kmap_obj kmap;
|
||||
struct drm_gem_object gem;
|
||||
struct ttm_place placements[3];
|
||||
int pin_count;
|
||||
};
|
||||
#define gem_to_cirrus_bo(gobj) container_of((gobj), struct cirrus_bo, gem)
|
||||
|
||||
static inline struct cirrus_bo *
|
||||
cirrus_bo(struct ttm_buffer_object *bo)
|
||||
{
|
||||
return container_of(bo, struct cirrus_bo, bo);
|
||||
}
|
||||
|
||||
|
||||
#define to_cirrus_obj(x) container_of(x, struct cirrus_gem_object, base)
|
||||
|
||||
/* cirrus_main.c */
|
||||
int cirrus_device_init(struct cirrus_device *cdev,
|
||||
struct drm_device *ddev,
|
||||
struct pci_dev *pdev,
|
||||
uint32_t flags);
|
||||
void cirrus_device_fini(struct cirrus_device *cdev);
|
||||
void cirrus_gem_free_object(struct drm_gem_object *obj);
|
||||
int cirrus_dumb_mmap_offset(struct drm_file *file,
|
||||
struct drm_device *dev,
|
||||
uint32_t handle,
|
||||
uint64_t *offset);
|
||||
int cirrus_gem_create(struct drm_device *dev,
|
||||
u32 size, bool iskernel,
|
||||
struct drm_gem_object **obj);
|
||||
int cirrus_dumb_create(struct drm_file *file,
|
||||
struct drm_device *dev,
|
||||
struct drm_mode_create_dumb *args);
|
||||
|
||||
int cirrus_framebuffer_init(struct drm_device *dev,
|
||||
struct drm_framebuffer *gfb,
|
||||
const struct drm_mode_fb_cmd2 *mode_cmd,
|
||||
struct drm_gem_object *obj);
|
||||
|
||||
bool cirrus_check_framebuffer(struct cirrus_device *cdev, int width, int height,
|
||||
int bpp, int pitch);
|
||||
|
||||
/* cirrus_display.c */
|
||||
int cirrus_modeset_init(struct cirrus_device *cdev);
|
||||
void cirrus_modeset_fini(struct cirrus_device *cdev);
|
||||
|
||||
/* cirrus_fbdev.c */
|
||||
int cirrus_fbdev_init(struct cirrus_device *cdev);
|
||||
void cirrus_fbdev_fini(struct cirrus_device *cdev);
|
||||
|
||||
|
||||
|
||||
/* cirrus_irq.c */
|
||||
void cirrus_driver_irq_preinstall(struct drm_device *dev);
|
||||
int cirrus_driver_irq_postinstall(struct drm_device *dev);
|
||||
void cirrus_driver_irq_uninstall(struct drm_device *dev);
|
||||
irqreturn_t cirrus_driver_irq_handler(int irq, void *arg);
|
||||
|
||||
/* cirrus_kms.c */
|
||||
int cirrus_driver_load(struct drm_device *dev, unsigned long flags);
|
||||
void cirrus_driver_unload(struct drm_device *dev);
|
||||
extern struct drm_ioctl_desc cirrus_ioctls[];
|
||||
extern int cirrus_max_ioctl;
|
||||
|
||||
int cirrus_mm_init(struct cirrus_device *cirrus);
|
||||
void cirrus_mm_fini(struct cirrus_device *cirrus);
|
||||
void cirrus_ttm_placement(struct cirrus_bo *bo, int domain);
|
||||
int cirrus_bo_create(struct drm_device *dev, int size, int align,
|
||||
uint32_t flags, struct cirrus_bo **pcirrusbo);
|
||||
int cirrus_mmap(struct file *filp, struct vm_area_struct *vma);
|
||||
|
||||
static inline int cirrus_bo_reserve(struct cirrus_bo *bo, bool no_wait)
|
||||
{
|
||||
int ret;
|
||||
|
||||
ret = ttm_bo_reserve(&bo->bo, true, no_wait, NULL);
|
||||
if (ret) {
|
||||
if (ret != -ERESTARTSYS && ret != -EBUSY)
|
||||
DRM_ERROR("reserve failed %p\n", bo);
|
||||
return ret;
|
||||
}
|
||||
return 0;
|
||||
}
|
||||
|
||||
static inline void cirrus_bo_unreserve(struct cirrus_bo *bo)
|
||||
{
|
||||
ttm_bo_unreserve(&bo->bo);
|
||||
}
|
||||
|
||||
int cirrus_bo_push_sysram(struct cirrus_bo *bo);
|
||||
int cirrus_bo_pin(struct cirrus_bo *bo, u32 pl_flag, u64 *gpu_addr);
|
||||
|
||||
extern int cirrus_bpp;
|
||||
|
||||
#endif /* __CIRRUS_DRV_H__ */
|
|
@@ -1,337 +0,0 @@
|
|||
/*
|
||||
* Copyright 2012 Red Hat Inc.
|
||||
*
|
||||
* Permission is hereby granted, free of charge, to any person obtaining a
|
||||
* copy of this software and associated documentation files (the
|
||||
* "Software"), to deal in the Software without restriction, including
|
||||
* without limitation the rights to use, copy, modify, merge, publish,
|
||||
* distribute, sub license, and/or sell copies of the Software, and to
|
||||
* permit persons to whom the Software is furnished to do so, subject to
|
||||
* the following conditions:
|
||||
*
|
||||
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||
* FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL
|
||||
* THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM,
|
||||
* DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
|
||||
* OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE
|
||||
* USE OR OTHER DEALINGS IN THE SOFTWARE.
|
||||
*
|
||||
* The above copyright notice and this permission notice (including the
|
||||
* next paragraph) shall be included in all copies or substantial portions
|
||||
* of the Software.
|
||||
*
|
||||
*/
|
||||
/*
|
||||
* Authors: Dave Airlie <airlied@redhat.com>
|
||||
*/
|
||||
#include <drm/drmP.h>
|
||||
#include <drm/ttm/ttm_page_alloc.h>
|
||||
|
||||
#include "cirrus_drv.h"
|
||||
|
||||
static inline struct cirrus_device *
|
||||
cirrus_bdev(struct ttm_bo_device *bd)
|
||||
{
|
||||
return container_of(bd, struct cirrus_device, ttm.bdev);
|
||||
}
|
||||
|
||||
static void cirrus_bo_ttm_destroy(struct ttm_buffer_object *tbo)
|
||||
{
|
||||
struct cirrus_bo *bo;
|
||||
|
||||
bo = container_of(tbo, struct cirrus_bo, bo);
|
||||
|
||||
drm_gem_object_release(&bo->gem);
|
||||
kfree(bo);
|
||||
}
|
||||
|
||||
static bool cirrus_ttm_bo_is_cirrus_bo(struct ttm_buffer_object *bo)
|
||||
{
|
||||
if (bo->destroy == &cirrus_bo_ttm_destroy)
|
||||
return true;
|
||||
return false;
|
||||
}
|
||||
|
||||
static int
|
||||
cirrus_bo_init_mem_type(struct ttm_bo_device *bdev, uint32_t type,
|
||||
struct ttm_mem_type_manager *man)
|
||||
{
|
||||
switch (type) {
|
||||
case TTM_PL_SYSTEM:
|
||||
man->flags = TTM_MEMTYPE_FLAG_MAPPABLE;
|
||||
man->available_caching = TTM_PL_MASK_CACHING;
|
||||
man->default_caching = TTM_PL_FLAG_CACHED;
|
||||
break;
|
||||
case TTM_PL_VRAM:
|
||||
man->func = &ttm_bo_manager_func;
|
||||
man->flags = TTM_MEMTYPE_FLAG_FIXED |
|
||||
TTM_MEMTYPE_FLAG_MAPPABLE;
|
||||
man->available_caching = TTM_PL_FLAG_UNCACHED |
|
||||
TTM_PL_FLAG_WC;
|
||||
man->default_caching = TTM_PL_FLAG_WC;
|
||||
break;
|
||||
default:
|
||||
DRM_ERROR("Unsupported memory type %u\n", (unsigned)type);
|
||||
return -EINVAL;
|
||||
}
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void
|
||||
cirrus_bo_evict_flags(struct ttm_buffer_object *bo, struct ttm_placement *pl)
|
||||
{
|
||||
struct cirrus_bo *cirrusbo = cirrus_bo(bo);
|
||||
|
||||
if (!cirrus_ttm_bo_is_cirrus_bo(bo))
|
||||
return;
|
||||
|
||||
cirrus_ttm_placement(cirrusbo, TTM_PL_FLAG_SYSTEM);
|
||||
*pl = cirrusbo->placement;
|
||||
}
|
||||
|
||||
static int cirrus_bo_verify_access(struct ttm_buffer_object *bo, struct file *filp)
|
||||
{
|
||||
struct cirrus_bo *cirrusbo = cirrus_bo(bo);
|
||||
|
||||
return drm_vma_node_verify_access(&cirrusbo->gem.vma_node,
|
||||
filp->private_data);
|
||||
}
|
||||
|
||||
static int cirrus_ttm_io_mem_reserve(struct ttm_bo_device *bdev,
|
||||
struct ttm_mem_reg *mem)
|
||||
{
|
||||
struct ttm_mem_type_manager *man = &bdev->man[mem->mem_type];
|
||||
struct cirrus_device *cirrus = cirrus_bdev(bdev);
|
||||
|
||||
mem->bus.addr = NULL;
|
||||
mem->bus.offset = 0;
|
||||
mem->bus.size = mem->num_pages << PAGE_SHIFT;
|
||||
mem->bus.base = 0;
|
||||
mem->bus.is_iomem = false;
|
||||
if (!(man->flags & TTM_MEMTYPE_FLAG_MAPPABLE))
|
||||
return -EINVAL;
|
||||
switch (mem->mem_type) {
|
||||
case TTM_PL_SYSTEM:
|
||||
/* system memory */
|
||||
return 0;
|
||||
case TTM_PL_VRAM:
|
||||
mem->bus.offset = mem->start << PAGE_SHIFT;
|
||||
mem->bus.base = pci_resource_start(cirrus->dev->pdev, 0);
|
||||
mem->bus.is_iomem = true;
|
||||
break;
|
||||
default:
|
||||
return -EINVAL;
|
||||
break;
|
||||
}
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void cirrus_ttm_io_mem_free(struct ttm_bo_device *bdev, struct ttm_mem_reg *mem)
|
||||
{
|
||||
}
|
||||
|
||||
static void cirrus_ttm_backend_destroy(struct ttm_tt *tt)
|
||||
{
|
||||
ttm_tt_fini(tt);
|
||||
kfree(tt);
|
||||
}
|
||||
|
||||
static struct ttm_backend_func cirrus_tt_backend_func = {
|
||||
.destroy = &cirrus_ttm_backend_destroy,
|
||||
};
|
||||
|
||||
|
||||
static struct ttm_tt *cirrus_ttm_tt_create(struct ttm_buffer_object *bo,
|
||||
uint32_t page_flags)
|
||||
{
|
||||
struct ttm_tt *tt;
|
||||
|
||||
tt = kzalloc(sizeof(struct ttm_tt), GFP_KERNEL);
|
||||
if (tt == NULL)
|
||||
return NULL;
|
||||
tt->func = &cirrus_tt_backend_func;
|
||||
if (ttm_tt_init(tt, bo, page_flags)) {
|
||||
kfree(tt);
|
||||
return NULL;
|
||||
}
|
||||
return tt;
|
||||
}
|
||||
|
||||
struct ttm_bo_driver cirrus_bo_driver = {
|
||||
.ttm_tt_create = cirrus_ttm_tt_create,
|
||||
.init_mem_type = cirrus_bo_init_mem_type,
|
||||
.eviction_valuable = ttm_bo_eviction_valuable,
|
||||
.evict_flags = cirrus_bo_evict_flags,
|
||||
.move = NULL,
|
||||
.verify_access = cirrus_bo_verify_access,
|
||||
.io_mem_reserve = &cirrus_ttm_io_mem_reserve,
|
||||
.io_mem_free = &cirrus_ttm_io_mem_free,
|
||||
};
|
||||
|
||||
int cirrus_mm_init(struct cirrus_device *cirrus)
|
||||
{
|
||||
int ret;
|
||||
struct drm_device *dev = cirrus->dev;
|
||||
struct ttm_bo_device *bdev = &cirrus->ttm.bdev;
|
||||
|
||||
ret = ttm_bo_device_init(&cirrus->ttm.bdev,
|
||||
&cirrus_bo_driver,
|
||||
dev->anon_inode->i_mapping,
|
||||
true);
|
||||
if (ret) {
|
||||
DRM_ERROR("Error initialising bo driver; %d\n", ret);
|
||||
return ret;
|
||||
}
|
||||
|
||||
ret = ttm_bo_init_mm(bdev, TTM_PL_VRAM,
|
||||
cirrus->mc.vram_size >> PAGE_SHIFT);
|
||||
if (ret) {
|
||||
DRM_ERROR("Failed ttm VRAM init: %d\n", ret);
|
||||
return ret;
|
||||
}
|
||||
|
||||
arch_io_reserve_memtype_wc(pci_resource_start(dev->pdev, 0),
|
||||
pci_resource_len(dev->pdev, 0));
|
||||
|
||||
cirrus->fb_mtrr = arch_phys_wc_add(pci_resource_start(dev->pdev, 0),
|
||||
pci_resource_len(dev->pdev, 0));
|
||||
|
||||
cirrus->mm_inited = true;
|
||||
return 0;
|
||||
}
|
||||
|
||||
void cirrus_mm_fini(struct cirrus_device *cirrus)
|
||||
{
|
||||
struct drm_device *dev = cirrus->dev;
|
||||
|
||||
if (!cirrus->mm_inited)
|
||||
return;
|
||||
|
||||
ttm_bo_device_release(&cirrus->ttm.bdev);
|
||||
|
||||
arch_phys_wc_del(cirrus->fb_mtrr);
|
||||
cirrus->fb_mtrr = 0;
|
||||
arch_io_free_memtype_wc(pci_resource_start(dev->pdev, 0),
|
||||
pci_resource_len(dev->pdev, 0));
|
||||
}
|
||||
|
||||
void cirrus_ttm_placement(struct cirrus_bo *bo, int domain)
|
||||
{
|
||||
u32 c = 0;
|
||||
unsigned i;
|
||||
bo->placement.placement = bo->placements;
|
||||
bo->placement.busy_placement = bo->placements;
|
||||
if (domain & TTM_PL_FLAG_VRAM)
|
||||
bo->placements[c++].flags = TTM_PL_FLAG_WC | TTM_PL_FLAG_UNCACHED | TTM_PL_FLAG_VRAM;
|
||||
if (domain & TTM_PL_FLAG_SYSTEM)
|
||||
bo->placements[c++].flags = TTM_PL_MASK_CACHING | TTM_PL_FLAG_SYSTEM;
|
||||
if (!c)
|
||||
bo->placements[c++].flags = TTM_PL_MASK_CACHING | TTM_PL_FLAG_SYSTEM;
|
||||
bo->placement.num_placement = c;
|
||||
bo->placement.num_busy_placement = c;
|
||||
for (i = 0; i < c; ++i) {
|
||||
bo->placements[i].fpfn = 0;
|
||||
bo->placements[i].lpfn = 0;
|
||||
}
|
||||
}
|
||||
|
||||
int cirrus_bo_create(struct drm_device *dev, int size, int align,
|
||||
uint32_t flags, struct cirrus_bo **pcirrusbo)
|
||||
{
|
||||
struct cirrus_device *cirrus = dev->dev_private;
|
||||
struct cirrus_bo *cirrusbo;
|
||||
size_t acc_size;
|
||||
int ret;
|
||||
|
||||
cirrusbo = kzalloc(sizeof(struct cirrus_bo), GFP_KERNEL);
|
||||
if (!cirrusbo)
|
||||
return -ENOMEM;
|
||||
|
||||
ret = drm_gem_object_init(dev, &cirrusbo->gem, size);
|
||||
if (ret) {
|
||||
kfree(cirrusbo);
|
||||
return ret;
|
||||
}
|
||||
|
||||
cirrusbo->bo.bdev = &cirrus->ttm.bdev;
|
||||
|
||||
cirrus_ttm_placement(cirrusbo, TTM_PL_FLAG_VRAM | TTM_PL_FLAG_SYSTEM);
|
||||
|
||||
acc_size = ttm_bo_dma_acc_size(&cirrus->ttm.bdev, size,
|
||||
sizeof(struct cirrus_bo));
|
||||
|
||||
ret = ttm_bo_init(&cirrus->ttm.bdev, &cirrusbo->bo, size,
|
||||
ttm_bo_type_device, &cirrusbo->placement,
|
||||
align >> PAGE_SHIFT, false, acc_size,
|
||||
NULL, NULL, cirrus_bo_ttm_destroy);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
*pcirrusbo = cirrusbo;
|
||||
return 0;
|
||||
}
|
||||
|
||||
static inline u64 cirrus_bo_gpu_offset(struct cirrus_bo *bo)
|
||||
{
|
||||
return bo->bo.offset;
|
||||
}
|
||||
|
||||
int cirrus_bo_pin(struct cirrus_bo *bo, u32 pl_flag, u64 *gpu_addr)
|
||||
{
|
||||
struct ttm_operation_ctx ctx = { false, false };
|
||||
int i, ret;
|
||||
|
||||
if (bo->pin_count) {
|
||||
bo->pin_count++;
|
||||
if (gpu_addr)
|
||||
*gpu_addr = cirrus_bo_gpu_offset(bo);
|
||||
}
|
||||
|
||||
cirrus_ttm_placement(bo, pl_flag);
|
||||
for (i = 0; i < bo->placement.num_placement; i++)
|
||||
bo->placements[i].flags |= TTM_PL_FLAG_NO_EVICT;
|
||||
ret = ttm_bo_validate(&bo->bo, &bo->placement, &ctx);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
bo->pin_count = 1;
|
||||
if (gpu_addr)
|
||||
*gpu_addr = cirrus_bo_gpu_offset(bo);
|
||||
return 0;
|
||||
}
|
||||
|
||||
int cirrus_bo_push_sysram(struct cirrus_bo *bo)
|
||||
{
|
||||
struct ttm_operation_ctx ctx = { false, false };
|
||||
int i, ret;
|
||||
if (!bo->pin_count) {
|
||||
DRM_ERROR("unpin bad %p\n", bo);
|
||||
return 0;
|
||||
}
|
||||
bo->pin_count--;
|
||||
if (bo->pin_count)
|
||||
return 0;
|
||||
|
||||
if (bo->kmap.virtual)
|
||||
ttm_bo_kunmap(&bo->kmap);
|
||||
|
||||
cirrus_ttm_placement(bo, TTM_PL_FLAG_SYSTEM);
|
||||
for (i = 0; i < bo->placement.num_placement ; i++)
|
||||
bo->placements[i].flags |= TTM_PL_FLAG_NO_EVICT;
|
||||
|
||||
ret = ttm_bo_validate(&bo->bo, &bo->placement, &ctx);
|
||||
if (ret) {
|
||||
DRM_ERROR("pushing to VRAM failed\n");
|
||||
return ret;
|
||||
}
|
||||
return 0;
|
||||
}
|
||||
|
||||
int cirrus_mmap(struct file *filp, struct vm_area_struct *vma)
|
||||
{
|
||||
struct drm_file *file_priv = filp->private_data;
|
||||
struct cirrus_device *cirrus = file_priv->minor->dev->dev_private;
|
||||
|
||||
return ttm_bo_mmap(filp, vma, &cirrus->ttm.bdev);
|
||||
}
|
|
@@ -1423,7 +1423,7 @@ drm_atomic_helper_wait_for_vblanks(struct drm_device *dev,
		ret = wait_event_timeout(dev->vblank[i].queue,
				old_state->crtcs[i].last_vblank_count !=
					drm_crtc_vblank_count(crtc),
				msecs_to_jiffies(50));
				msecs_to_jiffies(100));

		WARN(!ret, "[CRTC:%d:%s] vblank wait timed out\n",
		     crtc->base.id, crtc->name);
@@ -56,6 +56,29 @@
 * for these functions.
 */

/**
 * __drm_atomic_helper_crtc_reset - reset state on CRTC
 * @crtc: drm CRTC
 * @crtc_state: CRTC state to assign
 *
 * Initializes the newly allocated @crtc_state and assigns it to
 * the &drm_crtc->state pointer of @crtc, usually required when
 * initializing the drivers or when called from the &drm_crtc_funcs.reset
 * hook.
 *
 * This is useful for drivers that subclass the CRTC state.
 */
void
__drm_atomic_helper_crtc_reset(struct drm_crtc *crtc,
			       struct drm_crtc_state *crtc_state)
{
	if (crtc_state)
		crtc_state->crtc = crtc;

	crtc->state = crtc_state;
}
EXPORT_SYMBOL(__drm_atomic_helper_crtc_reset);

/**
 * drm_atomic_helper_crtc_reset - default &drm_crtc_funcs.reset hook for CRTCs
 * @crtc: drm CRTC

@@ -65,14 +88,13 @@
 */
void drm_atomic_helper_crtc_reset(struct drm_crtc *crtc)
{
	if (crtc->state)
		__drm_atomic_helper_crtc_destroy_state(crtc->state);

	kfree(crtc->state);
	crtc->state = kzalloc(sizeof(*crtc->state), GFP_KERNEL);
	struct drm_crtc_state *crtc_state =
		kzalloc(sizeof(*crtc->state), GFP_KERNEL);

	if (crtc->state)
		crtc->state->crtc = crtc;
	crtc->funcs->atomic_destroy_state(crtc, crtc->state);

	__drm_atomic_helper_crtc_reset(crtc, crtc_state);
}
EXPORT_SYMBOL(drm_atomic_helper_crtc_reset);

@@ -314,7 +336,7 @@ EXPORT_SYMBOL(drm_atomic_helper_plane_destroy_state);
 * @conn_state: connector state to assign
 *
 * Initializes the newly allocated @conn_state and assigns it to
 * the &drm_conector->state pointer of @connector, usually required when
 * the &drm_connector->state pointer of @connector, usually required when
 * initializing the drivers or when called from the &drm_connector_funcs.reset
 * hook.
 *

@@ -369,6 +391,9 @@ __drm_atomic_helper_connector_duplicate_state(struct drm_connector *connector,
	drm_connector_get(connector);
	state->commit = NULL;

	if (state->hdr_output_metadata)
		drm_property_blob_get(state->hdr_output_metadata);

	/* Don't copy over a writeback job, they are used only once */
	state->writeback_job = NULL;
}

@@ -416,6 +441,8 @@ __drm_atomic_helper_connector_destroy_state(struct drm_connector_state *state)

	if (state->writeback_job)
		drm_writeback_cleanup_job(state->writeback_job);

	drm_property_blob_put(state->hdr_output_metadata);
}
EXPORT_SYMBOL(__drm_atomic_helper_connector_destroy_state);
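For orientation only, a minimal sketch (not part of this series) of how a driver that subclasses its CRTC state might call the new __drm_atomic_helper_crtc_reset() from its &drm_crtc_funcs.reset hook; struct foo_crtc_state and foo_crtc_reset() are hypothetical names used purely for illustration:

/* Hypothetical driver code, only to illustrate the helper's contract. */
struct foo_crtc_state {
	struct drm_crtc_state base;
	u32 custom_field;
};

static void foo_crtc_reset(struct drm_crtc *crtc)
{
	struct foo_crtc_state *state = kzalloc(sizeof(*state), GFP_KERNEL);

	if (crtc->state)
		crtc->funcs->atomic_destroy_state(crtc, crtc->state);

	/* Assigns crtc->state and, when state is non-NULL, the back-pointer. */
	__drm_atomic_helper_crtc_reset(crtc, state ? &state->base : NULL);
}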
@@ -676,6 +676,8 @@ static int drm_atomic_connector_set_property(struct drm_connector *connector,
{
	struct drm_device *dev = connector->dev;
	struct drm_mode_config *config = &dev->mode_config;
	bool replaced = false;
	int ret;

	if (property == config->prop_crtc_id) {
		struct drm_crtc *crtc = drm_crtc_find(dev, file_priv, val);

@@ -726,6 +728,13 @@ static int drm_atomic_connector_set_property(struct drm_connector *connector,
	 */
		if (state->link_status != DRM_LINK_STATUS_GOOD)
			state->link_status = val;
	} else if (property == config->hdr_output_metadata_property) {
		ret = drm_atomic_replace_property_blob_from_id(dev,
				&state->hdr_output_metadata,
				val,
				sizeof(struct hdr_output_metadata), -1,
				&replaced);
		return ret;
	} else if (property == config->aspect_ratio_property) {
		state->picture_aspect_ratio = val;
	} else if (property == config->content_type_property) {

@@ -814,6 +823,9 @@ drm_atomic_connector_get_property(struct drm_connector *connector,
		*val = state->colorspace;
	} else if (property == connector->scaling_mode_property) {
		*val = state->scaling_mode;
	} else if (property == config->hdr_output_metadata_property) {
		*val = state->hdr_output_metadata ?
			state->hdr_output_metadata->base.id : 0;
	} else if (property == connector->content_protection_property) {
		*val = state->content_protection;
	} else if (property == config->writeback_fb_id_property) {
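For context, a hedged userspace-side sketch of attaching a blob to the new HDR_OUTPUT_METADATA connector property through libdrm's atomic API; fd, req, connector_id and prop_id are assumed to have been looked up elsewhere, and the metadata values are placeholders rather than a recommended configuration:

#include <errno.h>
#include <stdint.h>
#include <string.h>
#include <xf86drm.h>
#include <xf86drmMode.h>	/* pulls in the kernel's drm_mode.h UAPI */

static int add_hdr_metadata(int fd, drmModeAtomicReq *req,
			    uint32_t connector_id, uint32_t prop_id)
{
	struct hdr_output_metadata meta;
	uint32_t blob_id;
	int ret;

	memset(&meta, 0, sizeof(meta));
	meta.metadata_type = 0;			/* mirrors HDMI_STATIC_METADATA_TYPE1 */
	meta.hdmi_metadata_type1.eotf = 2;	/* mirrors HDMI_EOTF_SMPTE_ST2084 (PQ) */

	ret = drmModeCreatePropertyBlob(fd, &meta, sizeof(meta), &blob_id);
	if (ret)
		return ret;

	if (drmModeAtomicAddProperty(req, connector_id, prop_id, blob_id) < 0)
		return -EINVAL;
	return 0;
}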
@@ -351,3 +351,23 @@ void drm_master_put(struct drm_master **master)
	*master = NULL;
}
EXPORT_SYMBOL(drm_master_put);

/* Used by drm_client and drm_fb_helper */
bool drm_master_internal_acquire(struct drm_device *dev)
{
	mutex_lock(&dev->master_mutex);
	if (dev->master) {
		mutex_unlock(&dev->master_mutex);
		return false;
	}

	return true;
}
EXPORT_SYMBOL(drm_master_internal_acquire);

/* Used by drm_client and drm_fb_helper */
void drm_master_internal_release(struct drm_device *dev)
{
	mutex_unlock(&dev->master_mutex);
}
EXPORT_SYMBOL(drm_master_internal_release);
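As a rough illustration of the intended use (in-kernel clients only restoring the display while userspace is not master), a minimal sketch assuming a hypothetical restore_fbdev_mode() helper in place of the caller's actual restore path:

static int example_fbdev_restore(struct drm_fb_helper *fb_helper)
{
	struct drm_device *dev = fb_helper->dev;
	int ret;

	/* Returns false (without holding the lock) while userspace is master. */
	if (!drm_master_internal_acquire(dev))
		return -EBUSY;

	ret = restore_fbdev_mode(fb_helper);	/* hypothetical restore path */

	drm_master_internal_release(dev);
	return ret;
}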
@@ -243,6 +243,7 @@ static void drm_client_buffer_delete(struct drm_client_buffer *buffer)
static struct drm_client_buffer *
drm_client_buffer_create(struct drm_client_dev *client, u32 width, u32 height, u32 format)
{
	const struct drm_format_info *info = drm_format_info(format);
	struct drm_mode_create_dumb dumb_args = { };
	struct drm_device *dev = client->dev;
	struct drm_client_buffer *buffer;

@@ -258,7 +259,7 @@ drm_client_buffer_create(struct drm_client_dev *client, u32 width, u32 height, u

	dumb_args.width = width;
	dumb_args.height = height;
	dumb_args.bpp = drm_format_plane_cpp(format, 0) * 8;
	dumb_args.bpp = info->cpp[0] * 8;
	ret = drm_mode_create_dumb(dev, &dumb_args, client->file);
	if (ret)
		goto err_delete;
@@ -1058,6 +1058,12 @@ int drm_connector_create_standard_properties(struct drm_device *dev)
		return -ENOMEM;
	dev->mode_config.non_desktop_property = prop;

	prop = drm_property_create(dev, DRM_MODE_PROP_BLOB,
				   "HDR_OUTPUT_METADATA", 0);
	if (!prop)
		return -ENOMEM;
	dev->mode_config.hdr_output_metadata_property = prop;

	return 0;
}
@@ -27,15 +27,17 @@

#include <linux/device.h>
#include <linux/fs.h>
#include <linux/slab.h>
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/sched/signal.h>
#include <linux/slab.h>
#include <linux/uaccess.h>
#include <linux/uio.h>
#include <drm/drm_dp_helper.h>

#include <drm/drm_crtc.h>
#include <drm/drmP.h>
#include <drm/drm_dp_helper.h>
#include <drm/drm_print.h>

#include "drm_crtc_helper_internal.h"

@@ -20,13 +20,15 @@
 * OTHER DEALINGS IN THE SOFTWARE.
 */

#include <linux/delay.h>
#include <linux/errno.h>
#include <linux/export.h>
#include <linux/i2c.h>
#include <linux/slab.h>
#include <linux/string.h>

#include <drm/drm_dp_dual_mode_helper.h>
#include <drm/drmP.h>
#include <drm/drm_print.h>

/**
 * DOC: dp dual mode helpers

@@ -20,16 +20,18 @@
 * OF THIS SOFTWARE.
 */

#include <linux/delay.h>
#include <linux/errno.h>
#include <linux/i2c.h>
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/delay.h>
#include <linux/init.h>
#include <linux/errno.h>
#include <linux/sched.h>
#include <linux/i2c.h>
#include <linux/seq_file.h>

#include <drm/drm_dp_helper.h>
#include <drm/drmP.h>
#include <drm/drm_print.h>
#include <drm/drm_vblank.h>

#include "drm_crtc_helper_internal.h"

@@ -20,19 +20,20 @@
 * OF THIS SOFTWARE.
 */

#include <linux/kernel.h>
#include <linux/delay.h>
#include <linux/init.h>
#include <linux/errno.h>
#include <linux/i2c.h>
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/sched.h>
#include <linux/seq_file.h>
#include <linux/i2c.h>
#include <drm/drm_dp_mst_helper.h>
#include <drm/drmP.h>

#include <drm/drm_fixed.h>
#include <drm/drm_atomic.h>
#include <drm/drm_atomic_helper.h>
#include <drm/drm_dp_mst_helper.h>
#include <drm/drm_drv.h>
#include <drm/drm_fixed.h>
#include <drm/drm_print.h>
#include <drm/drm_probe_helper.h>

/**
@ -27,16 +27,19 @@
|
|||
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
|
||||
* DEALINGS IN THE SOFTWARE.
|
||||
*/
|
||||
#include <linux/kernel.h>
|
||||
#include <linux/slab.h>
|
||||
|
||||
#include <linux/hdmi.h>
|
||||
#include <linux/i2c.h>
|
||||
#include <linux/kernel.h>
|
||||
#include <linux/module.h>
|
||||
#include <linux/slab.h>
|
||||
#include <linux/vga_switcheroo.h>
|
||||
#include <drm/drmP.h>
|
||||
|
||||
#include <drm/drm_displayid.h>
|
||||
#include <drm/drm_drv.h>
|
||||
#include <drm/drm_edid.h>
|
||||
#include <drm/drm_encoder.h>
|
||||
#include <drm/drm_displayid.h>
|
||||
#include <drm/drm_print.h>
|
||||
#include <drm/drm_scdc_helper.h>
|
||||
|
||||
#include "drm_crtc_internal.h"
|
||||
|
@@ -2849,6 +2852,7 @@ add_detailed_modes(struct drm_connector *connector, struct edid *edid,
 #define VIDEO_BLOCK 0x02
 #define VENDOR_BLOCK 0x03
 #define SPEAKER_BLOCK 0x04
+#define HDR_STATIC_METADATA_BLOCK 0x6
 #define USE_EXTENDED_TAG 0x07
 #define EXT_VIDEO_CAPABILITY_BLOCK 0x00
 #define EXT_VIDEO_DATA_BLOCK_420 0x0E
@@ -3831,6 +3835,55 @@ static void fixup_detailed_cea_mode_clock(struct drm_display_mode *mode)
 	mode->clock = clock;
 }

+static bool cea_db_is_hdmi_hdr_metadata_block(const u8 *db)
+{
+	if (cea_db_tag(db) != USE_EXTENDED_TAG)
+		return false;
+
+	if (db[1] != HDR_STATIC_METADATA_BLOCK)
+		return false;
+
+	if (cea_db_payload_len(db) < 3)
+		return false;
+
+	return true;
+}
+
+static uint8_t eotf_supported(const u8 *edid_ext)
+{
+	return edid_ext[2] &
+		(BIT(HDMI_EOTF_TRADITIONAL_GAMMA_SDR) |
+		 BIT(HDMI_EOTF_TRADITIONAL_GAMMA_HDR) |
+		 BIT(HDMI_EOTF_SMPTE_ST2084) |
+		 BIT(HDMI_EOTF_BT_2100_HLG));
+}
+
+static uint8_t hdr_metadata_type(const u8 *edid_ext)
+{
+	return edid_ext[3] &
+		BIT(HDMI_STATIC_METADATA_TYPE1);
+}
+
+static void
+drm_parse_hdr_metadata_block(struct drm_connector *connector, const u8 *db)
+{
+	u16 len;
+
+	len = cea_db_payload_len(db);
+
+	connector->hdr_sink_metadata.hdmi_type1.eotf =
+		eotf_supported(db);
+	connector->hdr_sink_metadata.hdmi_type1.metadata_type =
+		hdr_metadata_type(db);
+
+	if (len >= 4)
+		connector->hdr_sink_metadata.hdmi_type1.max_cll = db[4];
+	if (len >= 5)
+		connector->hdr_sink_metadata.hdmi_type1.max_fall = db[5];
+	if (len >= 6)
+		connector->hdr_sink_metadata.hdmi_type1.min_cll = db[6];
+}
+
 static void
 drm_parse_hdmi_vsdb_audio(struct drm_connector *connector, const u8 *db)
 {
@@ -4458,6 +4511,8 @@ static void drm_parse_cea_ext(struct drm_connector *connector,
 			drm_parse_y420cmdb_bitmap(connector, db);
 		if (cea_db_is_vcdb(db))
 			drm_parse_vcdb(connector, db);
+		if (cea_db_is_hdmi_hdr_metadata_block(db))
+			drm_parse_hdr_metadata_block(connector, db);
 	}
 }

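Note: the HDR static metadata data block consumed by the parsers above is small enough to decode by hand. The stand-alone sketch below walks the same byte layout (tag and payload length in db[0], extended tag in db[1], EOTF bitmap in db[2], descriptor types in db[3], optional max CLL / max FALL / min CLL codes in db[4..6]); it uses ad-hoc defines for illustration, not the kernel helpers.

#include <stdint.h>
#include <stdio.h>

#define USE_EXTENDED_TAG		0x07
#define HDR_STATIC_METADATA_BLOCK	0x06
#define EOTF_SMPTE_ST2084		2	/* bit index, as in linux/hdmi.h */

/* db points at a CEA data block: tag/length byte first, payload after. */
static void parse_hdr_block(const uint8_t *db)
{
	uint8_t tag = db[0] >> 5;
	uint8_t len = db[0] & 0x1f;

	if (tag != USE_EXTENDED_TAG || db[1] != HDR_STATIC_METADATA_BLOCK || len < 3)
		return;

	printf("EOTF bitmap: 0x%02x (ST 2084 %ssupported)\n",
	       db[2], (db[2] & (1 << EOTF_SMPTE_ST2084)) ? "" : "not ");
	printf("descriptor types: 0x%02x\n", db[3]);
	if (len >= 4)
		printf("max CLL code: %u\n", db[4]);
	if (len >= 5)
		printf("max FALL code: %u\n", db[5]);
	if (len >= 6)
		printf("min CLL code: %u\n", db[6]);
}

int main(void)
{
	/* Example block: payload length 6, SDR + ST 2084, type 1, luminance codes. */
	const uint8_t db[] = { (0x07 << 5) | 6, 0x06, 0x05, 0x01, 0x62, 0x4c, 0x20 };

	parse_hdr_block(db);
	return 0;
}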
@@ -4850,6 +4905,78 @@ static bool is_hdmi2_sink(struct drm_connector *connector)
 		connector->display_info.color_formats & DRM_COLOR_FORMAT_YCRCB420;
 }

+static inline bool is_eotf_supported(u8 output_eotf, u8 sink_eotf)
+{
+	return sink_eotf & BIT(output_eotf);
+}
+
+/**
+ * drm_hdmi_infoframe_set_hdr_metadata() - fill an HDMI DRM infoframe with
+ * HDR metadata from userspace
+ * @frame: HDMI DRM infoframe
+ * @conn_state: Connector state containing HDR metadata
+ *
+ * Return: 0 on success or a negative error code on failure.
+ */
+int
+drm_hdmi_infoframe_set_hdr_metadata(struct hdmi_drm_infoframe *frame,
+				    const struct drm_connector_state *conn_state)
+{
+	struct drm_connector *connector;
+	struct hdr_output_metadata *hdr_metadata;
+	int err;
+
+	if (!frame || !conn_state)
+		return -EINVAL;
+
+	connector = conn_state->connector;
+
+	if (!conn_state->hdr_output_metadata)
+		return -EINVAL;
+
+	hdr_metadata = conn_state->hdr_output_metadata->data;
+
+	if (!hdr_metadata || !connector)
+		return -EINVAL;
+
+	/* Sink EOTF is Bit map while infoframe is absolute values */
+	if (!is_eotf_supported(hdr_metadata->hdmi_metadata_type1.eotf,
+			       connector->hdr_sink_metadata.hdmi_type1.eotf)) {
+		DRM_DEBUG_KMS("EOTF Not Supported\n");
+		return -EINVAL;
+	}
+
+	err = hdmi_drm_infoframe_init(frame);
+	if (err < 0)
+		return err;
+
+	frame->eotf = hdr_metadata->hdmi_metadata_type1.eotf;
+	frame->metadata_type = hdr_metadata->hdmi_metadata_type1.metadata_type;
+
+	BUILD_BUG_ON(sizeof(frame->display_primaries) !=
+		     sizeof(hdr_metadata->hdmi_metadata_type1.display_primaries));
+	BUILD_BUG_ON(sizeof(frame->white_point) !=
+		     sizeof(hdr_metadata->hdmi_metadata_type1.white_point));
+
+	memcpy(&frame->display_primaries,
+	       &hdr_metadata->hdmi_metadata_type1.display_primaries,
+	       sizeof(frame->display_primaries));
+
+	memcpy(&frame->white_point,
+	       &hdr_metadata->hdmi_metadata_type1.white_point,
+	       sizeof(frame->white_point));
+
+	frame->max_display_mastering_luminance =
+		hdr_metadata->hdmi_metadata_type1.max_display_mastering_luminance;
+	frame->min_display_mastering_luminance =
+		hdr_metadata->hdmi_metadata_type1.min_display_mastering_luminance;
+	frame->max_fall = hdr_metadata->hdmi_metadata_type1.max_fall;
+	frame->max_cll = hdr_metadata->hdmi_metadata_type1.max_cll;
+
+	return 0;
+}
+EXPORT_SYMBOL(drm_hdmi_infoframe_set_hdr_metadata);
+
 /**
  * drm_hdmi_avi_infoframe_from_display_mode() - fill an HDMI AVI infoframe with
  * data from a DRM display mode
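Note: after drm_hdmi_infoframe_set_hdr_metadata() above has filled the frame, a driver still has to pack and transmit it. The fragment below sketches only that call sequence; it assumes kernel context, that the declaration is exposed via drm_edid.h like the other infoframe helpers, and that a hdmi_drm_infoframe_pack() helper exists in video/hdmi alongside the unpack support mentioned in this pull. It is not buildable on its own.

/* Kernel-context fragment (assumptions flagged above); not stand-alone. */
#include <linux/hdmi.h>
#include <drm/drm_connector.h>
#include <drm/drm_edid.h>

static int example_send_hdr_infoframe(struct drm_connector_state *conn_state)
{
	struct hdmi_drm_infoframe frame;
	u8 buf[32];	/* large enough for header + 26-byte DRM infoframe payload */
	ssize_t len;
	int ret;

	ret = drm_hdmi_infoframe_set_hdr_metadata(&frame, conn_state);
	if (ret)
		return ret;

	/* Assumed helper, mirroring hdmi_avi_infoframe_pack() and friends. */
	len = hdmi_drm_infoframe_pack(&frame, buf, sizeof(buf));
	if (len < 0)
		return len;

	/* A real driver would now hand buf/len to its infoframe hardware. */
	return 0;
}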
@ -7,12 +7,15 @@
|
|||
|
||||
*/
|
||||
|
||||
#include <linux/module.h>
|
||||
#include <linux/firmware.h>
|
||||
#include <drm/drmP.h>
|
||||
#include <linux/module.h>
|
||||
#include <linux/platform_device.h>
|
||||
|
||||
#include <drm/drm_crtc.h>
|
||||
#include <drm/drm_crtc_helper.h>
|
||||
#include <drm/drm_drv.h>
|
||||
#include <drm/drm_edid.h>
|
||||
#include <drm/drm_print.h>
|
||||
|
||||
static char edid_firmware[PATH_MAX];
|
||||
module_param_string(edid_firmware, edid_firmware, sizeof(edid_firmware), 0644);
|
||||
|
|
|
@ -44,6 +44,7 @@
|
|||
|
||||
#include "drm_crtc_internal.h"
|
||||
#include "drm_crtc_helper_internal.h"
|
||||
#include "drm_internal.h"
|
||||
|
||||
static bool drm_fbdev_emulation = true;
|
||||
module_param_named(fbdev_emulation, drm_fbdev_emulation, bool, 0600);
|
||||
|
@ -387,6 +388,49 @@ int drm_fb_helper_debug_leave(struct fb_info *info)
|
|||
}
|
||||
EXPORT_SYMBOL(drm_fb_helper_debug_leave);
|
||||
|
||||
/* Check if the plane can hw rotate to match panel orientation */
|
||||
static bool drm_fb_helper_panel_rotation(struct drm_mode_set *modeset,
|
||||
unsigned int *rotation)
|
||||
{
|
||||
struct drm_connector *connector = modeset->connectors[0];
|
||||
struct drm_plane *plane = modeset->crtc->primary;
|
||||
u64 valid_mask = 0;
|
||||
unsigned int i;
|
||||
|
||||
if (!modeset->num_connectors)
|
||||
return false;
|
||||
|
||||
switch (connector->display_info.panel_orientation) {
|
||||
case DRM_MODE_PANEL_ORIENTATION_BOTTOM_UP:
|
||||
*rotation = DRM_MODE_ROTATE_180;
|
||||
break;
|
||||
case DRM_MODE_PANEL_ORIENTATION_LEFT_UP:
|
||||
*rotation = DRM_MODE_ROTATE_90;
|
||||
break;
|
||||
case DRM_MODE_PANEL_ORIENTATION_RIGHT_UP:
|
||||
*rotation = DRM_MODE_ROTATE_270;
|
||||
break;
|
||||
default:
|
||||
*rotation = DRM_MODE_ROTATE_0;
|
||||
}
|
||||
|
||||
/*
|
||||
* TODO: support 90 / 270 degree hardware rotation,
|
||||
* depending on the hardware this may require the framebuffer
|
||||
* to be in a specific tiling format.
|
||||
*/
|
||||
if (*rotation != DRM_MODE_ROTATE_180 || !plane->rotation_property)
|
||||
return false;
|
||||
|
||||
for (i = 0; i < plane->rotation_property->num_values; i++)
|
||||
valid_mask |= (1ULL << plane->rotation_property->values[i]);
|
||||
|
||||
if (!(*rotation & valid_mask))
|
||||
return false;
|
||||
|
||||
return true;
|
||||
}
|
||||
|
||||
static int restore_fbdev_mode_atomic(struct drm_fb_helper *fb_helper, bool active)
|
||||
{
|
||||
struct drm_device *dev = fb_helper->dev;
|
||||
|
@ -427,10 +471,13 @@ static int restore_fbdev_mode_atomic(struct drm_fb_helper *fb_helper, bool activ
|
|||
for (i = 0; i < fb_helper->crtc_count; i++) {
|
||||
struct drm_mode_set *mode_set = &fb_helper->crtc_info[i].mode_set;
|
||||
struct drm_plane *primary = mode_set->crtc->primary;
|
||||
unsigned int rotation;
|
||||
|
||||
/* Cannot fail as we've already gotten the plane state above */
|
||||
plane_state = drm_atomic_get_new_plane_state(state, primary);
|
||||
plane_state->rotation = fb_helper->crtc_info[i].rotation;
|
||||
if (drm_fb_helper_panel_rotation(mode_set, &rotation)) {
|
||||
/* Cannot fail as we've already gotten the plane state above */
|
||||
plane_state = drm_atomic_get_new_plane_state(state, primary);
|
||||
plane_state->rotation = rotation;
|
||||
}
|
||||
|
||||
ret = __drm_atomic_helper_set_config(mode_set, state);
|
||||
if (ret != 0)
|
||||
|
@ -509,7 +556,7 @@ static int restore_fbdev_mode_legacy(struct drm_fb_helper *fb_helper)
|
|||
return ret;
|
||||
}
|
||||
|
||||
static int restore_fbdev_mode(struct drm_fb_helper *fb_helper)
|
||||
static int restore_fbdev_mode_force(struct drm_fb_helper *fb_helper)
|
||||
{
|
||||
struct drm_device *dev = fb_helper->dev;
|
||||
|
||||
|
@ -519,6 +566,21 @@ static int restore_fbdev_mode(struct drm_fb_helper *fb_helper)
|
|||
return restore_fbdev_mode_legacy(fb_helper);
|
||||
}
|
||||
|
||||
static int restore_fbdev_mode(struct drm_fb_helper *fb_helper)
|
||||
{
|
||||
struct drm_device *dev = fb_helper->dev;
|
||||
int ret;
|
||||
|
||||
if (!drm_master_internal_acquire(dev))
|
||||
return -EBUSY;
|
||||
|
||||
ret = restore_fbdev_mode_force(fb_helper);
|
||||
|
||||
drm_master_internal_release(dev);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
/**
|
||||
* drm_fb_helper_restore_fbdev_mode_unlocked - restore fbdev configuration
|
||||
* @fb_helper: driver-allocated fbdev helper, can be NULL
|
||||
|
@ -542,7 +604,17 @@ int drm_fb_helper_restore_fbdev_mode_unlocked(struct drm_fb_helper *fb_helper)
|
|||
return 0;
|
||||
|
||||
mutex_lock(&fb_helper->lock);
|
||||
ret = restore_fbdev_mode(fb_helper);
|
||||
/*
|
||||
* TODO:
|
||||
* We should bail out here if there is a master by dropping _force.
|
||||
* Currently these igt tests fail if we do that:
|
||||
* - kms_fbcon_fbt@psr
|
||||
* - kms_fbcon_fbt@psr-suspend
|
||||
*
|
||||
* So first these tests need to be fixed so they drop master or don't
|
||||
* have an fd open.
|
||||
*/
|
||||
ret = restore_fbdev_mode_force(fb_helper);
|
||||
|
||||
do_delayed = fb_helper->delayed_hotplug;
|
||||
if (do_delayed)
|
||||
|
@ -556,34 +628,6 @@ int drm_fb_helper_restore_fbdev_mode_unlocked(struct drm_fb_helper *fb_helper)
|
|||
}
|
||||
EXPORT_SYMBOL(drm_fb_helper_restore_fbdev_mode_unlocked);
|
||||
|
||||
static bool drm_fb_helper_is_bound(struct drm_fb_helper *fb_helper)
|
||||
{
|
||||
struct drm_device *dev = fb_helper->dev;
|
||||
struct drm_crtc *crtc;
|
||||
int bound = 0, crtcs_bound = 0;
|
||||
|
||||
/*
|
||||
* Sometimes user space wants everything disabled, so don't steal the
|
||||
* display if there's a master.
|
||||
*/
|
||||
if (READ_ONCE(dev->master))
|
||||
return false;
|
||||
|
||||
drm_for_each_crtc(crtc, dev) {
|
||||
drm_modeset_lock(&crtc->mutex, NULL);
|
||||
if (crtc->primary->fb)
|
||||
crtcs_bound++;
|
||||
if (crtc->primary->fb == fb_helper->fb)
|
||||
bound++;
|
||||
drm_modeset_unlock(&crtc->mutex);
|
||||
}
|
||||
|
||||
if (bound < crtcs_bound)
|
||||
return false;
|
||||
|
||||
return true;
|
||||
}
|
||||
|
||||
#ifdef CONFIG_MAGIC_SYSRQ
|
||||
/*
|
||||
* restore fbcon display for all kms driver's using this helper, used for sysrq
|
||||
|
@ -604,7 +648,7 @@ static bool drm_fb_helper_force_kernel_mode(void)
|
|||
continue;
|
||||
|
||||
mutex_lock(&helper->lock);
|
||||
ret = restore_fbdev_mode(helper);
|
||||
ret = restore_fbdev_mode_force(helper);
|
||||
if (ret)
|
||||
error = true;
|
||||
mutex_unlock(&helper->lock);
|
||||
|
@ -663,20 +707,22 @@ static void dpms_legacy(struct drm_fb_helper *fb_helper, int dpms_mode)
|
|||
static void drm_fb_helper_dpms(struct fb_info *info, int dpms_mode)
|
||||
{
|
||||
struct drm_fb_helper *fb_helper = info->par;
|
||||
struct drm_device *dev = fb_helper->dev;
|
||||
|
||||
/*
|
||||
* For each CRTC in this fb, turn the connectors on/off.
|
||||
*/
|
||||
mutex_lock(&fb_helper->lock);
|
||||
if (!drm_fb_helper_is_bound(fb_helper)) {
|
||||
mutex_unlock(&fb_helper->lock);
|
||||
return;
|
||||
}
|
||||
if (!drm_master_internal_acquire(dev))
|
||||
goto unlock;
|
||||
|
||||
if (drm_drv_uses_atomic_modeset(fb_helper->dev))
|
||||
if (drm_drv_uses_atomic_modeset(dev))
|
||||
restore_fbdev_mode_atomic(fb_helper, dpms_mode == DRM_MODE_DPMS_ON);
|
||||
else
|
||||
dpms_legacy(fb_helper, dpms_mode);
|
||||
|
||||
drm_master_internal_release(dev);
|
||||
unlock:
|
||||
mutex_unlock(&fb_helper->lock);
|
||||
}
|
||||
|
||||
|
@@ -767,7 +813,7 @@ static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper *fb_helper,
 				struct drm_clip_rect *clip)
 {
 	struct drm_framebuffer *fb = fb_helper->fb;
-	unsigned int cpp = drm_format_plane_cpp(fb->format->format, 0);
+	unsigned int cpp = fb->format->cpp[0];
 	size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp;
 	void *src = fb_helper->fbdev->screen_buffer + offset;
 	void *dst = fb_helper->buffer->vaddr + offset;
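Note: the clip offset above is plain row-major arithmetic on pitches[0] and cpp[0]. A throwaway, stand-alone sketch of the same byte math and per-line copy, with made-up buffer sizes:

#include <stdint.h>
#include <string.h>

/* Copy a clip rectangle between two buffers that share pitch and format.
 * Mirrors the shape of the blit above; all numbers and names are made up. */
static void blit_clip(void *dst, const void *src, unsigned int pitch,
		      unsigned int cpp, unsigned int x1, unsigned int y1,
		      unsigned int x2, unsigned int y2)
{
	size_t offset = (size_t)y1 * pitch + (size_t)x1 * cpp;
	size_t len = (size_t)(x2 - x1) * cpp;
	unsigned int y;

	for (y = y1; y < y2; y++) {
		memcpy((char *)dst + offset, (const char *)src + offset, len);
		offset += pitch;
	}
}

int main(void)
{
	static uint8_t src[600 * 4 * 400], dst[600 * 4 * 400];

	/* 600x400 XRGB8888-style buffer: pitch = 600 * 4, cpp = 4. */
	blit_clip(dst, src, 600 * 4, 4, 10, 20, 110, 120);
	return 0;
}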
@ -881,7 +927,6 @@ int drm_fb_helper_init(struct drm_device *dev,
|
|||
if (!fb_helper->crtc_info[i].mode_set.connectors)
|
||||
goto out_free;
|
||||
fb_helper->crtc_info[i].mode_set.num_connectors = 0;
|
||||
fb_helper->crtc_info[i].rotation = DRM_MODE_ROTATE_0;
|
||||
}
|
||||
|
||||
i = 0;
|
||||
|
@ -1509,6 +1554,7 @@ static int setcmap_atomic(struct fb_cmap *cmap, struct fb_info *info)
|
|||
int drm_fb_helper_setcmap(struct fb_cmap *cmap, struct fb_info *info)
|
||||
{
|
||||
struct drm_fb_helper *fb_helper = info->par;
|
||||
struct drm_device *dev = fb_helper->dev;
|
||||
int ret;
|
||||
|
||||
if (oops_in_progress)
|
||||
|
@ -1516,9 +1562,9 @@ int drm_fb_helper_setcmap(struct fb_cmap *cmap, struct fb_info *info)
|
|||
|
||||
mutex_lock(&fb_helper->lock);
|
||||
|
||||
if (!drm_fb_helper_is_bound(fb_helper)) {
|
||||
if (!drm_master_internal_acquire(dev)) {
|
||||
ret = -EBUSY;
|
||||
goto out;
|
||||
goto unlock;
|
||||
}
|
||||
|
||||
if (info->fix.visual == FB_VISUAL_TRUECOLOR)
|
||||
|
@ -1528,7 +1574,8 @@ int drm_fb_helper_setcmap(struct fb_cmap *cmap, struct fb_info *info)
|
|||
else
|
||||
ret = setcmap_legacy(cmap, info);
|
||||
|
||||
out:
|
||||
drm_master_internal_release(dev);
|
||||
unlock:
|
||||
mutex_unlock(&fb_helper->lock);
|
||||
|
||||
return ret;
|
||||
|
@ -1548,12 +1595,13 @@ int drm_fb_helper_ioctl(struct fb_info *info, unsigned int cmd,
|
|||
unsigned long arg)
|
||||
{
|
||||
struct drm_fb_helper *fb_helper = info->par;
|
||||
struct drm_device *dev = fb_helper->dev;
|
||||
struct drm_mode_set *mode_set;
|
||||
struct drm_crtc *crtc;
|
||||
int ret = 0;
|
||||
|
||||
mutex_lock(&fb_helper->lock);
|
||||
if (!drm_fb_helper_is_bound(fb_helper)) {
|
||||
if (!drm_master_internal_acquire(dev)) {
|
||||
ret = -EBUSY;
|
||||
goto unlock;
|
||||
}
|
||||
|
@ -1591,11 +1639,12 @@ int drm_fb_helper_ioctl(struct fb_info *info, unsigned int cmd,
|
|||
}
|
||||
|
||||
ret = 0;
|
||||
goto unlock;
|
||||
break;
|
||||
default:
|
||||
ret = -ENOTTY;
|
||||
}
|
||||
|
||||
drm_master_internal_release(dev);
|
||||
unlock:
|
||||
mutex_unlock(&fb_helper->lock);
|
||||
return ret;
|
||||
|
@ -1847,15 +1896,18 @@ int drm_fb_helper_pan_display(struct fb_var_screeninfo *var,
|
|||
return -EBUSY;
|
||||
|
||||
mutex_lock(&fb_helper->lock);
|
||||
if (!drm_fb_helper_is_bound(fb_helper)) {
|
||||
mutex_unlock(&fb_helper->lock);
|
||||
return -EBUSY;
|
||||
if (!drm_master_internal_acquire(dev)) {
|
||||
ret = -EBUSY;
|
||||
goto unlock;
|
||||
}
|
||||
|
||||
if (drm_drv_uses_atomic_modeset(dev))
|
||||
ret = pan_display_atomic(var, info);
|
||||
else
|
||||
ret = pan_display_legacy(var, info);
|
||||
|
||||
drm_master_internal_release(dev);
|
||||
unlock:
|
||||
mutex_unlock(&fb_helper->lock);
|
||||
|
||||
return ret;
|
||||
|
@ -1979,16 +2031,16 @@ static int drm_fb_helper_single_fb_probe(struct drm_fb_helper *fb_helper,
|
|||
*/
|
||||
bool lastv = true, lasth = true;
|
||||
|
||||
desired_mode = fb_helper->crtc_info[i].desired_mode;
|
||||
mode_set = &fb_helper->crtc_info[i].mode_set;
|
||||
desired_mode = mode_set->mode;
|
||||
|
||||
if (!desired_mode)
|
||||
continue;
|
||||
|
||||
crtc_count++;
|
||||
|
||||
x = fb_helper->crtc_info[i].x;
|
||||
y = fb_helper->crtc_info[i].y;
|
||||
x = mode_set->x;
|
||||
y = mode_set->y;
|
||||
|
||||
sizes.surface_width = max_t(u32, desired_mode->hdisplay + x, sizes.surface_width);
|
||||
sizes.surface_height = max_t(u32, desired_mode->vdisplay + y, sizes.surface_height);
|
||||
|
@ -2014,7 +2066,7 @@ static int drm_fb_helper_single_fb_probe(struct drm_fb_helper *fb_helper,
|
|||
DRM_INFO("Cannot find any crtc or sizes\n");
|
||||
|
||||
/* First time: disable all crtc's.. */
|
||||
if (!fb_helper->deferred_setup && !READ_ONCE(fb_helper->dev->master))
|
||||
if (!fb_helper->deferred_setup)
|
||||
restore_fbdev_mode(fb_helper);
|
||||
return -EAGAIN;
|
||||
}
|
||||
|
@ -2503,62 +2555,6 @@ static int drm_pick_crtcs(struct drm_fb_helper *fb_helper,
|
|||
return best_score;
|
||||
}
|
||||
|
||||
/*
|
||||
* This function checks if rotation is necessary because of panel orientation
|
||||
* and if it is, if it is supported.
|
||||
* If rotation is necessary and supported, it gets set in fb_crtc.rotation.
|
||||
* If rotation is necessary but not supported, a DRM_MODE_ROTATE_* flag gets
|
||||
* or-ed into fb_helper->sw_rotations. In drm_setup_crtcs_fb() we check if only
|
||||
* one bit is set and then we set fb_info.fbcon_rotate_hint to make fbcon do
|
||||
* the unsupported rotation.
|
||||
*/
|
||||
static void drm_setup_crtc_rotation(struct drm_fb_helper *fb_helper,
|
||||
struct drm_fb_helper_crtc *fb_crtc,
|
||||
struct drm_connector *connector)
|
||||
{
|
||||
struct drm_plane *plane = fb_crtc->mode_set.crtc->primary;
|
||||
uint64_t valid_mask = 0;
|
||||
int i, rotation;
|
||||
|
||||
fb_crtc->rotation = DRM_MODE_ROTATE_0;
|
||||
|
||||
switch (connector->display_info.panel_orientation) {
|
||||
case DRM_MODE_PANEL_ORIENTATION_BOTTOM_UP:
|
||||
rotation = DRM_MODE_ROTATE_180;
|
||||
break;
|
||||
case DRM_MODE_PANEL_ORIENTATION_LEFT_UP:
|
||||
rotation = DRM_MODE_ROTATE_90;
|
||||
break;
|
||||
case DRM_MODE_PANEL_ORIENTATION_RIGHT_UP:
|
||||
rotation = DRM_MODE_ROTATE_270;
|
||||
break;
|
||||
default:
|
||||
rotation = DRM_MODE_ROTATE_0;
|
||||
}
|
||||
|
||||
/*
|
||||
* TODO: support 90 / 270 degree hardware rotation,
|
||||
* depending on the hardware this may require the framebuffer
|
||||
* to be in a specific tiling format.
|
||||
*/
|
||||
if (rotation != DRM_MODE_ROTATE_180 || !plane->rotation_property) {
|
||||
fb_helper->sw_rotations |= rotation;
|
||||
return;
|
||||
}
|
||||
|
||||
for (i = 0; i < plane->rotation_property->num_values; i++)
|
||||
valid_mask |= (1ULL << plane->rotation_property->values[i]);
|
||||
|
||||
if (!(rotation & valid_mask)) {
|
||||
fb_helper->sw_rotations |= rotation;
|
||||
return;
|
||||
}
|
||||
|
||||
fb_crtc->rotation = rotation;
|
||||
/* Rotating in hardware, fbcon should not rotate */
|
||||
fb_helper->sw_rotations |= DRM_MODE_ROTATE_0;
|
||||
}
|
||||
|
||||
static struct drm_fb_helper_crtc *
|
||||
drm_fb_helper_crtc(struct drm_fb_helper *fb_helper, struct drm_crtc *crtc)
|
||||
{
|
||||
|
@ -2805,7 +2801,6 @@ static void drm_setup_crtcs(struct drm_fb_helper *fb_helper,
|
|||
drm_fb_helper_modeset_release(fb_helper,
|
||||
&fb_helper->crtc_info[i].mode_set);
|
||||
|
||||
fb_helper->sw_rotations = 0;
|
||||
drm_fb_helper_for_each_connector(fb_helper, i) {
|
||||
struct drm_display_mode *mode = modes[i];
|
||||
struct drm_fb_helper_crtc *fb_crtc = crtcs[i];
|
||||
|
@ -2819,13 +2814,8 @@ static void drm_setup_crtcs(struct drm_fb_helper *fb_helper,
|
|||
DRM_DEBUG_KMS("desired mode %s set on crtc %d (%d,%d)\n",
|
||||
mode->name, fb_crtc->mode_set.crtc->base.id, offset->x, offset->y);
|
||||
|
||||
fb_crtc->desired_mode = mode;
|
||||
fb_crtc->x = offset->x;
|
||||
fb_crtc->y = offset->y;
|
||||
modeset->mode = drm_mode_duplicate(dev,
|
||||
fb_crtc->desired_mode);
|
||||
modeset->mode = drm_mode_duplicate(dev, mode);
|
||||
drm_connector_get(connector);
|
||||
drm_setup_crtc_rotation(fb_helper, fb_crtc, connector);
|
||||
modeset->connectors[modeset->num_connectors++] = connector;
|
||||
modeset->x = offset->x;
|
||||
modeset->y = offset->y;
|
||||
|
@ -2848,11 +2838,23 @@ static void drm_setup_crtcs(struct drm_fb_helper *fb_helper,
|
|||
static void drm_setup_crtcs_fb(struct drm_fb_helper *fb_helper)
|
||||
{
|
||||
struct fb_info *info = fb_helper->fbdev;
|
||||
unsigned int rotation, sw_rotations = 0;
|
||||
int i;
|
||||
|
||||
for (i = 0; i < fb_helper->crtc_count; i++)
|
||||
if (fb_helper->crtc_info[i].mode_set.num_connectors)
|
||||
fb_helper->crtc_info[i].mode_set.fb = fb_helper->fb;
|
||||
for (i = 0; i < fb_helper->crtc_count; i++) {
|
||||
struct drm_mode_set *modeset = &fb_helper->crtc_info[i].mode_set;
|
||||
|
||||
if (!modeset->num_connectors)
|
||||
continue;
|
||||
|
||||
modeset->fb = fb_helper->fb;
|
||||
|
||||
if (drm_fb_helper_panel_rotation(modeset, &rotation))
|
||||
/* Rotating in hardware, fbcon should not rotate */
|
||||
sw_rotations |= DRM_MODE_ROTATE_0;
|
||||
else
|
||||
sw_rotations |= rotation;
|
||||
}
|
||||
|
||||
mutex_lock(&fb_helper->dev->mode_config.mutex);
|
||||
drm_fb_helper_for_each_connector(fb_helper, i) {
|
||||
|
@ -2868,7 +2870,7 @@ static void drm_setup_crtcs_fb(struct drm_fb_helper *fb_helper)
|
|||
}
|
||||
mutex_unlock(&fb_helper->dev->mode_config.mutex);
|
||||
|
||||
switch (fb_helper->sw_rotations) {
|
||||
switch (sw_rotations) {
|
||||
case DRM_MODE_ROTATE_0:
|
||||
info->fbcon_rotate_hint = FB_ROTATE_UR;
|
||||
break;
|
||||
|
@ -3041,12 +3043,14 @@ int drm_fb_helper_hotplug_event(struct drm_fb_helper *fb_helper)
|
|||
return err;
|
||||
}
|
||||
|
||||
if (!fb_helper->fb || !drm_fb_helper_is_bound(fb_helper)) {
|
||||
if (!fb_helper->fb || !drm_master_internal_acquire(fb_helper->dev)) {
|
||||
fb_helper->delayed_hotplug = true;
|
||||
mutex_unlock(&fb_helper->lock);
|
||||
return err;
|
||||
}
|
||||
|
||||
drm_master_internal_release(fb_helper->dev);
|
||||
|
||||
DRM_DEBUG_KMS("\n");
|
||||
|
||||
drm_setup_crtcs(fb_helper, fb_helper->fb->width, fb_helper->fb->height);
|
||||
|
|
|
@ -100,8 +100,6 @@ DEFINE_MUTEX(drm_global_mutex);
|
|||
* :ref:`IOCTL support in the userland interfaces chapter<drm_driver_ioctl>`.
|
||||
*/
|
||||
|
||||
static int drm_open_helper(struct file *filp, struct drm_minor *minor);
|
||||
|
||||
/**
|
||||
* drm_file_alloc - allocate file context
|
||||
* @minor: minor to allocate on
|
||||
|
@ -273,76 +271,6 @@ static void drm_close_helper(struct file *filp)
|
|||
drm_file_free(file_priv);
|
||||
}
|
||||
|
||||
static int drm_setup(struct drm_device * dev)
|
||||
{
|
||||
int ret;
|
||||
|
||||
if (dev->driver->firstopen &&
|
||||
drm_core_check_feature(dev, DRIVER_LEGACY)) {
|
||||
ret = dev->driver->firstopen(dev);
|
||||
if (ret != 0)
|
||||
return ret;
|
||||
}
|
||||
|
||||
ret = drm_legacy_dma_setup(dev);
|
||||
if (ret < 0)
|
||||
return ret;
|
||||
|
||||
|
||||
DRM_DEBUG("\n");
|
||||
return 0;
|
||||
}
|
||||
|
||||
/**
|
||||
* drm_open - open method for DRM file
|
||||
* @inode: device inode
|
||||
* @filp: file pointer.
|
||||
*
|
||||
* This function must be used by drivers as their &file_operations.open method.
|
||||
* It looks up the correct DRM device and instantiates all the per-file
|
||||
* resources for it. It also calls the &drm_driver.open driver callback.
|
||||
*
|
||||
* RETURNS:
|
||||
*
|
||||
* 0 on success or negative errno value on failure.
|
||||
*/
|
||||
int drm_open(struct inode *inode, struct file *filp)
|
||||
{
|
||||
struct drm_device *dev;
|
||||
struct drm_minor *minor;
|
||||
int retcode;
|
||||
int need_setup = 0;
|
||||
|
||||
minor = drm_minor_acquire(iminor(inode));
|
||||
if (IS_ERR(minor))
|
||||
return PTR_ERR(minor);
|
||||
|
||||
dev = minor->dev;
|
||||
if (!dev->open_count++)
|
||||
need_setup = 1;
|
||||
|
||||
/* share address_space across all char-devs of a single device */
|
||||
filp->f_mapping = dev->anon_inode->i_mapping;
|
||||
|
||||
retcode = drm_open_helper(filp, minor);
|
||||
if (retcode)
|
||||
goto err_undo;
|
||||
if (need_setup) {
|
||||
retcode = drm_setup(dev);
|
||||
if (retcode) {
|
||||
drm_close_helper(filp);
|
||||
goto err_undo;
|
||||
}
|
||||
}
|
||||
return 0;
|
||||
|
||||
err_undo:
|
||||
dev->open_count--;
|
||||
drm_minor_release(minor);
|
||||
return retcode;
|
||||
}
|
||||
EXPORT_SYMBOL(drm_open);
|
||||
|
||||
/*
|
||||
* Check whether DRI will run on this CPU.
|
||||
*
|
||||
|
@ -424,6 +352,56 @@ static int drm_open_helper(struct file *filp, struct drm_minor *minor)
|
|||
return 0;
|
||||
}
|
||||
|
||||
/**
|
||||
* drm_open - open method for DRM file
|
||||
* @inode: device inode
|
||||
* @filp: file pointer.
|
||||
*
|
||||
* This function must be used by drivers as their &file_operations.open method.
|
||||
* It looks up the correct DRM device and instantiates all the per-file
|
||||
* resources for it. It also calls the &drm_driver.open driver callback.
|
||||
*
|
||||
* RETURNS:
|
||||
*
|
||||
* 0 on success or negative errno value on failure.
|
||||
*/
|
||||
int drm_open(struct inode *inode, struct file *filp)
|
||||
{
|
||||
struct drm_device *dev;
|
||||
struct drm_minor *minor;
|
||||
int retcode;
|
||||
int need_setup = 0;
|
||||
|
||||
minor = drm_minor_acquire(iminor(inode));
|
||||
if (IS_ERR(minor))
|
||||
return PTR_ERR(minor);
|
||||
|
||||
dev = minor->dev;
|
||||
if (!dev->open_count++)
|
||||
need_setup = 1;
|
||||
|
||||
/* share address_space across all char-devs of a single device */
|
||||
filp->f_mapping = dev->anon_inode->i_mapping;
|
||||
|
||||
retcode = drm_open_helper(filp, minor);
|
||||
if (retcode)
|
||||
goto err_undo;
|
||||
if (need_setup) {
|
||||
retcode = drm_legacy_setup(dev);
|
||||
if (retcode) {
|
||||
drm_close_helper(filp);
|
||||
goto err_undo;
|
||||
}
|
||||
}
|
||||
return 0;
|
||||
|
||||
err_undo:
|
||||
dev->open_count--;
|
||||
drm_minor_release(minor);
|
||||
return retcode;
|
||||
}
|
||||
EXPORT_SYMBOL(drm_open);
|
||||
|
||||
void drm_lastclose(struct drm_device * dev)
|
||||
{
|
||||
DRM_DEBUG("\n");
|
||||
|
|
|
@ -36,7 +36,7 @@ static unsigned int clip_offset(struct drm_rect *clip,
|
|||
void drm_fb_memcpy(void *dst, void *vaddr, struct drm_framebuffer *fb,
|
||||
struct drm_rect *clip)
|
||||
{
|
||||
unsigned int cpp = drm_format_plane_cpp(fb->format->format, 0);
|
||||
unsigned int cpp = fb->format->cpp[0];
|
||||
size_t len = (clip->x2 - clip->x1) * cpp;
|
||||
unsigned int y, lines = clip->y2 - clip->y1;
|
||||
|
||||
|
@ -63,7 +63,7 @@ void drm_fb_memcpy_dstclip(void __iomem *dst, void *vaddr,
|
|||
struct drm_framebuffer *fb,
|
||||
struct drm_rect *clip)
|
||||
{
|
||||
unsigned int cpp = drm_format_plane_cpp(fb->format->format, 0);
|
||||
unsigned int cpp = fb->format->cpp[0];
|
||||
unsigned int offset = clip_offset(clip, fb->pitches[0], cpp);
|
||||
size_t len = (clip->x2 - clip->x1) * cpp;
|
||||
unsigned int y, lines = clip->y2 - clip->y1;
|
||||
|
|
|
@ -332,124 +332,6 @@ drm_get_format_info(struct drm_device *dev,
|
|||
}
|
||||
EXPORT_SYMBOL(drm_get_format_info);
|
||||
|
||||
/**
|
||||
* drm_format_num_planes - get the number of planes for format
|
||||
* @format: pixel format (DRM_FORMAT_*)
|
||||
*
|
||||
* Returns:
|
||||
* The number of planes used by the specified pixel format.
|
||||
*/
|
||||
int drm_format_num_planes(uint32_t format)
|
||||
{
|
||||
const struct drm_format_info *info;
|
||||
|
||||
info = drm_format_info(format);
|
||||
return info ? info->num_planes : 1;
|
||||
}
|
||||
EXPORT_SYMBOL(drm_format_num_planes);
|
||||
|
||||
/**
|
||||
* drm_format_plane_cpp - determine the bytes per pixel value
|
||||
* @format: pixel format (DRM_FORMAT_*)
|
||||
* @plane: plane index
|
||||
*
|
||||
* Returns:
|
||||
* The bytes per pixel value for the specified plane.
|
||||
*/
|
||||
int drm_format_plane_cpp(uint32_t format, int plane)
|
||||
{
|
||||
const struct drm_format_info *info;
|
||||
|
||||
info = drm_format_info(format);
|
||||
if (!info || plane >= info->num_planes)
|
||||
return 0;
|
||||
|
||||
return info->cpp[plane];
|
||||
}
|
||||
EXPORT_SYMBOL(drm_format_plane_cpp);
|
||||
|
||||
/**
|
||||
* drm_format_horz_chroma_subsampling - get the horizontal chroma subsampling factor
|
||||
* @format: pixel format (DRM_FORMAT_*)
|
||||
*
|
||||
* Returns:
|
||||
* The horizontal chroma subsampling factor for the
|
||||
* specified pixel format.
|
||||
*/
|
||||
int drm_format_horz_chroma_subsampling(uint32_t format)
|
||||
{
|
||||
const struct drm_format_info *info;
|
||||
|
||||
info = drm_format_info(format);
|
||||
return info ? info->hsub : 1;
|
||||
}
|
||||
EXPORT_SYMBOL(drm_format_horz_chroma_subsampling);
|
||||
|
||||
/**
|
||||
* drm_format_vert_chroma_subsampling - get the vertical chroma subsampling factor
|
||||
* @format: pixel format (DRM_FORMAT_*)
|
||||
*
|
||||
* Returns:
|
||||
* The vertical chroma subsampling factor for the
|
||||
* specified pixel format.
|
||||
*/
|
||||
int drm_format_vert_chroma_subsampling(uint32_t format)
|
||||
{
|
||||
const struct drm_format_info *info;
|
||||
|
||||
info = drm_format_info(format);
|
||||
return info ? info->vsub : 1;
|
||||
}
|
||||
EXPORT_SYMBOL(drm_format_vert_chroma_subsampling);
|
||||
|
||||
/**
|
||||
* drm_format_plane_width - width of the plane given the first plane
|
||||
* @width: width of the first plane
|
||||
* @format: pixel format
|
||||
* @plane: plane index
|
||||
*
|
||||
* Returns:
|
||||
* The width of @plane, given that the width of the first plane is @width.
|
||||
*/
|
||||
int drm_format_plane_width(int width, uint32_t format, int plane)
|
||||
{
|
||||
const struct drm_format_info *info;
|
||||
|
||||
info = drm_format_info(format);
|
||||
if (!info || plane >= info->num_planes)
|
||||
return 0;
|
||||
|
||||
if (plane == 0)
|
||||
return width;
|
||||
|
||||
return width / info->hsub;
|
||||
}
|
||||
EXPORT_SYMBOL(drm_format_plane_width);
|
||||
|
||||
/**
|
||||
* drm_format_plane_height - height of the plane given the first plane
|
||||
* @height: height of the first plane
|
||||
* @format: pixel format
|
||||
* @plane: plane index
|
||||
*
|
||||
* Returns:
|
||||
* The height of @plane, given that the height of the first plane is @height.
|
||||
*/
|
||||
int drm_format_plane_height(int height, uint32_t format, int plane)
|
||||
{
|
||||
const struct drm_format_info *info;
|
||||
|
||||
info = drm_format_info(format);
|
||||
if (!info || plane >= info->num_planes)
|
||||
return 0;
|
||||
|
||||
if (plane == 0)
|
||||
return height;
|
||||
|
||||
return height / info->vsub;
|
||||
}
|
||||
EXPORT_SYMBOL(drm_format_plane_height);
|
||||
|
||||
/**
|
||||
* drm_format_info_block_width - width in pixels of block.
|
||||
* @info: pixel format info
|
||||
|
|
drivers/gpu/drm/drm_gem_vram_helper.c | 772 lines (new file)
@@ -0,0 +1,772 @@
// SPDX-License-Identifier: GPL-2.0-or-later
|
||||
|
||||
#include <drm/drm_gem_vram_helper.h>
|
||||
#include <drm/drm_device.h>
|
||||
#include <drm/drm_mode.h>
|
||||
#include <drm/drm_prime.h>
|
||||
#include <drm/drm_vram_mm_helper.h>
|
||||
#include <drm/ttm/ttm_page_alloc.h>
|
||||
|
||||
/**
|
||||
* DOC: overview
|
||||
*
|
||||
* This library provides a GEM buffer object that is backed by video RAM
|
||||
* (VRAM). It can be used for framebuffer devices with dedicated memory.
|
||||
*/
|
||||
|
||||
/*
|
||||
* Buffer-objects helpers
|
||||
*/
|
||||
|
||||
static void drm_gem_vram_cleanup(struct drm_gem_vram_object *gbo)
|
||||
{
|
||||
/* We got here via ttm_bo_put(), which means that the
|
||||
* TTM buffer object in 'bo' has already been cleaned
|
||||
* up; only release the GEM object.
|
||||
*/
|
||||
drm_gem_object_release(&gbo->gem);
|
||||
}
|
||||
|
||||
static void drm_gem_vram_destroy(struct drm_gem_vram_object *gbo)
|
||||
{
|
||||
drm_gem_vram_cleanup(gbo);
|
||||
kfree(gbo);
|
||||
}
|
||||
|
||||
static void ttm_buffer_object_destroy(struct ttm_buffer_object *bo)
|
||||
{
|
||||
struct drm_gem_vram_object *gbo = drm_gem_vram_of_bo(bo);
|
||||
|
||||
drm_gem_vram_destroy(gbo);
|
||||
}
|
||||
|
||||
static void drm_gem_vram_placement(struct drm_gem_vram_object *gbo,
|
||||
unsigned long pl_flag)
|
||||
{
|
||||
unsigned int i;
|
||||
unsigned int c = 0;
|
||||
|
||||
gbo->placement.placement = gbo->placements;
|
||||
gbo->placement.busy_placement = gbo->placements;
|
||||
|
||||
if (pl_flag & TTM_PL_FLAG_VRAM)
|
||||
gbo->placements[c++].flags = TTM_PL_FLAG_WC |
|
||||
TTM_PL_FLAG_UNCACHED |
|
||||
TTM_PL_FLAG_VRAM;
|
||||
|
||||
if (pl_flag & TTM_PL_FLAG_SYSTEM)
|
||||
gbo->placements[c++].flags = TTM_PL_MASK_CACHING |
|
||||
TTM_PL_FLAG_SYSTEM;
|
||||
|
||||
if (!c)
|
||||
gbo->placements[c++].flags = TTM_PL_MASK_CACHING |
|
||||
TTM_PL_FLAG_SYSTEM;
|
||||
|
||||
gbo->placement.num_placement = c;
|
||||
gbo->placement.num_busy_placement = c;
|
||||
|
||||
for (i = 0; i < c; ++i) {
|
||||
gbo->placements[i].fpfn = 0;
|
||||
gbo->placements[i].lpfn = 0;
|
||||
}
|
||||
}
|
||||
|
||||
static int drm_gem_vram_init(struct drm_device *dev,
|
||||
struct ttm_bo_device *bdev,
|
||||
struct drm_gem_vram_object *gbo,
|
||||
size_t size, unsigned long pg_align,
|
||||
bool interruptible)
|
||||
{
|
||||
int ret;
|
||||
size_t acc_size;
|
||||
|
||||
ret = drm_gem_object_init(dev, &gbo->gem, size);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
acc_size = ttm_bo_dma_acc_size(bdev, size, sizeof(*gbo));
|
||||
|
||||
gbo->bo.bdev = bdev;
|
||||
drm_gem_vram_placement(gbo, TTM_PL_FLAG_VRAM | TTM_PL_FLAG_SYSTEM);
|
||||
|
||||
ret = ttm_bo_init(bdev, &gbo->bo, size, ttm_bo_type_device,
|
||||
&gbo->placement, pg_align, interruptible, acc_size,
|
||||
NULL, NULL, ttm_buffer_object_destroy);
|
||||
if (ret)
|
||||
goto err_drm_gem_object_release;
|
||||
|
||||
return 0;
|
||||
|
||||
err_drm_gem_object_release:
|
||||
drm_gem_object_release(&gbo->gem);
|
||||
return ret;
|
||||
}
|
||||
|
||||
/**
|
||||
* drm_gem_vram_create() - Creates a VRAM-backed GEM object
|
||||
* @dev: the DRM device
|
||||
* @bdev: the TTM BO device backing the object
|
||||
* @size: the buffer size in bytes
|
||||
* @pg_align: the buffer's alignment in multiples of the page size
|
||||
* @interruptible: sleep interruptible if waiting for memory
|
||||
*
|
||||
* Returns:
|
||||
* A new instance of &struct drm_gem_vram_object on success, or
|
||||
* an ERR_PTR()-encoded error code otherwise.
|
||||
*/
|
||||
struct drm_gem_vram_object *drm_gem_vram_create(struct drm_device *dev,
|
||||
struct ttm_bo_device *bdev,
|
||||
size_t size,
|
||||
unsigned long pg_align,
|
||||
bool interruptible)
|
||||
{
|
||||
struct drm_gem_vram_object *gbo;
|
||||
int ret;
|
||||
|
||||
gbo = kzalloc(sizeof(*gbo), GFP_KERNEL);
|
||||
if (!gbo)
|
||||
return ERR_PTR(-ENOMEM);
|
||||
|
||||
ret = drm_gem_vram_init(dev, bdev, gbo, size, pg_align, interruptible);
|
||||
if (ret < 0)
|
||||
goto err_kfree;
|
||||
|
||||
return gbo;
|
||||
|
||||
err_kfree:
|
||||
kfree(gbo);
|
||||
return ERR_PTR(ret);
|
||||
}
|
||||
EXPORT_SYMBOL(drm_gem_vram_create);
|
||||
|
||||
/**
|
||||
* drm_gem_vram_put() - Releases a reference to a VRAM-backed GEM object
|
||||
* @gbo: the GEM VRAM object
|
||||
*
|
||||
* See ttm_bo_put() for more information.
|
||||
*/
|
||||
void drm_gem_vram_put(struct drm_gem_vram_object *gbo)
|
||||
{
|
||||
ttm_bo_put(&gbo->bo);
|
||||
}
|
||||
EXPORT_SYMBOL(drm_gem_vram_put);
|
||||
|
||||
/**
|
||||
* drm_gem_vram_lock() - Locks a VRAM-backed GEM object
|
||||
* @gbo: the GEM VRAM object
|
||||
* @no_wait: don't wait for buffer object to become available
|
||||
*
|
||||
* See ttm_bo_reserve() for more information.
|
||||
*
|
||||
* Returns:
|
||||
* 0 on success, or
|
||||
* a negative error code otherwise
|
||||
*/
|
||||
int drm_gem_vram_lock(struct drm_gem_vram_object *gbo, bool no_wait)
|
||||
{
|
||||
return ttm_bo_reserve(&gbo->bo, true, no_wait, NULL);
|
||||
}
|
||||
EXPORT_SYMBOL(drm_gem_vram_lock);
|
||||
|
||||
/**
|
||||
* drm_gem_vram_unlock() - \
|
||||
Release a reservation acquired by drm_gem_vram_lock()
|
||||
* @gbo: the GEM VRAM object
|
||||
*
|
||||
* See ttm_bo_unreserve() for more information.
|
||||
*/
|
||||
void drm_gem_vram_unlock(struct drm_gem_vram_object *gbo)
|
||||
{
|
||||
ttm_bo_unreserve(&gbo->bo);
|
||||
}
|
||||
EXPORT_SYMBOL(drm_gem_vram_unlock);
|
||||
|
||||
/**
|
||||
* drm_gem_vram_mmap_offset() - Returns a GEM VRAM object's mmap offset
|
||||
* @gbo: the GEM VRAM object
|
||||
*
|
||||
* See drm_vma_node_offset_addr() for more information.
|
||||
*
|
||||
* Returns:
|
||||
* The buffer object's offset for userspace mappings on success, or
|
||||
* 0 if no offset is allocated.
|
||||
*/
|
||||
u64 drm_gem_vram_mmap_offset(struct drm_gem_vram_object *gbo)
|
||||
{
|
||||
return drm_vma_node_offset_addr(&gbo->bo.vma_node);
|
||||
}
|
||||
EXPORT_SYMBOL(drm_gem_vram_mmap_offset);
|
||||
|
||||
/**
|
||||
* drm_gem_vram_offset() - \
|
||||
Returns a GEM VRAM object's offset in video memory
|
||||
* @gbo: the GEM VRAM object
|
||||
*
|
||||
* This function returns the buffer object's offset in the device's video
|
||||
* memory. The buffer object has to be pinned to %TTM_PL_VRAM.
|
||||
*
|
||||
* Returns:
|
||||
* The buffer object's offset in video memory on success, or
|
||||
* a negative errno code otherwise.
|
||||
*/
|
||||
s64 drm_gem_vram_offset(struct drm_gem_vram_object *gbo)
|
||||
{
|
||||
if (WARN_ON_ONCE(!gbo->pin_count))
|
||||
return (s64)-ENODEV;
|
||||
return gbo->bo.offset;
|
||||
}
|
||||
EXPORT_SYMBOL(drm_gem_vram_offset);
|
||||
|
||||
/**
|
||||
* drm_gem_vram_pin() - Pins a GEM VRAM object in a region.
|
||||
* @gbo: the GEM VRAM object
|
||||
* @pl_flag: a bitmask of possible memory regions
|
||||
*
|
||||
* Pinning a buffer object ensures that it is not evicted from
|
||||
* a memory region. A pinned buffer object has to be unpinned before
|
||||
* it can be pinned to another region.
|
||||
*
|
||||
* Returns:
|
||||
* 0 on success, or
|
||||
* a negative error code otherwise.
|
||||
*/
|
||||
int drm_gem_vram_pin(struct drm_gem_vram_object *gbo, unsigned long pl_flag)
|
||||
{
|
||||
int i, ret;
|
||||
struct ttm_operation_ctx ctx = { false, false };
|
||||
|
||||
ret = ttm_bo_reserve(&gbo->bo, true, false, NULL);
|
||||
if (ret < 0)
|
||||
return ret;
|
||||
|
||||
if (gbo->pin_count)
|
||||
goto out;
|
||||
|
||||
drm_gem_vram_placement(gbo, pl_flag);
|
||||
for (i = 0; i < gbo->placement.num_placement; ++i)
|
||||
gbo->placements[i].flags |= TTM_PL_FLAG_NO_EVICT;
|
||||
|
||||
ret = ttm_bo_validate(&gbo->bo, &gbo->placement, &ctx);
|
||||
if (ret < 0)
|
||||
goto err_ttm_bo_unreserve;
|
||||
|
||||
out:
|
||||
++gbo->pin_count;
|
||||
ttm_bo_unreserve(&gbo->bo);
|
||||
|
||||
return 0;
|
||||
|
||||
err_ttm_bo_unreserve:
|
||||
ttm_bo_unreserve(&gbo->bo);
|
||||
return ret;
|
||||
}
|
||||
EXPORT_SYMBOL(drm_gem_vram_pin);
|
||||
|
||||
/**
|
||||
* drm_gem_vram_pin_locked() - Pins a GEM VRAM object in a region.
|
||||
* @gbo: the GEM VRAM object
|
||||
* @pl_flag: a bitmask of possible memory regions
|
||||
*
|
||||
* Pinning a buffer object ensures that it is not evicted from
|
||||
* a memory region. A pinned buffer object has to be unpinned before
|
||||
* it can be pinned to another region.
|
||||
*
|
||||
* This function pins a GEM VRAM object that has already been
|
||||
* locked. Use drm_gem_vram_pin() if possible.
|
||||
*
|
||||
* Returns:
|
||||
* 0 on success, or
|
||||
* a negative error code otherwise.
|
||||
*/
|
||||
int drm_gem_vram_pin_locked(struct drm_gem_vram_object *gbo,
|
||||
unsigned long pl_flag)
|
||||
{
|
||||
int i, ret;
|
||||
struct ttm_operation_ctx ctx = { false, false };
|
||||
|
||||
lockdep_assert_held(&gbo->bo.resv->lock.base);
|
||||
|
||||
if (gbo->pin_count) {
|
||||
++gbo->pin_count;
|
||||
return 0;
|
||||
}
|
||||
|
||||
drm_gem_vram_placement(gbo, pl_flag);
|
||||
for (i = 0; i < gbo->placement.num_placement; ++i)
|
||||
gbo->placements[i].flags |= TTM_PL_FLAG_NO_EVICT;
|
||||
|
||||
ret = ttm_bo_validate(&gbo->bo, &gbo->placement, &ctx);
|
||||
if (ret < 0)
|
||||
return ret;
|
||||
|
||||
gbo->pin_count = 1;
|
||||
|
||||
return 0;
|
||||
}
|
||||
EXPORT_SYMBOL(drm_gem_vram_pin_locked);
|
||||
|
||||
/**
|
||||
* drm_gem_vram_unpin() - Unpins a GEM VRAM object
|
||||
* @gbo: the GEM VRAM object
|
||||
*
|
||||
* Returns:
|
||||
* 0 on success, or
|
||||
* a negative error code otherwise.
|
||||
*/
|
||||
int drm_gem_vram_unpin(struct drm_gem_vram_object *gbo)
|
||||
{
|
||||
int i, ret;
|
||||
struct ttm_operation_ctx ctx = { false, false };
|
||||
|
||||
ret = ttm_bo_reserve(&gbo->bo, true, false, NULL);
|
||||
if (ret < 0)
|
||||
return ret;
|
||||
|
||||
if (WARN_ON_ONCE(!gbo->pin_count))
|
||||
goto out;
|
||||
|
||||
--gbo->pin_count;
|
||||
if (gbo->pin_count)
|
||||
goto out;
|
||||
|
||||
for (i = 0; i < gbo->placement.num_placement ; ++i)
|
||||
gbo->placements[i].flags &= ~TTM_PL_FLAG_NO_EVICT;
|
||||
|
||||
ret = ttm_bo_validate(&gbo->bo, &gbo->placement, &ctx);
|
||||
if (ret < 0)
|
||||
goto err_ttm_bo_unreserve;
|
||||
|
||||
out:
|
||||
ttm_bo_unreserve(&gbo->bo);
|
||||
|
||||
return 0;
|
||||
|
||||
err_ttm_bo_unreserve:
|
||||
ttm_bo_unreserve(&gbo->bo);
|
||||
return ret;
|
||||
}
|
||||
EXPORT_SYMBOL(drm_gem_vram_unpin);
|
||||
|
||||
/**
|
||||
* drm_gem_vram_unpin_locked() - Unpins a GEM VRAM object
|
||||
* @gbo: the GEM VRAM object
|
||||
*
|
||||
* This function unpins a GEM VRAM object that has already been
|
||||
* locked. Use drm_gem_vram_unpin() if possible.
|
||||
*
|
||||
* Returns:
|
||||
* 0 on success, or
|
||||
* a negative error code otherwise.
|
||||
*/
|
||||
int drm_gem_vram_unpin_locked(struct drm_gem_vram_object *gbo)
|
||||
{
|
||||
int i, ret;
|
||||
struct ttm_operation_ctx ctx = { false, false };
|
||||
|
||||
lockdep_assert_held(&gbo->bo.resv->lock.base);
|
||||
|
||||
if (WARN_ON_ONCE(!gbo->pin_count))
|
||||
return 0;
|
||||
|
||||
--gbo->pin_count;
|
||||
if (gbo->pin_count)
|
||||
return 0;
|
||||
|
||||
for (i = 0; i < gbo->placement.num_placement ; ++i)
|
||||
gbo->placements[i].flags &= ~TTM_PL_FLAG_NO_EVICT;
|
||||
|
||||
ret = ttm_bo_validate(&gbo->bo, &gbo->placement, &ctx);
|
||||
if (ret < 0)
|
||||
return ret;
|
||||
|
||||
return 0;
|
||||
}
|
||||
EXPORT_SYMBOL(drm_gem_vram_unpin_locked);
|
||||
|
||||
/**
|
||||
* drm_gem_vram_kmap_at() - Maps a GEM VRAM object into kernel address space
|
||||
* @gbo: the GEM VRAM object
|
||||
* @map: establish a mapping if necessary
|
||||
* @is_iomem: returns true if the mapped memory is I/O memory, or false \
|
||||
otherwise; can be NULL
|
||||
* @kmap: the mapping's kmap object
|
||||
*
|
||||
* This function maps the buffer object into the kernel's address space
|
||||
* or returns the current mapping. If the parameter map is false, the
|
||||
* function only queries the current mapping, but does not establish a
|
||||
* new one.
|
||||
*
|
||||
* Returns:
|
||||
* The buffers virtual address if mapped, or
|
||||
* NULL if not mapped, or
|
||||
* an ERR_PTR()-encoded error code otherwise.
|
||||
*/
|
||||
void *drm_gem_vram_kmap_at(struct drm_gem_vram_object *gbo, bool map,
|
||||
bool *is_iomem, struct ttm_bo_kmap_obj *kmap)
|
||||
{
|
||||
int ret;
|
||||
|
||||
if (kmap->virtual || !map)
|
||||
goto out;
|
||||
|
||||
ret = ttm_bo_kmap(&gbo->bo, 0, gbo->bo.num_pages, kmap);
|
||||
if (ret)
|
||||
return ERR_PTR(ret);
|
||||
|
||||
out:
|
||||
if (!is_iomem)
|
||||
return kmap->virtual;
|
||||
if (!kmap->virtual) {
|
||||
*is_iomem = false;
|
||||
return NULL;
|
||||
}
|
||||
return ttm_kmap_obj_virtual(kmap, is_iomem);
|
||||
}
|
||||
EXPORT_SYMBOL(drm_gem_vram_kmap_at);
|
||||
|
||||
/**
|
||||
* drm_gem_vram_kmap() - Maps a GEM VRAM object into kernel address space
|
||||
* @gbo: the GEM VRAM object
|
||||
* @map: establish a mapping if necessary
|
||||
* @is_iomem: returns true if the mapped memory is I/O memory, or false \
|
||||
otherwise; can be NULL
|
||||
*
|
||||
* This function maps the buffer object into the kernel's address space
|
||||
* or returns the current mapping. If the parameter map is false, the
|
||||
* function only queries the current mapping, but does not establish a
|
||||
* new one.
|
||||
*
|
||||
* Returns:
|
||||
* The buffers virtual address if mapped, or
|
||||
* NULL if not mapped, or
|
||||
* an ERR_PTR()-encoded error code otherwise.
|
||||
*/
|
||||
void *drm_gem_vram_kmap(struct drm_gem_vram_object *gbo, bool map,
|
||||
bool *is_iomem)
|
||||
{
|
||||
return drm_gem_vram_kmap_at(gbo, map, is_iomem, &gbo->kmap);
|
||||
}
|
||||
EXPORT_SYMBOL(drm_gem_vram_kmap);
|
||||
|
||||
/**
|
||||
* drm_gem_vram_kunmap_at() - Unmaps a GEM VRAM object
|
||||
* @gbo: the GEM VRAM object
|
||||
* @kmap: the mapping's kmap object
|
||||
*/
|
||||
void drm_gem_vram_kunmap_at(struct drm_gem_vram_object *gbo,
|
||||
struct ttm_bo_kmap_obj *kmap)
|
||||
{
|
||||
if (!kmap->virtual)
|
||||
return;
|
||||
|
||||
ttm_bo_kunmap(kmap);
|
||||
kmap->virtual = NULL;
|
||||
}
|
||||
EXPORT_SYMBOL(drm_gem_vram_kunmap_at);
|
||||
|
||||
/**
|
||||
* drm_gem_vram_kunmap() - Unmaps a GEM VRAM object
|
||||
* @gbo: the GEM VRAM object
|
||||
*/
|
||||
void drm_gem_vram_kunmap(struct drm_gem_vram_object *gbo)
|
||||
{
|
||||
drm_gem_vram_kunmap_at(gbo, &gbo->kmap);
|
||||
}
|
||||
EXPORT_SYMBOL(drm_gem_vram_kunmap);
|
||||
|
||||
/**
|
||||
* drm_gem_vram_fill_create_dumb() - \
|
||||
Helper for implementing &struct drm_driver.dumb_create
|
||||
* @file: the DRM file
|
||||
* @dev: the DRM device
|
||||
* @bdev: the TTM BO device managing the buffer object
|
||||
* @pg_align: the buffer's alignment in multiples of the page size
|
||||
* @interruptible: sleep interruptible if waiting for memory
|
||||
* @args: the arguments as provided to \
|
||||
&struct drm_driver.dumb_create
|
||||
*
|
||||
* This helper function fills &struct drm_mode_create_dumb, which is used
|
||||
* by &struct drm_driver.dumb_create. Implementations of this interface
|
||||
* should forward their arguments to this helper, plus the driver-specific
|
||||
* parameters.
|
||||
*
|
||||
* Returns:
|
||||
* 0 on success, or
|
||||
* a negative error code otherwise.
|
||||
*/
|
||||
int drm_gem_vram_fill_create_dumb(struct drm_file *file,
|
||||
struct drm_device *dev,
|
||||
struct ttm_bo_device *bdev,
|
||||
unsigned long pg_align,
|
||||
bool interruptible,
|
||||
struct drm_mode_create_dumb *args)
|
||||
{
|
||||
size_t pitch, size;
|
||||
struct drm_gem_vram_object *gbo;
|
||||
int ret;
|
||||
u32 handle;
|
||||
|
||||
pitch = args->width * ((args->bpp + 7) / 8);
|
||||
size = pitch * args->height;
|
||||
|
||||
size = roundup(size, PAGE_SIZE);
|
||||
if (!size)
|
||||
return -EINVAL;
|
||||
|
||||
gbo = drm_gem_vram_create(dev, bdev, size, pg_align, interruptible);
|
||||
if (IS_ERR(gbo))
|
||||
return PTR_ERR(gbo);
|
||||
|
||||
ret = drm_gem_handle_create(file, &gbo->gem, &handle);
|
||||
if (ret)
|
||||
goto err_drm_gem_object_put_unlocked;
|
||||
|
||||
drm_gem_object_put_unlocked(&gbo->gem);
|
||||
|
||||
args->pitch = pitch;
|
||||
args->size = size;
|
||||
args->handle = handle;
|
||||
|
||||
return 0;
|
||||
|
||||
err_drm_gem_object_put_unlocked:
|
||||
drm_gem_object_put_unlocked(&gbo->gem);
|
||||
return ret;
|
||||
}
|
||||
EXPORT_SYMBOL(drm_gem_vram_fill_create_dumb);
|
||||
|
||||
/*
|
||||
* Helpers for struct ttm_bo_driver
|
||||
*/
|
||||
|
||||
static bool drm_is_gem_vram(struct ttm_buffer_object *bo)
|
||||
{
|
||||
return (bo->destroy == ttm_buffer_object_destroy);
|
||||
}
|
||||
|
||||
/**
|
||||
* drm_gem_vram_bo_driver_evict_flags() - \
|
||||
Implements &struct ttm_bo_driver.evict_flags
|
||||
* @bo: TTM buffer object. Refers to &struct drm_gem_vram_object.bo
|
||||
* @pl: TTM placement information.
|
||||
*/
|
||||
void drm_gem_vram_bo_driver_evict_flags(struct ttm_buffer_object *bo,
|
||||
struct ttm_placement *pl)
|
||||
{
|
||||
struct drm_gem_vram_object *gbo;
|
||||
|
||||
/* TTM may pass BOs that are not GEM VRAM BOs. */
|
||||
if (!drm_is_gem_vram(bo))
|
||||
return;
|
||||
|
||||
gbo = drm_gem_vram_of_bo(bo);
|
||||
drm_gem_vram_placement(gbo, TTM_PL_FLAG_SYSTEM);
|
||||
*pl = gbo->placement;
|
||||
}
|
||||
EXPORT_SYMBOL(drm_gem_vram_bo_driver_evict_flags);
|
||||
|
||||
/**
|
||||
* drm_gem_vram_bo_driver_verify_access() - \
|
||||
Implements &struct ttm_bo_driver.verify_access
|
||||
* @bo: TTM buffer object. Refers to &struct drm_gem_vram_object.bo
|
||||
* @filp: File pointer.
|
||||
*
|
||||
* Returns:
|
||||
* 0 on success, or
|
||||
* a negative errno code otherwise.
|
||||
*/
|
||||
int drm_gem_vram_bo_driver_verify_access(struct ttm_buffer_object *bo,
|
||||
struct file *filp)
|
||||
{
|
||||
struct drm_gem_vram_object *gbo = drm_gem_vram_of_bo(bo);
|
||||
|
||||
return drm_vma_node_verify_access(&gbo->gem.vma_node,
|
||||
filp->private_data);
|
||||
}
|
||||
EXPORT_SYMBOL(drm_gem_vram_bo_driver_verify_access);
|
||||
|
||||
/**
|
||||
* drm_gem_vram_mm_funcs - Functions for &struct drm_vram_mm
|
||||
*
|
||||
* Most users of @struct drm_gem_vram_object will also use
|
||||
* @struct drm_vram_mm. This instance of &struct drm_vram_mm_funcs
|
||||
* can be used to connect both.
|
||||
*/
|
||||
const struct drm_vram_mm_funcs drm_gem_vram_mm_funcs = {
|
||||
.evict_flags = drm_gem_vram_bo_driver_evict_flags,
|
||||
.verify_access = drm_gem_vram_bo_driver_verify_access
|
||||
};
|
||||
EXPORT_SYMBOL(drm_gem_vram_mm_funcs);
|
||||
|
||||
/*
|
||||
* Helpers for struct drm_driver
|
||||
*/
|
||||
|
||||
/**
|
||||
* drm_gem_vram_driver_gem_free_object_unlocked() - \
|
||||
Implements &struct drm_driver.gem_free_object_unlocked
|
||||
* @gem: GEM object. Refers to &struct drm_gem_vram_object.gem
|
||||
*/
|
||||
void drm_gem_vram_driver_gem_free_object_unlocked(struct drm_gem_object *gem)
|
||||
{
|
||||
struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem);
|
||||
|
||||
drm_gem_vram_put(gbo);
|
||||
}
|
||||
EXPORT_SYMBOL(drm_gem_vram_driver_gem_free_object_unlocked);
|
||||
|
||||
/**
|
||||
* drm_gem_vram_driver_create_dumb() - \
|
||||
Implements &struct drm_driver.dumb_create
|
||||
* @file: the DRM file
|
||||
* @dev: the DRM device
|
||||
* @args: the arguments as provided to \
|
||||
&struct drm_driver.dumb_create
|
||||
*
|
||||
* This function requires the driver to use @drm_device.vram_mm for its
|
||||
* instance of VRAM MM.
|
||||
*
|
||||
* Returns:
|
||||
* 0 on success, or
|
||||
* a negative error code otherwise.
|
||||
*/
|
||||
int drm_gem_vram_driver_dumb_create(struct drm_file *file,
|
||||
struct drm_device *dev,
|
||||
struct drm_mode_create_dumb *args)
|
||||
{
|
||||
if (WARN_ONCE(!dev->vram_mm, "VRAM MM not initialized"))
|
||||
return -EINVAL;
|
||||
|
||||
return drm_gem_vram_fill_create_dumb(file, dev, &dev->vram_mm->bdev, 0,
|
||||
false, args);
|
||||
}
|
||||
EXPORT_SYMBOL(drm_gem_vram_driver_dumb_create);
|
||||
|
||||
/**
|
||||
* drm_gem_vram_driver_dumb_mmap_offset() - \
|
||||
Implements &struct drm_driver.dumb_mmap_offset
|
||||
* @file: DRM file pointer.
|
||||
* @dev: DRM device.
|
||||
* @handle: GEM handle
|
||||
* @offset: Returns the mapping's memory offset on success
|
||||
*
|
||||
* Returns:
|
||||
* 0 on success, or
|
||||
* a negative errno code otherwise.
|
||||
*/
|
||||
int drm_gem_vram_driver_dumb_mmap_offset(struct drm_file *file,
|
||||
struct drm_device *dev,
|
||||
uint32_t handle, uint64_t *offset)
|
||||
{
|
||||
struct drm_gem_object *gem;
|
||||
struct drm_gem_vram_object *gbo;
|
||||
|
||||
gem = drm_gem_object_lookup(file, handle);
|
||||
if (!gem)
|
||||
return -ENOENT;
|
||||
|
||||
gbo = drm_gem_vram_of_gem(gem);
|
||||
*offset = drm_gem_vram_mmap_offset(gbo);
|
||||
|
||||
drm_gem_object_put_unlocked(gem);
|
||||
|
||||
return 0;
|
||||
}
|
||||
EXPORT_SYMBOL(drm_gem_vram_driver_dumb_mmap_offset);
|
||||
|
||||
/*
|
||||
* PRIME helpers for struct drm_driver
|
||||
*/
|
||||
|
||||
/**
|
||||
* drm_gem_vram_driver_gem_prime_pin() - \
|
||||
Implements &struct drm_driver.gem_prime_pin
|
||||
* @gem: The GEM object to pin
|
||||
*
|
||||
* Returns:
|
||||
* 0 on success, or
|
||||
* a negative errno code otherwise.
|
||||
*/
|
||||
int drm_gem_vram_driver_gem_prime_pin(struct drm_gem_object *gem)
|
||||
{
|
||||
struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem);
|
||||
|
||||
return drm_gem_vram_pin(gbo, DRM_GEM_VRAM_PL_FLAG_VRAM);
|
||||
}
|
||||
EXPORT_SYMBOL(drm_gem_vram_driver_gem_prime_pin);
|
||||
|
||||
/**
|
||||
* drm_gem_vram_driver_gem_prime_unpin() - \
|
||||
Implements &struct drm_driver.gem_prime_unpin
|
||||
* @gem: The GEM object to unpin
|
||||
*/
|
||||
void drm_gem_vram_driver_gem_prime_unpin(struct drm_gem_object *gem)
|
||||
{
|
||||
struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem);
|
||||
|
||||
drm_gem_vram_unpin(gbo);
|
||||
}
|
||||
EXPORT_SYMBOL(drm_gem_vram_driver_gem_prime_unpin);
|
||||
|
||||
/**
|
||||
* drm_gem_vram_driver_gem_prime_vmap() - \
|
||||
Implements &struct drm_driver.gem_prime_vmap
|
||||
* @gem: The GEM object to map
|
||||
*
|
||||
* Returns:
|
||||
* The buffers virtual address on success, or
|
||||
* NULL otherwise.
|
||||
*/
|
||||
void *drm_gem_vram_driver_gem_prime_vmap(struct drm_gem_object *gem)
|
||||
{
|
||||
struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem);
|
||||
int ret;
|
||||
void *base;
|
||||
|
||||
ret = drm_gem_vram_pin(gbo, DRM_GEM_VRAM_PL_FLAG_VRAM);
|
||||
if (ret)
|
||||
return NULL;
|
||||
base = drm_gem_vram_kmap(gbo, true, NULL);
|
||||
if (IS_ERR(base)) {
|
||||
drm_gem_vram_unpin(gbo);
|
||||
return NULL;
|
||||
}
|
||||
return base;
|
||||
}
|
||||
EXPORT_SYMBOL(drm_gem_vram_driver_gem_prime_vmap);
|
||||
|
||||
/**
|
||||
* drm_gem_vram_driver_gem_prime_vunmap() - \
|
||||
Implements &struct drm_driver.gem_prime_vunmap
|
||||
* @gem: The GEM object to unmap
|
||||
* @vaddr: The mapping's base address
|
||||
*/
|
||||
void drm_gem_vram_driver_gem_prime_vunmap(struct drm_gem_object *gem,
|
||||
void *vaddr)
|
||||
{
|
||||
struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem);
|
||||
|
||||
drm_gem_vram_kunmap(gbo);
|
||||
drm_gem_vram_unpin(gbo);
|
||||
}
|
||||
EXPORT_SYMBOL(drm_gem_vram_driver_gem_prime_vunmap);
|
||||
|
||||
/**
|
||||
* drm_gem_vram_driver_gem_prime_mmap() - \
|
||||
Implements &struct drm_driver.gem_prime_mmap
|
||||
* @gem: The GEM object to map
|
||||
* @vma: The VMA describing the mapping
|
||||
*
|
||||
* Returns:
|
||||
* 0 on success, or
|
||||
* a negative errno code otherwise.
|
||||
*/
|
||||
int drm_gem_vram_driver_gem_prime_mmap(struct drm_gem_object *gem,
|
||||
struct vm_area_struct *vma)
|
||||
{
|
||||
struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem);
|
||||
|
||||
gbo->gem.vma_node.vm_node.start = gbo->bo.vma_node.vm_node.start;
|
||||
return drm_gem_prime_mmap(gem, vma);
|
||||
}
|
||||
EXPORT_SYMBOL(drm_gem_vram_driver_gem_prime_mmap);
|
|
@@ -93,6 +93,8 @@ int drm_dropmaster_ioctl(struct drm_device *dev, void *data,
 			 struct drm_file *file_priv);
 int drm_master_open(struct drm_file *file_priv);
 void drm_master_release(struct drm_file *file_priv);
+bool drm_master_internal_acquire(struct drm_device *dev);
+void drm_master_internal_release(struct drm_device *dev);

 /* drm_sysfs.c */
 extern struct class *drm_class;
@@ -187,10 +187,12 @@ int drm_legacy_sg_free(struct drm_device *dev, void *data,
void drm_legacy_init_members(struct drm_device *dev);
void drm_legacy_destroy_members(struct drm_device *dev);
void drm_legacy_dev_reinit(struct drm_device *dev);
int drm_legacy_setup(struct drm_device * dev);
#else
static inline void drm_legacy_init_members(struct drm_device *dev) {}
static inline void drm_legacy_destroy_members(struct drm_device *dev) {}
static inline void drm_legacy_dev_reinit(struct drm_device *dev) {}
static inline int drm_legacy_setup(struct drm_device * dev) { return 0; }
#endif

#if IS_ENABLED(CONFIG_DRM_LEGACY)
@@ -51,6 +51,26 @@ void drm_legacy_destroy_members(struct drm_device *dev)
    mutex_destroy(&dev->ctxlist_mutex);
}

int drm_legacy_setup(struct drm_device * dev)
{
    int ret;

    if (dev->driver->firstopen &&
        drm_core_check_feature(dev, DRIVER_LEGACY)) {
        ret = dev->driver->firstopen(dev);
        if (ret != 0)
            return ret;
    }

    ret = drm_legacy_dma_setup(dev);
    if (ret < 0)
        return ret;


    DRM_DEBUG("\n");
    return 0;
}

void drm_legacy_dev_reinit(struct drm_device *dev)
{
    if (dev->irq_enabled)
@@ -86,11 +86,6 @@ struct drm_prime_member {
    struct rb_node handle_rb;
};

struct drm_prime_attachment {
    struct sg_table *sgt;
    enum dma_data_direction dir;
};

static int drm_prime_add_buf_handle(struct drm_prime_file_private *prime_fpriv,
                                    struct dma_buf *dma_buf, uint32_t handle)
{
@@ -188,25 +183,16 @@ static int drm_prime_lookup_buf_handle(struct drm_prime_file_private *prime_fpri
 * @dma_buf: buffer to attach device to
 * @attach: buffer attachment data
 *
 * Allocates &drm_prime_attachment and calls &drm_driver.gem_prime_pin for
 * device specific attachment. This can be used as the &dma_buf_ops.attach
 * callback.
 * Calls &drm_driver.gem_prime_pin for device specific handling. This can be
 * used as the &dma_buf_ops.attach callback.
 *
 * Returns 0 on success, negative error code on failure.
 */
int drm_gem_map_attach(struct dma_buf *dma_buf,
                       struct dma_buf_attachment *attach)
{
    struct drm_prime_attachment *prime_attach;
    struct drm_gem_object *obj = dma_buf->priv;

    prime_attach = kzalloc(sizeof(*prime_attach), GFP_KERNEL);
    if (!prime_attach)
        return -ENOMEM;

    prime_attach->dir = DMA_NONE;
    attach->priv = prime_attach;

    return drm_gem_pin(obj);
}
EXPORT_SYMBOL(drm_gem_map_attach);
@@ -222,26 +208,8 @@ EXPORT_SYMBOL(drm_gem_map_attach);
void drm_gem_map_detach(struct dma_buf *dma_buf,
                        struct dma_buf_attachment *attach)
{
    struct drm_prime_attachment *prime_attach = attach->priv;
    struct drm_gem_object *obj = dma_buf->priv;

    if (prime_attach) {
        struct sg_table *sgt = prime_attach->sgt;

        if (sgt) {
            if (prime_attach->dir != DMA_NONE)
                dma_unmap_sg_attrs(attach->dev, sgt->sgl,
                                   sgt->nents,
                                   prime_attach->dir,
                                   DMA_ATTR_SKIP_CPU_SYNC);
            sg_free_table(sgt);
        }

        kfree(sgt);
        kfree(prime_attach);
        attach->priv = NULL;
    }

    drm_gem_unpin(obj);
}
EXPORT_SYMBOL(drm_gem_map_detach);
@@ -286,39 +254,22 @@ void drm_prime_remove_buf_handle_locked(struct drm_prime_file_private *prime_fpr
struct sg_table *drm_gem_map_dma_buf(struct dma_buf_attachment *attach,
                                     enum dma_data_direction dir)
{
    struct drm_prime_attachment *prime_attach = attach->priv;
    struct drm_gem_object *obj = attach->dmabuf->priv;
    struct sg_table *sgt;

    if (WARN_ON(dir == DMA_NONE || !prime_attach))
    if (WARN_ON(dir == DMA_NONE))
        return ERR_PTR(-EINVAL);

    /* return the cached mapping when possible */
    if (prime_attach->dir == dir)
        return prime_attach->sgt;

    /*
     * two mappings with different directions for the same attachment are
     * not allowed
     */
    if (WARN_ON(prime_attach->dir != DMA_NONE))
        return ERR_PTR(-EBUSY);

    if (obj->funcs)
        sgt = obj->funcs->get_sg_table(obj);
    else
        sgt = obj->dev->driver->gem_prime_get_sg_table(obj);

    if (!IS_ERR(sgt)) {
        if (!dma_map_sg_attrs(attach->dev, sgt->sgl, sgt->nents, dir,
                              DMA_ATTR_SKIP_CPU_SYNC)) {
            sg_free_table(sgt);
            kfree(sgt);
            sgt = ERR_PTR(-ENOMEM);
        } else {
            prime_attach->sgt = sgt;
            prime_attach->dir = dir;
        }
    if (!dma_map_sg_attrs(attach->dev, sgt->sgl, sgt->nents, dir,
                          DMA_ATTR_SKIP_CPU_SYNC)) {
        sg_free_table(sgt);
        kfree(sgt);
        sgt = ERR_PTR(-ENOMEM);
    }

    return sgt;
@@ -331,14 +282,19 @@ EXPORT_SYMBOL(drm_gem_map_dma_buf);
 * @sgt: scatterlist info of the buffer to unmap
 * @dir: direction of DMA transfer
 *
 * Not implemented. The unmap is done at drm_gem_map_detach(). This can be
 * used as the &dma_buf_ops.unmap_dma_buf callback.
 * This can be used as the &dma_buf_ops.unmap_dma_buf callback.
 */
void drm_gem_unmap_dma_buf(struct dma_buf_attachment *attach,
                           struct sg_table *sgt,
                           enum dma_data_direction dir)
{
    /* nothing to be done here */
    if (!sgt)
        return;

    dma_unmap_sg_attrs(attach->dev, sgt->sgl, sgt->nents, dir,
                       DMA_ATTR_SKIP_CPU_SYNC);
    sg_free_table(sgt);
    kfree(sgt);
}
EXPORT_SYMBOL(drm_gem_unmap_dma_buf);

@@ -452,6 +408,7 @@ int drm_gem_dmabuf_mmap(struct dma_buf *dma_buf, struct vm_area_struct *vma)
EXPORT_SYMBOL(drm_gem_dmabuf_mmap);

static const struct dma_buf_ops drm_gem_prime_dmabuf_ops = {
    .cache_sgt_mapping = true,
    .attach = drm_gem_map_attach,
    .detach = drm_gem_map_detach,
    .map_dma_buf = drm_gem_map_dma_buf,
drivers/gpu/drm/drm_vram_helper_common.c (new file, 96 lines)
@@ -0,0 +1,96 @@
// SPDX-License-Identifier: GPL-2.0-or-later

#include <linux/module.h>

/**
 * DOC: overview
 *
 * This library provides &struct drm_gem_vram_object (GEM VRAM), a GEM
 * buffer object that is backed by video RAM. It can be used for
 * framebuffer devices with dedicated memory. The video RAM can be
 * managed with &struct drm_vram_mm (VRAM MM). Both data structures are
 * supposed to be used together, but can also be used individually.
 *
 * With the GEM interface userspace applications create, manage and destroy
 * graphics buffers, such as an on-screen framebuffer. GEM does not provide
 * an implementation of these interfaces. It's up to the DRM driver to
 * provide an implementation that suits the hardware. If the hardware device
 * contains dedicated video memory, the DRM driver can use the VRAM helper
 * library. Each active buffer object is stored in video RAM. Active
 * buffer are used for drawing the current frame, typically something like
 * the frame's scanout buffer or the cursor image. If there's no more space
 * left in VRAM, inactive GEM objects can be moved to system memory.
 *
 * The easiest way to use the VRAM helper library is to call
 * drm_vram_helper_alloc_mm(). The function allocates and initializes an
 * instance of &struct drm_vram_mm in &struct drm_device.vram_mm . Use
 * &DRM_GEM_VRAM_DRIVER to initialize &struct drm_driver and
 * &DRM_VRAM_MM_FILE_OPERATIONS to initialize &struct file_operations;
 * as illustrated below.
 *
 * .. code-block:: c
 *
 *	struct file_operations fops ={
 *		.owner = THIS_MODULE,
 *		DRM_VRAM_MM_FILE_OPERATION
 *	};
 *	struct drm_driver drv = {
 *		.driver_feature = DRM_ ... ,
 *		.fops = &fops,
 *		DRM_GEM_VRAM_DRIVER
 *	};
 *
 *	int init_drm_driver()
 *	{
 *		struct drm_device *dev;
 *		uint64_t vram_base;
 *		unsigned long vram_size;
 *		int ret;
 *
 *		// setup device, vram base and size
 *		// ...
 *
 *		ret = drm_vram_helper_alloc_mm(dev, vram_base, vram_size,
 *					       &drm_gem_vram_mm_funcs);
 *		if (ret)
 *			return ret;
 *		return 0;
 *	}
 *
 * This creates an instance of &struct drm_vram_mm, exports DRM userspace
 * interfaces for GEM buffer management and initializes file operations to
 * allow for accessing created GEM buffers. With this setup, the DRM driver
 * manages an area of video RAM with VRAM MM and provides GEM VRAM objects
 * to userspace.
 *
 * To clean up the VRAM memory management, call drm_vram_helper_release_mm()
 * in the driver's clean-up code.
 *
 * .. code-block:: c
 *
 *	void fini_drm_driver()
 *	{
 *		struct drm_device *dev = ...;
 *
 *		drm_vram_helper_release_mm(dev);
 *	}
 *
 * For drawing or scanout operations, buffer object have to be pinned in video
 * RAM. Call drm_gem_vram_pin() with &DRM_GEM_VRAM_PL_FLAG_VRAM or
 * &DRM_GEM_VRAM_PL_FLAG_SYSTEM to pin a buffer object in video RAM or system
 * memory. Call drm_gem_vram_unpin() to release the pinned object afterwards.
 *
 * A buffer object that is pinned in video RAM has a fixed address within that
 * memory region. Call drm_gem_vram_offset() to retrieve this value. Typically
 * it's used to program the hardware's scanout engine for framebuffers, set
 * the cursor overlay's image for a mouse cursor, or use it as input to the
 * hardware's draing engine.
 *
 * To access a buffer object's memory from the DRM driver, call
 * drm_gem_vram_kmap(). It (optionally) maps the buffer into kernel address
 * space and returns the memory address. Use drm_gem_vram_kunmap() to
 * release the mapping.
 */

MODULE_DESCRIPTION("DRM VRAM memory-management helpers");
MODULE_LICENSE("GPL");
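The pin/offset flow described in the DOC comment above can be made concrete with a short sketch. It is not part of this patch: my_hw_set_scanout_address() is a made-up placeholder for driver-specific register programming, and it assumes the drm_gem_vram_offset() helper referenced in the overview returns the pinned object's VRAM offset (or a negative value on error).

/*
 * Illustrative sketch, not part of this patch. The buffer object stays
 * pinned in VRAM for as long as the hardware scans it out; the pin is
 * dropped once the framebuffer is replaced.
 */
static int my_driver_set_scanout(struct drm_gem_vram_object *gbo)
{
    s64 offset;
    int ret;

    ret = drm_gem_vram_pin(gbo, DRM_GEM_VRAM_PL_FLAG_VRAM);
    if (ret)
        return ret;

    offset = drm_gem_vram_offset(gbo); /* assumed: VRAM offset of the pinned BO */
    if (offset < 0) {
        drm_gem_vram_unpin(gbo);
        return (int)offset;
    }

    my_hw_set_scanout_address(offset); /* placeholder for real register writes */
    return 0;
}

static void my_driver_unset_scanout(struct drm_gem_vram_object *gbo)
{
    drm_gem_vram_unpin(gbo); /* buffer may now be evicted to system memory */
}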
drivers/gpu/drm/drm_vram_mm_helper.c (new file, 295 lines)
@@ -0,0 +1,295 @@
// SPDX-License-Identifier: GPL-2.0-or-later

#include <drm/drm_vram_mm_helper.h>
#include <drm/drmP.h>
#include <drm/ttm/ttm_page_alloc.h>

/**
 * DOC: overview
 *
 * The data structure &struct drm_vram_mm and its helpers implement a memory
 * manager for simple framebuffer devices with dedicated video memory. Buffer
 * objects are either placed in video RAM or evicted to system memory. These
 * helper functions work well with &struct drm_gem_vram_object.
 */

/*
 * TTM TT
 */

static void backend_func_destroy(struct ttm_tt *tt)
{
    ttm_tt_fini(tt);
    kfree(tt);
}

static struct ttm_backend_func backend_func = {
    .destroy = backend_func_destroy
};

/*
 * TTM BO device
 */

static struct ttm_tt *bo_driver_ttm_tt_create(struct ttm_buffer_object *bo,
                                              uint32_t page_flags)
{
    struct ttm_tt *tt;
    int ret;

    tt = kzalloc(sizeof(*tt), GFP_KERNEL);
    if (!tt)
        return NULL;

    tt->func = &backend_func;

    ret = ttm_tt_init(tt, bo, page_flags);
    if (ret < 0)
        goto err_ttm_tt_init;

    return tt;

err_ttm_tt_init:
    kfree(tt);
    return NULL;
}

static int bo_driver_init_mem_type(struct ttm_bo_device *bdev, uint32_t type,
                                   struct ttm_mem_type_manager *man)
{
    switch (type) {
    case TTM_PL_SYSTEM:
        man->flags = TTM_MEMTYPE_FLAG_MAPPABLE;
        man->available_caching = TTM_PL_MASK_CACHING;
        man->default_caching = TTM_PL_FLAG_CACHED;
        break;
    case TTM_PL_VRAM:
        man->func = &ttm_bo_manager_func;
        man->flags = TTM_MEMTYPE_FLAG_FIXED |
                     TTM_MEMTYPE_FLAG_MAPPABLE;
        man->available_caching = TTM_PL_FLAG_UNCACHED |
                                 TTM_PL_FLAG_WC;
        man->default_caching = TTM_PL_FLAG_WC;
        break;
    default:
        return -EINVAL;
    }
    return 0;
}

static void bo_driver_evict_flags(struct ttm_buffer_object *bo,
                                  struct ttm_placement *placement)
{
    struct drm_vram_mm *vmm = drm_vram_mm_of_bdev(bo->bdev);

    if (vmm->funcs && vmm->funcs->evict_flags)
        vmm->funcs->evict_flags(bo, placement);
}

static int bo_driver_verify_access(struct ttm_buffer_object *bo,
                                   struct file *filp)
{
    struct drm_vram_mm *vmm = drm_vram_mm_of_bdev(bo->bdev);

    if (!vmm->funcs || !vmm->funcs->verify_access)
        return 0;
    return vmm->funcs->verify_access(bo, filp);
}

static int bo_driver_io_mem_reserve(struct ttm_bo_device *bdev,
                                    struct ttm_mem_reg *mem)
{
    struct ttm_mem_type_manager *man = bdev->man + mem->mem_type;
    struct drm_vram_mm *vmm = drm_vram_mm_of_bdev(bdev);

    if (!(man->flags & TTM_MEMTYPE_FLAG_MAPPABLE))
        return -EINVAL;

    mem->bus.addr = NULL;
    mem->bus.size = mem->num_pages << PAGE_SHIFT;

    switch (mem->mem_type) {
    case TTM_PL_SYSTEM:	/* nothing to do */
        mem->bus.offset = 0;
        mem->bus.base = 0;
        mem->bus.is_iomem = false;
        break;
    case TTM_PL_VRAM:
        mem->bus.offset = mem->start << PAGE_SHIFT;
        mem->bus.base = vmm->vram_base;
        mem->bus.is_iomem = true;
        break;
    default:
        return -EINVAL;
    }

    return 0;
}

static void bo_driver_io_mem_free(struct ttm_bo_device *bdev,
                                  struct ttm_mem_reg *mem)
{ }

static struct ttm_bo_driver bo_driver = {
    .ttm_tt_create = bo_driver_ttm_tt_create,
    .ttm_tt_populate = ttm_pool_populate,
    .ttm_tt_unpopulate = ttm_pool_unpopulate,
    .init_mem_type = bo_driver_init_mem_type,
    .eviction_valuable = ttm_bo_eviction_valuable,
    .evict_flags = bo_driver_evict_flags,
    .verify_access = bo_driver_verify_access,
    .io_mem_reserve = bo_driver_io_mem_reserve,
    .io_mem_free = bo_driver_io_mem_free,
};

/*
 * struct drm_vram_mm
 */

/**
 * drm_vram_mm_init() - Initialize an instance of VRAM MM.
 * @vmm: the VRAM MM instance to initialize
 * @dev: the DRM device
 * @vram_base: the base address of the video memory
 * @vram_size: the size of the video memory in bytes
 * @funcs: callback functions for buffer objects
 *
 * Returns:
 * 0 on success, or
 * a negative error code otherwise.
 */
int drm_vram_mm_init(struct drm_vram_mm *vmm, struct drm_device *dev,
                     uint64_t vram_base, size_t vram_size,
                     const struct drm_vram_mm_funcs *funcs)
{
    int ret;

    vmm->vram_base = vram_base;
    vmm->vram_size = vram_size;
    vmm->funcs = funcs;

    ret = ttm_bo_device_init(&vmm->bdev, &bo_driver,
                             dev->anon_inode->i_mapping,
                             true);
    if (ret)
        return ret;

    ret = ttm_bo_init_mm(&vmm->bdev, TTM_PL_VRAM, vram_size >> PAGE_SHIFT);
    if (ret)
        return ret;

    return 0;
}
EXPORT_SYMBOL(drm_vram_mm_init);

/**
 * drm_vram_mm_cleanup() - Cleans up an initialized instance of VRAM MM.
 * @vmm: the VRAM MM instance to clean up
 */
void drm_vram_mm_cleanup(struct drm_vram_mm *vmm)
{
    ttm_bo_device_release(&vmm->bdev);
}
EXPORT_SYMBOL(drm_vram_mm_cleanup);

/**
 * drm_vram_mm_mmap() - Helper for implementing &struct file_operations.mmap()
 * @filp: the mapping's file structure
 * @vma: the mapping's memory area
 * @vmm: the VRAM MM instance
 *
 * Returns:
 * 0 on success, or
 * a negative error code otherwise.
 */
int drm_vram_mm_mmap(struct file *filp, struct vm_area_struct *vma,
                     struct drm_vram_mm *vmm)
{
    return ttm_bo_mmap(filp, vma, &vmm->bdev);
}
EXPORT_SYMBOL(drm_vram_mm_mmap);

/*
 * Helpers for integration with struct drm_device
 */

/**
 * drm_vram_helper_alloc_mm - Allocates a device's instance of \
	&struct drm_vram_mm
 * @dev: the DRM device
 * @vram_base: the base address of the video memory
 * @vram_size: the size of the video memory in bytes
 * @funcs: callback functions for buffer objects
 *
 * Returns:
 * The new instance of &struct drm_vram_mm on success, or
 * an ERR_PTR()-encoded errno code otherwise.
 */
struct drm_vram_mm *drm_vram_helper_alloc_mm(
    struct drm_device *dev, uint64_t vram_base, size_t vram_size,
    const struct drm_vram_mm_funcs *funcs)
{
    int ret;

    if (WARN_ON(dev->vram_mm))
        return dev->vram_mm;

    dev->vram_mm = kzalloc(sizeof(*dev->vram_mm), GFP_KERNEL);
    if (!dev->vram_mm)
        return ERR_PTR(-ENOMEM);

    ret = drm_vram_mm_init(dev->vram_mm, dev, vram_base, vram_size, funcs);
    if (ret)
        goto err_kfree;

    return dev->vram_mm;

err_kfree:
    kfree(dev->vram_mm);
    dev->vram_mm = NULL;
    return ERR_PTR(ret);
}
EXPORT_SYMBOL(drm_vram_helper_alloc_mm);

/**
 * drm_vram_helper_release_mm - Releases a device's instance of \
	&struct drm_vram_mm
 * @dev: the DRM device
 */
void drm_vram_helper_release_mm(struct drm_device *dev)
{
    if (!dev->vram_mm)
        return;

    drm_vram_mm_cleanup(dev->vram_mm);
    kfree(dev->vram_mm);
    dev->vram_mm = NULL;
}
EXPORT_SYMBOL(drm_vram_helper_release_mm);

/*
 * Helpers for &struct file_operations
 */

/**
 * drm_vram_mm_file_operations_mmap() - \
	Implements &struct file_operations.mmap()
 * @filp: the mapping's file structure
 * @vma: the mapping's memory area
 *
 * Returns:
 * 0 on success, or
 * a negative error code otherwise.
 */
int drm_vram_mm_file_operations_mmap(
    struct file *filp, struct vm_area_struct *vma)
{
    struct drm_file *file_priv = filp->private_data;
    struct drm_device *dev = file_priv->minor->dev;

    if (WARN_ONCE(!dev->vram_mm, "VRAM MM not initialized"))
        return -EINVAL;

    return drm_vram_mm_mmap(filp, vma, dev->vram_mm);
}
EXPORT_SYMBOL(drm_vram_mm_file_operations_mmap);
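To show how the pieces above fit together, here is a sketch of a driver wiring the mmap helper into its file_operations by hand and managing the VRAM MM instance over the device's lifetime. It is not part of this patch; real drivers would normally use the DRM_VRAM_MM_FILE_OPERATIONS macro from the overview instead of open-coding the fops entry, the my_driver_* names are placeholders, and error handling is reduced to the minimum.

/* Illustrative sketch, not part of this patch. */
static const struct file_operations my_driver_fops = {
    .owner          = THIS_MODULE,
    .open           = drm_open,
    .release        = drm_release,
    .unlocked_ioctl = drm_ioctl,
    .poll           = drm_poll,
    .read           = drm_read,
    .mmap           = drm_vram_mm_file_operations_mmap, /* the helper above */
};

static int my_driver_init_vram(struct drm_device *dev, uint64_t vram_base,
                               size_t vram_size)
{
    struct drm_vram_mm *vmm;

    /* drm_gem_vram_mm_funcs is the callback set from the GEM VRAM helpers */
    vmm = drm_vram_helper_alloc_mm(dev, vram_base, vram_size,
                                   &drm_gem_vram_mm_funcs);
    return IS_ERR(vmm) ? PTR_ERR(vmm) : 0;
}

static void my_driver_fini_vram(struct drm_device *dev)
{
    drm_vram_helper_release_mm(dev); /* frees dev->vram_mm */
}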
@@ -118,7 +118,6 @@ void etnaviv_core_dump(struct etnaviv_gpu *gpu)
    unsigned int n_obj, n_bomap_pages;
    size_t file_size, mmu_size;
    __le64 *bomap, *bomap_start;
    unsigned long flags;

    /* Only catch the first event, or when manually re-armed */
    if (!etnaviv_dump_core)
@@ -135,13 +134,11 @@ void etnaviv_core_dump(struct etnaviv_gpu *gpu)
                mmu_size + gpu->buffer.size;

    /* Add in the active command buffers */
    spin_lock_irqsave(&gpu->sched.job_list_lock, flags);
    list_for_each_entry(s_job, &gpu->sched.ring_mirror_list, node) {
        submit = to_etnaviv_submit(s_job);
        file_size += submit->cmdbuf.size;
        n_obj++;
    }
    spin_unlock_irqrestore(&gpu->sched.job_list_lock, flags);

    /* Add in the active buffer objects */
    list_for_each_entry(vram, &gpu->mmu->mappings, mmu_node) {
@@ -183,14 +180,12 @@ void etnaviv_core_dump(struct etnaviv_gpu *gpu)
                          gpu->buffer.size,
                          etnaviv_cmdbuf_get_va(&gpu->buffer));

    spin_lock_irqsave(&gpu->sched.job_list_lock, flags);
    list_for_each_entry(s_job, &gpu->sched.ring_mirror_list, node) {
        submit = to_etnaviv_submit(s_job);
        etnaviv_core_dump_mem(&iter, ETDUMP_BUF_CMD,
                              submit->cmdbuf.vaddr, submit->cmdbuf.size,
                              etnaviv_cmdbuf_get_va(&submit->cmdbuf));
    }
    spin_unlock_irqrestore(&gpu->sched.job_list_lock, flags);

    /* Reserve space for the bomap */
    if (n_bomap_pages) {

@@ -109,7 +109,7 @@ static void etnaviv_sched_timedout_job(struct drm_sched_job *sched_job)
    }

    /* block scheduler */
    drm_sched_stop(&gpu->sched);
    drm_sched_stop(&gpu->sched, sched_job);

    if(sched_job)
        drm_sched_increase_karma(sched_job);
@@ -20,24 +20,24 @@
 *
 **************************************************************************/

#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/errno.h>
#include <linux/string.h>
#include <linux/mm.h>
#include <linux/tty.h>
#include <linux/slab.h>
#include <linux/delay.h>
#include <linux/init.h>
#include <linux/console.h>
#include <linux/delay.h>
#include <linux/errno.h>
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/string.h>
#include <linux/tty.h>

#include <drm/drmP.h>
#include <drm/drm.h>
#include <drm/drm_crtc.h>
#include <drm/drm_fourcc.h>

#include "framebuffer.h"
#include "psb_drv.h"
#include "psb_reg.h"
#include "framebuffer.h"

/**
 * psb_spank - reset the 2D engine

@@ -17,6 +17,8 @@
#ifndef __BLITTER_H
#define __BLITTER_H

struct drm_psb_private;

extern int gma_blt_wait_idle(struct drm_psb_private *dev_priv);

#endif

@@ -18,15 +18,16 @@
 **************************************************************************/

#include <linux/backlight.h>
#include <drm/drmP.h>
#include <linux/delay.h>

#include <drm/drm.h>
#include <drm/gma_drm.h>
#include "psb_drv.h"
#include "psb_reg.h"
#include "psb_intel_reg.h"
#include "intel_bios.h"

#include "cdv_device.h"
#include "gma_device.h"
#include "intel_bios.h"
#include "psb_drv.h"
#include "psb_intel_reg.h"
#include "psb_reg.h"

#define VGA_SR_INDEX		0x3c4
#define VGA_SR_DATA		0x3c5

@@ -15,6 +15,10 @@
 * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
 */

struct drm_crtc;
struct drm_device;
struct psb_intel_mode_device;

extern const struct drm_crtc_helper_funcs cdv_intel_helper_funcs;
extern const struct drm_crtc_funcs cdv_intel_crtc_funcs;
extern const struct gma_clock_funcs cdv_clock_funcs;

@@ -24,16 +24,16 @@
 * Eric Anholt <eric@anholt.net>
 */

#include <linux/delay.h>
#include <linux/i2c.h>
#include <drm/drmP.h>
#include <linux/pm_runtime.h>

#include "cdv_device.h"
#include "intel_bios.h"
#include "power.h"
#include "psb_drv.h"
#include "psb_intel_drv.h"
#include "psb_intel_reg.h"
#include "power.h"
#include "cdv_device.h"
#include <linux/pm_runtime.h>


static void cdv_intel_crt_dpms(struct drm_encoder *encoder, int mode)

@@ -18,16 +18,18 @@
 * Eric Anholt <eric@anholt.net>
 */

#include <linux/delay.h>
#include <linux/i2c.h>

#include <drm/drmP.h>
#include <drm/drm_crtc.h>

#include "cdv_device.h"
#include "framebuffer.h"
#include "gma_display.h"
#include "power.h"
#include "psb_drv.h"
#include "psb_intel_drv.h"
#include "psb_intel_reg.h"
#include "gma_display.h"
#include "power.h"
#include "cdv_device.h"

static bool cdv_intel_find_dp_pll(const struct gma_limit_t *limit,
                                  struct drm_crtc *crtc, int target,

@@ -26,16 +26,17 @@
 */

#include <linux/i2c.h>
#include <linux/slab.h>
#include <linux/module.h>
#include <drm/drmP.h>
#include <linux/slab.h>

#include <drm/drm_crtc.h>
#include <drm/drm_crtc_helper.h>
#include <drm/drm_dp_helper.h>

#include "gma_display.h"
#include "psb_drv.h"
#include "psb_intel_drv.h"
#include "psb_intel_reg.h"
#include "gma_display.h"
#include <drm/drm_dp_helper.h>

/**
 * struct i2c_algo_dp_aux_data - driver interface structure for i2c over dp

@@ -27,15 +27,16 @@
 * We should probably make this generic and share it with Medfield
 */

#include <drm/drmP.h>
#include <linux/pm_runtime.h>

#include <drm/drm.h>
#include <drm/drm_crtc.h>
#include <drm/drm_edid.h>
#include "psb_intel_drv.h"
#include "psb_drv.h"
#include "psb_intel_reg.h"

#include "cdv_device.h"
#include <linux/pm_runtime.h>
#include "psb_drv.h"
#include "psb_intel_drv.h"
#include "psb_intel_reg.h"

/* hdmi control bits */
#define HDMI_NULL_PACKETS_DURING_VSYNC (1 << 9)

@@ -20,17 +20,16 @@
 * Jesse Barnes <jesse.barnes@intel.com>
 */

#include <linux/i2c.h>
#include <linux/dmi.h>
#include <drm/drmP.h>
#include <linux/i2c.h>
#include <linux/pm_runtime.h>

#include "cdv_device.h"
#include "intel_bios.h"
#include "power.h"
#include "psb_drv.h"
#include "psb_intel_drv.h"
#include "psb_intel_reg.h"
#include "power.h"
#include <linux/pm_runtime.h>
#include "cdv_device.h"

/**
 * LVDS I2C backlight control macros
@@ -17,29 +17,29 @@
 *
 **************************************************************************/

#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/errno.h>
#include <linux/string.h>
#include <linux/pfn_t.h>
#include <linux/mm.h>
#include <linux/tty.h>
#include <linux/slab.h>
#include <linux/delay.h>
#include <linux/init.h>
#include <linux/console.h>
#include <linux/delay.h>
#include <linux/errno.h>
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/module.h>
#include <linux/pfn_t.h>
#include <linux/slab.h>
#include <linux/string.h>
#include <linux/tty.h>

#include <drm/drmP.h>
#include <drm/drm.h>
#include <drm/drm_crtc.h>
#include <drm/drm_fb_helper.h>
#include <drm/drm_fourcc.h>
#include <drm/drm_gem_framebuffer_helper.h>

#include "psb_drv.h"
#include "psb_intel_reg.h"
#include "psb_intel_drv.h"
#include "framebuffer.h"
#include "gtt.h"
#include "psb_drv.h"
#include "psb_intel_drv.h"
#include "psb_intel_reg.h"

static const struct drm_framebuffer_funcs psb_fb_funcs = {
    .destroy = drm_gem_fb_destroy,
@@ -232,7 +232,7 @@ static int psb_framebuffer_init(struct drm_device *dev,
     * Reject unknown formats, YUV formats, and formats with more than
     * 4 bytes per pixel.
     */
    info = drm_format_info(mode_cmd->pixel_format);
    info = drm_get_format_info(dev, mode_cmd);
    if (!info || !info->depth || info->cpp[0] > 4)
        return -EINVAL;


@@ -22,7 +22,6 @@
#ifndef _FRAMEBUFFER_H_
#define _FRAMEBUFFER_H_

#include <drm/drmP.h>
#include <drm/drm_fb_helper.h>

#include "psb_drv.h"

@@ -23,10 +23,11 @@
 * accelerated operations on a GEM object)
 */

#include <drm/drmP.h>
#include <linux/pagemap.h>

#include <drm/drm.h>
#include <drm/gma_drm.h>
#include <drm/drm_vma_manager.h>

#include "psb_drv.h"

void psb_gem_free_object(struct drm_gem_object *obj)

@@ -13,7 +13,6 @@
 *
 **************************************************************************/

#include <drm/drmP.h>
#include "psb_drv.h"

void gma_get_core_freq(struct drm_device *dev)

@@ -15,6 +15,7 @@

#ifndef _GMA_DEVICE_H
#define _GMA_DEVICE_H
struct drm_device;

extern void gma_get_core_freq(struct drm_device *dev);


@@ -19,12 +19,18 @@
 * Patrik Jakobsson <patrik.r.jakobsson@gmail.com>
 */

#include <drm/drmP.h>
#include <linux/delay.h>
#include <linux/highmem.h>

#include <drm/drm_crtc.h>
#include <drm/drm_fourcc.h>
#include <drm/drm_vblank.h>

#include "framebuffer.h"
#include "gma_display.h"
#include "psb_drv.h"
#include "psb_intel_drv.h"
#include "psb_intel_reg.h"
#include "psb_drv.h"
#include "framebuffer.h"

/**
 * Returns whether any output on the specified pipe is of the specified type

@@ -24,6 +24,9 @@

#include <linux/pm_runtime.h>

struct drm_encoder;
struct drm_mode_set;

struct gma_clock_t {
    /* given values */
    int n;

@@ -19,11 +19,12 @@
 * Alan Cox <alan@linux.intel.com>
 */

#include <drm/drmP.h>
#include <linux/shmem_fs.h>

#include <asm/set_memory.h>
#include "psb_drv.h"

#include "blitter.h"
#include "psb_drv.h"


/*

@@ -20,7 +20,6 @@
#ifndef _PSB_GTT_H_
#define _PSB_GTT_H_

#include <drm/drmP.h>
#include <drm/drm_gem.h>

/* This wants cleaning up with respect to the psb_dev and un-needed stuff */

@@ -18,13 +18,13 @@
 * Eric Anholt <eric@anholt.net>
 *
 */
#include <drm/drmP.h>
#include <drm/drm.h>
#include <drm/gma_drm.h>
#include <drm/drm_dp_helper.h>

#include "intel_bios.h"
#include "psb_drv.h"
#include "psb_intel_drv.h"
#include "psb_intel_reg.h"
#include "intel_bios.h"

#define	SLAVE_ADDR1	0x70
#define	SLAVE_ADDR2	0x72

@@ -22,8 +22,7 @@
#ifndef _INTEL_BIOS_H_
#define _INTEL_BIOS_H_

#include <drm/drmP.h>
#include <drm/drm_dp_helper.h>
struct drm_device;

struct vbt_header {
    u8 signature[20];		/**< Always starts with 'VBT$' */

@@ -26,13 +26,14 @@
 * Eric Anholt <eric@anholt.net>
 * Chris Wilson <chris@chris-wilson.co.uk>
 */
#include <linux/module.h>
#include <linux/i2c.h>

#include <linux/delay.h>
#include <linux/i2c-algo-bit.h>
#include <drm/drmP.h>
#include "psb_intel_drv.h"
#include <drm/gma_drm.h>
#include <linux/i2c.h>
#include <linux/module.h>

#include "psb_drv.h"
#include "psb_intel_drv.h"
#include "psb_intel_reg.h"

#define _wait_for(COND, MS, W) ({ \
@@ -17,9 +17,11 @@
 * Authors:
 * Eric Anholt <eric@anholt.net>
 */

#include <linux/delay.h>
#include <linux/export.h>
#include <linux/i2c.h>
#include <linux/i2c-algo-bit.h>
#include <linux/i2c.h>

#include "psb_drv.h"
#include "psb_intel_reg.h"

@@ -17,14 +17,16 @@
 *
 **************************************************************************/

#include "psb_drv.h"
#include "mid_bios.h"
#include "mdfld_output.h"
#include "mdfld_dsi_output.h"
#include "tc35876x-dsi-lvds.h"
#include <linux/delay.h>

#include <asm/intel_scu_ipc.h>

#include "mdfld_dsi_output.h"
#include "mdfld_output.h"
#include "mid_bios.h"
#include "psb_drv.h"
#include "tc35876x-dsi-lvds.h"

#ifdef CONFIG_BACKLIGHT_CLASS_DEVICE

#define MRST_BLC_MAX_PWM_REG_FREQ	    0xFFFF
@@ -342,7 +344,7 @@ static int mdfld_restore_display_registers(struct drm_device *dev, int pipenum)

    if (pipenum == 1) {
        /* restore palette (gamma) */
        /*DRM_UDELAY(50000); */
        /* udelay(50000); */
        for (i = 0; i < 256; i++)
            PSB_WVDC32(pipe->palette[i], map->palette + (i << 2));

@@ -404,7 +406,7 @@ static int mdfld_restore_display_registers(struct drm_device *dev, int pipenum)
    PSB_WVDC32(pipe->conf, map->conf);

    /* restore palette (gamma) */
    /*DRM_UDELAY(50000); */
    /* udelay(50000); */
    for (i = 0; i < 256; i++)
        PSB_WVDC32(pipe->palette[i], map->palette + (i << 2));


@@ -25,9 +25,11 @@
 * Jackie Li<yaodong.li@intel.com>
 */

#include <linux/delay.h>

#include "mdfld_dsi_dpi.h"
#include "mdfld_output.h"
#include "mdfld_dsi_pkg_sender.h"
#include "mdfld_output.h"
#include "psb_drv.h"
#include "tc35876x-dsi-lvds.h"


@@ -25,16 +25,18 @@
 * Jackie Li<yaodong.li@intel.com>
 */

#include <linux/module.h>

#include "mdfld_dsi_output.h"
#include "mdfld_dsi_dpi.h"
#include "mdfld_output.h"
#include "mdfld_dsi_pkg_sender.h"
#include "tc35876x-dsi-lvds.h"
#include <linux/delay.h>
#include <linux/moduleparam.h>
#include <linux/pm_runtime.h>

#include <asm/intel_scu_ipc.h>

#include "mdfld_dsi_dpi.h"
#include "mdfld_dsi_output.h"
#include "mdfld_dsi_pkg_sender.h"
#include "mdfld_output.h"
#include "tc35876x-dsi-lvds.h"

/* get the LABC from command line. */
static int LABC_control = 1;


@@ -29,17 +29,17 @@
#define __MDFLD_DSI_OUTPUT_H__

#include <linux/backlight.h>
#include <drm/drmP.h>

#include <asm/intel-mid.h>

#include <drm/drm.h>
#include <drm/drm_crtc.h>
#include <drm/drm_edid.h>

#include "mdfld_output.h"
#include "psb_drv.h"
#include "psb_intel_drv.h"
#include "psb_intel_reg.h"
#include "mdfld_output.h"

#include <asm/intel-mid.h>

#define FLD_MASK(start, end)	(((1 << ((start) - (end) + 1)) - 1) << (end))
#define FLD_VAL(val, start, end) (((val) << (end)) & FLD_MASK(start, end))

@@ -24,12 +24,14 @@
 * Jackie Li<yaodong.li@intel.com>
 */

#include <linux/delay.h>
#include <linux/freezer.h>

#include <video/mipi_display.h>

#include "mdfld_dsi_dpi.h"
#include "mdfld_dsi_output.h"
#include "mdfld_dsi_pkg_sender.h"
#include "mdfld_dsi_dpi.h"

#define MDFLD_DSI_READ_MAX_COUNT		5000


@@ -18,15 +18,18 @@
 * Eric Anholt <eric@anholt.net>
 */

#include <linux/delay.h>
#include <linux/i2c.h>
#include <linux/pm_runtime.h>

#include <drm/drmP.h>
#include "psb_intel_reg.h"
#include "gma_display.h"
#include <drm/drm_crtc.h>
#include <drm/drm_fourcc.h>

#include "framebuffer.h"
#include "mdfld_output.h"
#include "gma_display.h"
#include "mdfld_dsi_output.h"
#include "mdfld_output.h"
#include "psb_intel_reg.h"

/* Hardcoded currently */
static int ksel = KSEL_CRYSTAL_19;

@@ -27,6 +27,8 @@
 * Scott Rowe <scott.m.rowe@intel.com>
 */

#include <linux/delay.h>

#include "mdfld_dsi_dpi.h"
#include "mdfld_dsi_pkg_sender.h"


@@ -23,11 +23,10 @@
 * - Check ioremap failures
 */

#include <drm/drmP.h>
#include <drm/drm.h>
#include <drm/gma_drm.h>
#include "psb_drv.h"

#include "mid_bios.h"
#include "psb_drv.h"

static void mid_get_fuse_settings(struct drm_device *dev)
{

@@ -16,6 +16,7 @@
 * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
 *
 **************************************************************************/
struct drm_device;

extern int mid_chip_setup(struct drm_device *dev);


@@ -15,10 +15,12 @@
 * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
 *
 **************************************************************************/
#include <drm/drmP.h>

#include <linux/highmem.h>

#include "mmu.h"
#include "psb_drv.h"
#include "psb_reg.h"
#include "mmu.h"

/*
 * Code for the SGX MMU:
Some files were not shown because too many files have changed in this diff.