author    Linus Torvalds <torvalds@linux-foundation.org>  2024-07-17 18:30:10 -0700
committer Linus Torvalds <torvalds@linux-foundation.org>  2024-07-17 18:30:10 -0700
commit    b1bc554e009e3aeed7e4cfd2e717c7a34a98c683 (patch)
tree      db092dd7887e732588250f2c2f932e1bfc3f87a2 /Documentation/admin-guide
parent    0ffb8a4c96e55ecf0e572aec1a0220af3da84e22 (diff)
parent    68a72104cbcf38ad16500216e213fa4eb21c4be2 (diff)
Merge tag 'media/v6.11-1' of git://git.kernel.org/pub/scm/linux/kernel/git/mchehab/linux-media
Pull media updates from Mauro Carvalho Chehab:

 - New sensor drivers: gc05a2, gc08a3 and imx283

 - New serializer/deserializer drivers: max96714 and max96717

 - New JPEG encoder driver: e5010

 - Support for Raspberry Pi PiSP Backend (BE) ISP driver

 - Old documentation for av7110 driver removed, as a new version was
   added as Documentation/userspace-api/media/dvb/legacy*.rst

 - atompisp: Linux firmwares are now available, so drop firmware-related
   task from TODO and update firmware logic

 - The imx258 driver has gained several improvements

 - wave5 driver has gained support for HEVC decoding

 - em28xx gained support for MyGica UTV3

 - av7110 budget-patch driver removed

 - Lots of other cleanups, improvements and fixes

* tag 'media/v6.11-1' of git://git.kernel.org/pub/scm/linux/kernel/git/mchehab/linux-media: (301 commits)
  media: raspberrypi: Switch to remove_new
  media: uapi: pisp_be_config: Add extra config fields
  media: uapi: pisp_be_config: Re-sort pisp_be_tiles_config
  media: uapi: pisp_common: Capitalize all macros
  media: uapi: pisp_common: Add 32 bpp format test
  media: uapi: pisp_be_config: Drop BIT() from uAPI
  media: stm32: dcmipp: correct error handling in dcmipp_create_subdevs
  media: atomisp: Fix spelling mistakes in sh_css_sp.c
  media: atomisp: Fix spelling mistake in ia_css_debug.c
  media: atomisp: Fix spelling mistake in hmm_bo.c
  media: atomisp: Fix spelling mistake in ia_css_eed1_8.host.c
  media: atomisp: Fix spelling mistake in sh_css_internal.h
  media: atomisp: Fix spelling mistake "pipline" -> "pipeline"
  media: atomisp: Remove unused GPIO related defines and APIs
  media: atomisp: Replace COMPILATION_ERROR_IF() by static_assert()
  media: atomisp: Clean up unused macros from math_support.h
  media: atomisp: csi2-bridge: Add DMI quirk for OV5693 on Xiaomi Mipad2
  media: atomisp: Update TODO
  media: atomisp: Prefix firmware paths with "intel/ipu/"
  media: atomisp: Remove firmware_name module parameter
  ...
Diffstat (limited to 'Documentation/admin-guide')
-rw-r--r--  Documentation/admin-guide/media/em28xx-cardlist.rst      |   8
-rw-r--r--  Documentation/admin-guide/media/ipu6-isys.rst            |  14
-rw-r--r--  Documentation/admin-guide/media/raspberrypi-pisp-be.dot  |  20
-rw-r--r--  Documentation/admin-guide/media/raspberrypi-pisp-be.rst  | 109
-rw-r--r--  Documentation/admin-guide/media/tuner-cardlist.rst       |   2
-rw-r--r--  Documentation/admin-guide/media/v4l-drivers.rst          |   1
-rw-r--r--  Documentation/admin-guide/media/vivid.rst                | 185
7 files changed, 212 insertions, 127 deletions
diff --git a/Documentation/admin-guide/media/em28xx-cardlist.rst b/Documentation/admin-guide/media/em28xx-cardlist.rst
index ace65718ea22..7dac07986d91 100644
--- a/Documentation/admin-guide/media/em28xx-cardlist.rst
+++ b/Documentation/admin-guide/media/em28xx-cardlist.rst
@@ -438,3 +438,11 @@ EM28xx cards list
- MyGica iGrabber
- em2860
- 1f4d:1abe
+ * - 106
+ - Hauppauge USB QuadHD ATSC
+ - em28274
+ - 2040:846d
+ * - 107
+ - MyGica UTV3 Analog USB2.0 TV Box
+ - em2860
+ - eb1a:2860
diff --git a/Documentation/admin-guide/media/ipu6-isys.rst b/Documentation/admin-guide/media/ipu6-isys.rst
index 0721e920b5e6..d05086824a74 100644
--- a/Documentation/admin-guide/media/ipu6-isys.rst
+++ b/Documentation/admin-guide/media/ipu6-isys.rst
@@ -135,16 +135,16 @@ sensor ov2740 on Lenovo X1 Yoga laptop.
.. code-block:: none
media-ctl -l "\"ov2740 14-0036\":0 -> \"Intel IPU6 CSI2 1\":0[1]"
- media-ctl -l "\"Intel IPU6 CSI2 1\":1 -> \"Intel IPU6 ISYS Capture 0\":0[5]"
- media-ctl -l "\"Intel IPU6 CSI2 1\":2 -> \"Intel IPU6 ISYS Capture 1\":0[5]"
+ media-ctl -l "\"Intel IPU6 CSI2 1\":1 -> \"Intel IPU6 ISYS Capture 0\":0[1]"
+ media-ctl -l "\"Intel IPU6 CSI2 1\":2 -> \"Intel IPU6 ISYS Capture 1\":0[1]"
# set routing
- media-ctl -v -R "\"Intel IPU6 CSI2 1\" [0/0->1/0[1],0/1->2/1[1]]"
+ media-ctl -R "\"Intel IPU6 CSI2 1\" [0/0->1/0[1],0/1->2/1[1]]"
- media-ctl -v "\"Intel IPU6 CSI2 1\":0/0 [fmt:SGRBG10/1932x1092]"
- media-ctl -v "\"Intel IPU6 CSI2 1\":0/1 [fmt:GENERIC_8/97x1]"
- media-ctl -v "\"Intel IPU6 CSI2 1\":1/0 [fmt:SGRBG10/1932x1092]"
- media-ctl -v "\"Intel IPU6 CSI2 1\":2/1 [fmt:GENERIC_8/97x1]"
+ media-ctl -V "\"Intel IPU6 CSI2 1\":0/0 [fmt:SGRBG10/1932x1092]"
+ media-ctl -V "\"Intel IPU6 CSI2 1\":0/1 [fmt:GENERIC_8/97x1]"
+ media-ctl -V "\"Intel IPU6 CSI2 1\":1/0 [fmt:SGRBG10/1932x1092]"
+ media-ctl -V "\"Intel IPU6 CSI2 1\":2/1 [fmt:GENERIC_8/97x1]"
CAPTURE_DEV=$(media-ctl -e "Intel IPU6 ISYS Capture 0")
./yavta --data-prefix -c100 -n5 -I -s1932x1092 --file=/tmp/frame-#.bin \
diff --git a/Documentation/admin-guide/media/raspberrypi-pisp-be.dot b/Documentation/admin-guide/media/raspberrypi-pisp-be.dot
new file mode 100644
index 000000000000..55671dc1d443
--- /dev/null
+++ b/Documentation/admin-guide/media/raspberrypi-pisp-be.dot
@@ -0,0 +1,20 @@
+digraph board {
+ rankdir=TB
+ n00000001 [label="{{<port0> 0 | <port1> 1 | <port2> 2 | <port7> 7} | pispbe\n | {<port3> 3 | <port4> 4 | <port5> 5 | <port6> 6}}", shape=Mrecord, style=filled, fillcolor=green]
+ n00000001:port3 -> n0000001c [style=bold]
+ n00000001:port4 -> n00000022 [style=bold]
+ n00000001:port5 -> n00000028 [style=bold]
+ n00000001:port6 -> n0000002e [style=bold]
+ n0000000a [label="pispbe-input\n/dev/video0", shape=box, style=filled, fillcolor=yellow]
+ n0000000a -> n00000001:port0 [style=bold]
+ n00000010 [label="pispbe-tdn_input\n/dev/video1", shape=box, style=filled, fillcolor=yellow]
+ n00000010 -> n00000001:port1 [style=bold]
+ n00000016 [label="pispbe-stitch_input\n/dev/video2", shape=box, style=filled, fillcolor=yellow]
+ n00000016 -> n00000001:port2 [style=bold]
+ n0000001c [label="pispbe-output0\n/dev/video3", shape=box, style=filled, fillcolor=yellow]
+ n00000022 [label="pispbe-output1\n/dev/video4", shape=box, style=filled, fillcolor=yellow]
+ n00000028 [label="pispbe-tdn_output\n/dev/video5", shape=box, style=filled, fillcolor=yellow]
+ n0000002e [label="pispbe-stitch_output\n/dev/video6", shape=box, style=filled, fillcolor=yellow]
+ n00000034 [label="pispbe-config\n/dev/video7", shape=box, style=filled, fillcolor=yellow]
+ n00000034 -> n00000001:port7 [style=bold]
+}
diff --git a/Documentation/admin-guide/media/raspberrypi-pisp-be.rst b/Documentation/admin-guide/media/raspberrypi-pisp-be.rst
new file mode 100644
index 000000000000..0fcf46f26276
--- /dev/null
+++ b/Documentation/admin-guide/media/raspberrypi-pisp-be.rst
@@ -0,0 +1,109 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+=========================================================
+Raspberry Pi PiSP Back End Memory-to-Memory ISP (pisp-be)
+=========================================================
+
+The PiSP Back End
+=================
+
+The PiSP Back End is a memory-to-memory Image Signal Processor (ISP) which reads
+image data from DRAM memory and performs image processing as specified by the
+application through the parameters in a configuration buffer, before writing
+pixel data back to memory through two distinct output channels.
+
+The ISP registers and programming model are documented in the `Raspberry Pi
+Image Signal Processor (PiSP) Specification document`_.
+
+The PiSP Back End ISP processes images in tiles. The handling of image
+tessellation and the computation of low-level configuration parameters is
+realized by a free software library called `libpisp
+<https://github.com/raspberrypi/libpisp>`_.
+
+The full image processing pipeline, which involves capturing RAW Bayer data from
+an image sensor through a MIPI CSI-2 compatible capture interface, storing it in
+DRAM memory and processing it in the PiSP Back End to obtain images usable by an
+application, is implemented in `libcamera <https://libcamera.org>`_ as part of
+the Raspberry Pi platform support.
+
+The pisp-be driver
+==================
+
+The Raspberry Pi PiSP Back End (pisp-be) driver is located under
+drivers/media/platform/raspberrypi/pisp-be. It uses the `V4L2 API` to register
+a number of video capture and output devices, and the `V4L2 subdev API` to
+register a subdevice for the ISP that connects the video devices in a single
+media graph realized using the `Media Controller (MC) API`.
+
+The media topology registered by the `pisp-be` driver is represented below:
+
+.. _pisp-be-topology:
+
+.. kernel-figure:: raspberrypi-pisp-be.dot
+ :alt: Diagram of the default media pipeline topology
+ :align: center
+
+
+The media graph registers the following video device nodes:
+
+- pispbe-input: output device for images to be submitted to the ISP for
+ processing.
+- pispbe-tdn_input: output device for temporal denoise.
+- pispbe-stitch_input: output device for image stitching (HDR).
+- pispbe-output0: first capture device for processed images.
+- pispbe-output1: second capture device for processed images.
+- pispbe-tdn_output: capture device for temporal denoise.
+- pispbe-stitch_output: capture device for image stitching (HDR).
+- pispbe-config: output device for ISP configuration parameters.
+
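+For a quick overview of the registered topology, the media graph can be printed
+with `media-ctl` (the media device node below is an assumption; adjust it to
+the device created on your system):
+
+.. code-block:: none
+
+    # Print the pisp-be media graph, assuming it is registered as /dev/media0
+    media-ctl -d /dev/media0 -p
+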
+pispbe-input
+------------
+
+Images to be processed by the ISP are queued to the `pispbe-input` output device
+node. For a list of image formats supported as input to the ISP, refer to the
+`Raspberry Pi Image Signal Processor (PiSP) Specification document`_.
+
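+As a purely illustrative sketch (the device node and the pixel format below
+are assumptions, not requirements of the driver), the input image format could
+be configured with:
+
+.. code-block:: none
+
+    # Assume pispbe-input is /dev/video0; the Bayer format is only a placeholder
+    v4l2-ctl -d /dev/video0 --set-fmt-video-out=width=1920,height=1080,pixelformat=RGGB
+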
+pispbe-tdn_input, pispbe-tdn_output
+-----------------------------------
+
+The `pispbe-tdn_input` output video device receives images to be processed by
+the temporal denoise block which are captured from the `pispbe-tdn_output`
+capture video device. Userspace is responsible for maintaining queues on both
+devices, and ensuring that buffers completed on the output are queued to the
+input.
+
+pispbe-stitch_input, pispbe-stitch_output
+-----------------------------------------
+
+To realize HDR (high dynamic range) image processing the image stitching and
+tonemapping blocks are used. The `pispbe-stitch_output` writes images to memory
+and the `pispbe-stitch_input` receives the previously written frame to process
+it along with the current input image. Userspace is responsible for maintaining
+queues on both devices, and ensuring that buffers completed on the output are
+queued to the input.
+
+pispbe-output0, pispbe-output1
+------------------------------
+
+The two capture devices write to memory the pixel data as processed by the ISP.
+
+pispbe-config
+-------------
+
+The `pispbe-config` output video device receives a buffer of configuration
+parameters that define the desired image processing to be performed by the ISP.
+
+The format of the ISP configuration parameters is defined by the
+:c:type:`pisp_be_tiles_config` C structure and the meaning of each parameter is
+described in the `Raspberry Pi Image Signal Processor (PiSP) Specification
+document`_.
+
+ISP configuration
+=================
+
+The ISP configuration is described solely by the content of the parameters
+buffer. The only parameter that userspace needs to configure using the V4L2 API
+is the image format on the output and capture video devices, which is used to
+validate the content of the parameters buffer.
+
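+As an illustration of this step only (the device node and the pixel format are
+assumptions, not mandated by the driver), the capture format of `pispbe-output0`
+could be set with:
+
+.. code-block:: none
+
+    # Assume pispbe-output0 is /dev/video3; the format is only a placeholder
+    v4l2-ctl -d /dev/video3 --set-fmt-video=width=1920,height=1080,pixelformat=YUYV
+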
+.. _Raspberry Pi Image Signal Processor (PiSP) Specification document: https://datasheets.raspberrypi.com/camera/raspberry-pi-image-signal-processor-specification.pdf
diff --git a/Documentation/admin-guide/media/tuner-cardlist.rst b/Documentation/admin-guide/media/tuner-cardlist.rst
index 362617c59c5d..65ecf48ddf24 100644
--- a/Documentation/admin-guide/media/tuner-cardlist.rst
+++ b/Documentation/admin-guide/media/tuner-cardlist.rst
@@ -97,4 +97,6 @@ Tuner number Card name
89 Sony BTF-PG472Z PAL/SECAM
90 Sony BTF-PK467Z NTSC-M-JP
91 Sony BTF-PB463Z NTSC-M
+92 Silicon Labs Si2157 tuner
+93 Tena TNF931D-DFDR1
============ =====================================================
diff --git a/Documentation/admin-guide/media/v4l-drivers.rst b/Documentation/admin-guide/media/v4l-drivers.rst
index 4120eded9a13..b6af448b9fe9 100644
--- a/Documentation/admin-guide/media/v4l-drivers.rst
+++ b/Documentation/admin-guide/media/v4l-drivers.rst
@@ -23,6 +23,7 @@ Video4Linux (V4L) driver-specific documentation
omap4_camera
philips
qcom_camss
+ raspberrypi-pisp-be
rcar-fdp1
rkisp1
saa7134
diff --git a/Documentation/admin-guide/media/vivid.rst b/Documentation/admin-guide/media/vivid.rst
index b6f658c0997e..1306f19ecb5a 100644
--- a/Documentation/admin-guide/media/vivid.rst
+++ b/Documentation/admin-guide/media/vivid.rst
@@ -302,6 +302,15 @@ all configurable using the following module options:
- 0: forbid hints
- 1: allow hints
+- supports_requests:
+
+ specifies if the device should support the Request API. There are
+ three possible values, default is 1:
+
+ - 0: no request
+ - 1: supports requests
+ - 2: requires requests
+
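+  For example, a vivid instance that requires the use of requests can be
+  created by loading the module as follows (a minimal sketch; combine with
+  other module options as needed):
+
+  .. code-block:: none
+
+      modprobe vivid supports_requests=2
+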
Taken together, all these module options allow you to precisely customize
the driver behavior and test your application with all sorts of permutations.
It is also very suitable to emulate hardware that is not yet available, e.g.
@@ -313,10 +322,10 @@ Video Capture
This is probably the most frequently used feature. The video capture device
can be configured by using the module options num_inputs, input_types and
-ccs_cap_mode (see section 1 for more detailed information), but by default
-four inputs are configured: a webcam, a TV tuner, an S-Video and an HDMI
-input, one input for each input type. Those are described in more detail
-below.
+ccs_cap_mode (see "Configuring the driver" for more detailed information),
+but by default four inputs are configured: a webcam, a TV tuner, an S-Video
+and an HDMI input, one input for each input type. Those are described in more
+detail below.
Special attention has been given to the rate at which new frames become
available. The jitter will be around 1 jiffie (that depends on the HZ
@@ -434,10 +443,10 @@ Video Output
------------
The video output device can be configured by using the module options
-num_outputs, output_types and ccs_out_mode (see section 1 for more detailed
-information), but by default two outputs are configured: an S-Video and an
-HDMI input, one output for each output type. Those are described in more detail
-below.
+num_outputs, output_types and ccs_out_mode (see "Configuring the driver"
+for more detailed information), but by default two outputs are configured:
+an S-Video and an HDMI output, one output for each output type. Those are
+described in more detail below.
Like with video capture the framerate is also exact in the long term.
@@ -1011,11 +1020,6 @@ Digital Video Controls
affects the reported colorspace since DVI_D outputs will always use
sRGB.
-- Display Present:
-
- sets the presence of a "display" on the HDMI output. This affects
- the tx_edid_present, tx_hotplug and tx_rxsense controls.
-
FM Radio Receiver Controls
~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -1130,35 +1134,34 @@ Metadata Capture Controls
if set, then the generated metadata stream contains Source Clock information.
-Video, VBI and RDS Looping
---------------------------
-The vivid driver supports looping of video output to video input, VBI output
-to VBI input and RDS output to RDS input. For video/VBI looping this emulates
-as if a cable was hooked up between the output and input connector. So video
-and VBI looping is only supported between S-Video and HDMI inputs and outputs.
-VBI is only valid for S-Video as it makes no sense for HDMI.
+Video, Sliced VBI and HDMI CEC Looping
+--------------------------------------
-Since radio is wireless this looping always happens if the radio receiver
-frequency is close to the radio transmitter frequency. In that case the radio
-transmitter will 'override' the emulated radio stations.
-
-Looping is currently supported only between devices created by the same
-vivid driver instance.
+Video Looping functionality is supported for devices created by the same
+vivid driver instance, as well as across multiple instances of the vivid driver.
+The vivid driver supports looping of video and Sliced VBI data between an S-Video output
+and an S-Video input. It also supports looping of video and HDMI CEC data between an
+HDMI output and an HDMI input.
+To enable looping, set the 'HDMI/S-Video XXX-N Is Connected To' control(s) to select
+whether an input uses the Test Pattern Generator, or is disconnected, or is connected
+to an output. An input can be connected to an output from any vivid instance.
+The inputs and outputs are numbered XXX-N where XXX is the vivid instance number
+(see module option n_devs). If there is only one vivid instance (the default), then
+XXX will be 000. And N is the Nth S-Video/HDMI input or output of that instance.
+If vivid is loaded without module options, then you can connect the S-Video 000-0 input
+to the S-Video 000-0 output, or the HDMI 000-0 input to the HDMI 000-0 output.
+This is the equivalent of connecting or disconnecting a cable between an input and an
+output in a physical device.
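+
+For example, the connector controls can be listed and set with v4l2-ctl. The
+control name and the menu index below are assumptions; use the names and
+values reported on your system:
+
+.. code-block:: none
+
+    # List all controls, including the 'Is Connected To' connector controls
+    v4l2-ctl -d /dev/video0 -L
+    # Connect the HDMI 000-0 input to an output (name and index are assumptions)
+    v4l2-ctl -d /dev/video0 -c hdmi_000_0_is_connected_to=2
+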
-Video and Sliced VBI looping
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+If an 'HDMI/S-Video XXX-N Is Connected To' control selects an output, then the video
+output will be looped to the video input provided that:
-The way to enable video/VBI looping is currently fairly crude. A 'Loop Video'
-control is available in the "Vivid" control class of the video
-capture and VBI capture devices. When checked the video looping will be enabled.
-Once enabled any video S-Video or HDMI input will show a static test pattern
-until the video output has started. At that time the video output will be
-looped to the video input provided that:
+- the currently selected input matches the input indicated by the control name.
-- the input type matches the output type. So the HDMI input cannot receive
- video from the S-Video output.
+- in the vivid instance of the output connector, the currently selected output matches
+ the output indicated by the control's value.
- the video resolution of the video input must match that of the video output.
So it is not possible to loop a 50 Hz (720x576) S-Video output to a 60 Hz
@@ -1185,6 +1188,8 @@ looped to the video input provided that:
"DV Timings Signal Mode" for the HDMI input should be configured so that a
valid signal is passed to the video input.
+If any condition is not valid, then the 'Noise' test pattern is shown.
+
The framerates do not have to match, although this might change in the future.
By default you will see the OSD text superimposed on top of the looped video.
@@ -1198,17 +1203,26 @@ and WSS (50 Hz formats) VBI data is looped. Teletext VBI data is not looped.
Radio & RDS Looping
-~~~~~~~~~~~~~~~~~~~
-
-As mentioned in section 6 the radio receiver emulates stations are regular
-frequency intervals. Depending on the frequency of the radio receiver a
-signal strength value is calculated (this is returned by VIDIOC_G_TUNER).
-However, it will also look at the frequency set by the radio transmitter and
-if that results in a higher signal strength than the settings of the radio
-transmitter will be used as if it was a valid station. This also includes
-the RDS data (if any) that the transmitter 'transmits'. This is received
-faithfully on the receiver side. Note that when the driver is loaded the
-frequencies of the radio receiver and transmitter are not identical, so
+-------------------
+
+The vivid driver supports looping of RDS output to RDS input.
+
+Since radio is wireless this looping always happens if the radio receiver
+frequency is close to the radio transmitter frequency. In that case the radio
+transmitter will 'override' the emulated radio stations.
+
+RDS looping is currently supported only between devices created by the same
+vivid driver instance.
+
+As mentioned in the "Radio Receiver" section, the radio receiver emulates
+stations at regular frequency intervals. Depending on the frequency of the
+radio receiver a signal strength value is calculated (this is returned by
+VIDIOC_G_TUNER). However, it will also look at the frequency set by the radio
+transmitter and, if that results in a higher signal strength, the settings
+of the radio transmitter will be used as if it were a valid station. This also
+includes the RDS data (if any) that the transmitter 'transmits'. This is
+received faithfully on the receiver side. Note that when the driver is loaded
+the frequencies of the radio receiver and transmitter are not identical, so
initially no looping takes place.
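+
+For instance, matching the two frequencies is enough to start the looping. The
+device nodes below are assumptions; check which /dev/radioX node is the
+receiver and which is the transmitter on your system:
+
+.. code-block:: none
+
+    v4l2-ctl -d /dev/radio1 --set-freq=95.0   # emulated transmitter
+    v4l2-ctl -d /dev/radio0 --set-freq=95.0   # emulated receiver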
@@ -1218,8 +1232,8 @@ Cropping, Composing, Scaling
This driver supports cropping, composing and scaling in any combination. Normally
which features are supported can be selected through the Vivid controls,
but it is also possible to hardcode it when the module is loaded through the
-ccs_cap_mode and ccs_out_mode module options. See section 1 on the details of
-these module options.
+ccs_cap_mode and ccs_out_mode module options. See "Configuring the driver" for
+the details of these module options.
This allows you to test your application for all these variations.
@@ -1260,7 +1274,8 @@ is set, then the alpha component is only used for the color red and set to
The driver has to be configured to support the multiplanar formats. By default
the driver instances are single-planar. This can be changed by setting the
-multiplanar module option, see section 1 for more details on that option.
+multiplanar module option, see "Configuring the driver" for more details on that
+option.
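+
+For example, assuming that the value 2 selects the multiplanar API for an
+instance (see the multiplanar option description in "Configuring the driver"),
+a multiplanar test device can be created with:
+
+.. code-block:: none
+
+    modprobe vivid multiplanar=2
+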
If the driver instance is using the multiplanar formats/API, then the first
single planar format (YUYV) and the multiplanar NV16M and NV61M formats the
@@ -1270,74 +1285,6 @@ data_offset to be non-zero, so this is a useful feature for testing applications
Video output will also honor any data_offset that the application set.
-Capture Overlay
----------------
-
-Note: capture overlay support is implemented primarily to test the existing
-V4L2 capture overlay API. In practice few if any GPUs support such overlays
-anymore, and neither are they generally needed anymore since modern hardware
-is so much more capable. By setting flag 0x10000 in the node_types module
-option the vivid driver will create a simple framebuffer device that can be
-used for testing this API. Whether this API should be used for new drivers is
-questionable.
-
-This driver has support for a destructive capture overlay with bitmap clipping
-and list clipping (up to 16 rectangles) capabilities. Overlays are not
-supported for multiplanar formats. It also honors the struct v4l2_window field
-setting: if it is set to FIELD_TOP or FIELD_BOTTOM and the capture setting is
-FIELD_ALTERNATE, then only the top or bottom fields will be copied to the overlay.
-
-The overlay only works if you are also capturing at that same time. This is a
-vivid limitation since it copies from a buffer to the overlay instead of
-filling the overlay directly. And if you are not capturing, then no buffers
-are available to fill.
-
-In addition, the pixelformat of the capture format and that of the framebuffer
-must be the same for the overlay to work. Otherwise VIDIOC_OVERLAY will return
-an error.
-
-In order to really see what it going on you will need to create two vivid
-instances: the first with a framebuffer enabled. You configure the capture
-overlay of the second instance to use the framebuffer of the first, then
-you start capturing in the second instance. For the first instance you setup
-the output overlay for the video output, turn on video looping and capture
-to see the blended framebuffer overlay that's being written to by the second
-instance. This setup would require the following commands:
-
-.. code-block:: none
-
- $ sudo modprobe vivid n_devs=2 node_types=0x10101,0x1
- $ v4l2-ctl -d1 --find-fb
- /dev/fb1 is the framebuffer associated with base address 0x12800000
- $ sudo v4l2-ctl -d2 --set-fbuf fb=1
- $ v4l2-ctl -d1 --set-fbuf fb=1
- $ v4l2-ctl -d0 --set-fmt-video=pixelformat='AR15'
- $ v4l2-ctl -d1 --set-fmt-video-out=pixelformat='AR15'
- $ v4l2-ctl -d2 --set-fmt-video=pixelformat='AR15'
- $ v4l2-ctl -d0 -i2
- $ v4l2-ctl -d2 -i2
- $ v4l2-ctl -d2 -c horizontal_movement=4
- $ v4l2-ctl -d1 --overlay=1
- $ v4l2-ctl -d0 -c loop_video=1
- $ v4l2-ctl -d2 --stream-mmap --overlay=1
-
-And from another console:
-
-.. code-block:: none
-
- $ v4l2-ctl -d1 --stream-out-mmap
-
-And yet another console:
-
-.. code-block:: none
-
- $ qv4l2
-
-and start streaming.
-
-As you can see, this is not for the faint of heart...
-
-
Output Overlay
--------------
@@ -1405,8 +1352,6 @@ Just as a reminder and in no particular order:
- Add ARGB888 overlay support: better testing of the alpha channel
- Improve pixel aspect support in the tpg code by passing a real v4l2_fract
- Use per-queue locks and/or per-device locks to improve throughput
-- Add support to loop from a specific output to a specific input across
- vivid instances
- The SDR radio should use the same 'frequencies' for stations as the normal
radio receiver, and give back noise if the frequency doesn't match up with
a station frequency