TSN Documentation Project for Linux*¶
Introduction¶
Welcome to the Time-Sensitive Networking (TSN) Documentation project for Linux*! This project provides a set of hands-on tutorials to help you get started with TSN on Linux systems. This project focuses only on the TSN features provided by mainline projects from the Linux ecosystem such as Linux kernel, Linux PTP, ALSA, and GStreamer.
For the past few years, multiple TSN features have been enabled in the upstream Linux ecosystem [1] [2] [3] [4] [5] [6]. This project helps developers and integrators get started with those TSN features by providing hands-on tutorials that leverage the support available in the upstream Linux ecosystem. This documentation project stitches together information otherwise scattered throughout cover letters and project-specific documentation.
The Linux ecosystem supports several TSN features such as Credit-Based Shaper (CBS, formerly Qav), Enhancements for Scheduled Traffic (EST, formerly Qbv), Generalized Precision Time Protocol (gPTP), and Audio/Video Transport Protocol (AVTP). It also supports the LaunchTime feature present in some NICs, such as the Intel(R) Ethernet Controller I210, which enables user applications to offload the scheduling of packet transmission times to the controller.
The features above are supported by different projects: CBS, EST, and LaunchTime are supported by the Linux kernel via the Queueing Disciplines (qdiscs), gPTP is supported by Linux PTP project, and AVTP is supported by Libavtp project. Audio/Video Bridging (AVB) Talker/Listener use-cases are supported by ALSA and GStreamer AVTP plugins.
Organization¶
The documentation provided in this project is organized in self-contained tutorials that are listed below.
Getting Started with AVB on Linux*¶
Introduction¶
Audio/Video Bridging (AVB) is a set of IEEE standards enabling time-sensitive Audio/Video applications on Local Area Networks (LANs). AVB provides time synchronization, bounded transmission latency, resource management, and application interoperability. Since these features can additionally be leveraged by non-AV systems, IEEE expanded its scope and rebranded AVB as Time-Sensitive Networking (TSN).
For the past few years, several TSN building blocks have been developed in the upstream Linux* ecosystem, such as generalized Precision Time Protocol (gPTP) support on Linux PTP, TSN Queueing Disciplines (qdiscs), device driver support, and Libavtp project. On top of these building blocks, Audio Video Transport Protocol (AVTP) plugins were developed for ALSA and GStreamer frameworks to enable AVB on Linux systems. This tutorial focuses on leveraging these plugins to implement an AVB application.
The Advanced Linux Sound Architecture (ALSA) is a low-level framework that provides audio functionality on Linux. It comprises sound card device drivers, kernel-user interfaces, and user-space libraries and utility tools. GStreamer, on the other hand, is a higher-level framework that provides multimedia functionalities such as encoding, multiplexing, filtering, and rendering to applications.
This tutorial discusses how to set up the system and get started with AVB talker/listener applications. By the end of this tutorial, you will have two endpoints, a TSN Talker and a TSN Listener, configured to transmit audio and video streams with bounded latency, and will be able to run some AVB sample applications.
System Requirements¶
This tutorial has been validated on two desktop machines with Intel(R) Ethernet Controller I210 connected back-to-back and Linux kernel version 4.19.
Plugins Installation¶
Depending on your use case, install the plugin you need. Follow these steps to install the ALSA and GStreamer AVTP plugins on both machines.
Both plugins are built from source, which first requires installing their build dependencies. These are packaged by most distros; on Ubuntu*, for example, they can be installed with:
sudo apt install build-essential git meson flex bison glib2.0 \
libcmocka-dev autoconf libtool autopoint libncurses-dev \
libpulse-dev
In the instructions below, all plugin artifacts are installed under /usr/local, so make sure your environment variables account for that prefix:
export PATH=/usr/local/bin:/usr/local/sbin:$PATH
export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH
export ACLOCAL_PATH=/usr/local/share/aclocal/:$ACLOCAL_PATH
export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig:$PKG_CONFIG_PATH
Both the ALSA and GStreamer plugins depend on Libavtp, so you must install it on your system. Libavtp is an open source implementation of the Audio Video Transport Protocol (AVTP) specified in IEEE 1722-2016. Its source code can be found at github.com/AVnu/libavtp. Check out the code and follow the instructions in the README file to build and install it.
ALSA Plugin¶
The ALSA framework is the de facto framework that provides audio functionality in the Linux system. While it comprises both kernel and user space components, the AVB examples covered in this tutorial only depend on the user space components which are the following:
- alsa-lib provides the core libraries
- alsa-utils provides utility tools for playing back, capturing, and mixing audio samples
- alsa-plugins provides assorted plugins to the ALSA framework
The AVTP plugin has been part of alsa-plugins since version 1.1.8; this tutorial uses version 1.1.9. Follow these steps to get, build, and install the ALSA artifacts provided by those projects.
Step 1: Start with the core libraries from alsa-lib project:
git clone https://github.com/alsa-project/alsa-lib.git
cd alsa-lib/
git checkout v1.1.9
autoreconf -i
./configure --prefix=/usr/local
make
sudo make install
Step 2: Install the utility tools from alsa-utils project:
git clone https://github.com/alsa-project/alsa-utils.git
cd alsa-utils/
git checkout v1.1.9
autoreconf -i
./configure --prefix=/usr/local
make
sudo make install
Step 3: Install the plugins from alsa-plugins project:
git clone https://github.com/alsa-project/alsa-plugins.git
cd alsa-plugins/
git checkout v1.1.9
autoreconf -i
./configure --prefix=/usr/local
make
sudo make install
Step 4: Regenerate the shared library cache after manually installing libraries:
sudo ldconfig
GStreamer Plugin¶
By its own definition, GStreamer “is a library for constructing graphs of media-handling components”. It provides a pipeline, in which elements connect to one another and data is processed as it flows. Elements are provided by GStreamer plugins. The AVTP plugin is provided by the gst-plugins-bad module. As the AVTP plugin is not yet part of a GStreamer release, it is necessary to also build GStreamer core and gst-plugins-base from source.
Step 1: Install GStreamer core:
git clone https://gitlab.freedesktop.org/gstreamer/gstreamer.git
cd gstreamer
meson build --prefix=/usr/local
ninja -C build
sudo ninja -C build install
sudo setcap cap_net_raw,cap_net_admin+ep /usr/local/bin/gst-launch-1.0
The last command ensures the gst-launch-1.0 tool has permission to access the network at layer 2, as required by TSN applications.
Step 2: Install gst-plugins-base:
git clone https://gitlab.freedesktop.org/gstreamer/gst-plugins-base.git
cd gst-plugins-base
meson build --prefix=/usr/local
ninja -C build
sudo ninja -C build install
Step 3: Install gst-plugins-bad:
git clone https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad.git
cd gst-plugins-bad
meson build --prefix=/usr/local
ninja -C build
sudo ninja -C build install
Step 4: Regenerate the shared library cache after manually installing libraries:
sudo ldconfig
Step 5: Confirm that the GStreamer AVTP plugin has been successfully installed.
gst-inspect-1.0 avtp
The output contains standard information about the GStreamer AVTP plugin.
Step 6: Install additional GStreamer modules:
- gst-plugins-good provides basic elements used in this tutorial
- gst-plugins-ugly is needed if using software encoder x264enc
- gst-libav provides software decoders
- gst-vaapi provides VA-API encoders and decoders
These modules can be installed from your favorite distro packages. For instance, to install all of the above on Ubuntu, run:
sudo apt install gstreamer1.0-plugins-ugly gstreamer1.0-plugins-good \
gstreamer1.0-libav gstreamer1.0-vaapi
Step 7: To ensure GStreamer finds both the plugins installed from packages and those installed from source, add the system default plugin directory to the GStreamer plugin search path. For instance, on Ubuntu run:
export GST_PLUGIN_PATH="/usr/lib/x86_64-linux-gnu/gstreamer-1.0"
System Setup¶
To run an AVB application, configure the following:
- VLAN interface
- Time synchronization
- Qdiscs
VLAN Configuration¶
Since AVB streams are transmitted over Virtual LANs (VLANs), a VLAN interface is required on both hosts. The VLAN interface is created using the ip-link command from the iproute2 project, which is pre-installed on most Linux distributions.
This example transmits AVB streams on VLAN ID 5 and follows the priority mapping recommended by IEEE 802.1Q-2018. In this tutorial, the TSN-capable NIC is represented by the eth0 interface. Make sure to replace it with the interface name of the TSN-capable NIC in your system.
Run the following command to create the eth0.5 interface, which represents the VLAN interface in this tutorial:
sudo ip link add link eth0 name eth0.5 type vlan id 5 \
egress-qos-map 2:2 3:3
sudo ip link set eth0.5 up
The egress-qos-map 2:2 3:3 option maps Linux socket priorities 2 and 3 to VLAN PCP values 2 and 3, the values recommended by IEEE 802.1Q-2018 for SR classes B and A, respectively. For further information regarding VLAN in Linux, refer to Configuring VLAN Interfaces.
Qdiscs Configuration¶
The TSN control plane is implemented through the Linux Traffic Control (TC) subsystem. The transmission algorithms specified in the Forwarding and Queuing for Time-Sensitive Streams (FQTSS) chapter of IEEE 802.1Q-2018 are supported via TC Queuing Disciplines (qdiscs). Three qdiscs are required to set up an AVB system: MQPRIO, CBS, and ETF.
Follow these steps to configure the qdiscs:
Step 1: Add the MQPRIO qdisc to the root qdisc to expose hardware queues in the TC system. The command below configures MQPRIO for the Intel(R) Ethernet Controller I210 which has 4 transmission queues:
sudo tc qdisc add dev eth0 parent root handle 6666 mqprio \
num_tc 3 \
map 2 2 1 0 2 2 2 2 2 2 2 2 2 2 2 2 \
queues 1@0 1@1 2@2 \
hw 0
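The map argument is indexed by socket priority (0 through 15), and each entry gives the traffic class assigned to that priority; the queues argument then maps traffic classes to hardware queues (count@offset). The small helper below is illustrative only, not part of tc, and decodes the map used above:

```shell
# mqprio "map" used above: entry N is the traffic class for socket priority N
map="2 2 1 0 2 2 2 2 2 2 2 2 2 2 2 2"
tc_for_prio() { echo "$map" | cut -d' ' -f"$(($1 + 1))"; }
echo "prio 3 (SR class A) -> TC $(tc_for_prio 3)"  # TC 0 -> queue 0 (1@0)
echo "prio 2 (SR class B) -> TC $(tc_for_prio 2)"  # TC 1 -> queue 1 (1@1)
```

All other priorities fall into TC 2, which spans the two best-effort queues (2@2).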
Step 2: CBS qdisc configuration depends on the number of AVB streams as well as the stream features. This tutorial uses 2 streams with the following features:
- Stream A: SR class A, AVTP Compressed Video Format, H.264 profile High, 1920x1080, 30 fps.
- Stream B: SR class B, AVTP Audio Format, PCM 16-bit sample, 48 kHz, stereo, 12 frames per AVTPDU.
Configure the CBS qdiscs as below to reserve bandwidth to accommodate these streams:
sudo tc qdisc replace dev eth0 parent 6666:1 handle 7777 cbs \
idleslope 98688 sendslope -901312 hicredit 153 locredit -1389 \
offload 1
sudo tc qdisc replace dev eth0 parent 6666:2 handle 8888 cbs \
idleslope 3648 sendslope -996352 hicredit 12 locredit -113 \
offload 1
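As a sanity check, the class B idleslope above can be reproduced from the Stream B parameters. The sketch below assumes a 24-byte AAF AVTPDU header (per IEEE 1722-2016) and 42 bytes of per-frame on-wire overhead (Ethernet and VLAN headers, FCS, preamble, and interframe gap):

```shell
payload=$((12 * 2 * 2))            # 12 frames x 2 channels x 2 bytes per sample
avtpdu=$((24 + payload))           # AAF header + PCM payload, in bytes
pps=$((48000 / 12))                # AVTPDUs transmitted per second
idleslope=$(((avtpdu + 42) * 8 * pps / 1000))  # kbit/s, the unit tc expects
echo "idleslope=${idleslope}"      # idleslope=3648, as configured above
```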
Step 3: Configure the ETF qdiscs as children of CBS qdiscs.
sudo tc qdisc add dev eth0 parent 7777:1 etf \
clockid CLOCK_TAI \
delta 500000 \
offload
sudo tc qdisc add dev eth0 parent 8888:1 etf \
clockid CLOCK_TAI \
delta 500000 \
offload
For further information regarding TSN qdiscs configuration refer to Configuring TSN Qdiscs.
Time Synchronization¶
Both the ALSA and GStreamer plugins require the NIC's PTP Hardware Clock (PHC) and the system clock to be synchronized with the gPTP Grand Master. This is done with the Linux PTP tools, as follows:
Step 1: Synchronize the PHC with gPTP GM clock:
sudo ptp4l -i eth0 -f <linuxptp source dir>/configs/gPTP.cfg --step_threshold=1 -m
Step 2: The PHC runs in the TAI timescale, while the system clock runs in UTC. To keep the system clocks (CLOCK_REALTIME and CLOCK_TAI) consistent, configure the UTC-TAI offset in the system, as below:
sudo pmc -u -b 0 -t 1 "SET GRANDMASTER_SETTINGS_NP clockClass 248 \
clockAccuracy 0xfe offsetScaledLogVariance 0xffff \
currentUtcOffset 37 leap61 0 leap59 0 currentUtcOffsetValid 1 \
ptpTimescale 1 timeTraceable 1 frequencyTraceable 0 \
timeSource 0xa0"
Step 3: Synchronize the system clock with the PHC:
sudo phc2sys -w -m -s eth0 -c CLOCK_REALTIME --step_threshold=1 \
--transportSpecific=1
For further information regarding time synchronization, refer to Synchronizing Time with Linux* PTP.
AVB Audio Talker/Listener Examples¶
With the software installed and the system set up, you are ready to see AVB audio talker and listener applications in action. AVB audio streaming is supported by both the ALSA and GStreamer plugins.
Examples using ALSA Framework¶
The ALSA AVTP Audio Format (AAF) plugin is a PCM plugin that uses AAF AVTPDUs to transmit/receive audio data through a TSN network. The plugin enables any existing ALSA-based application to operate as AVB talker or listener.
- In playback mode, the plugin reads PCM samples from the audio buffer, encapsulates them into AVTPDUs, and transmits them to the network, mimicking a typical AVB Talker.
- In capture mode, the plugin receives AVTPDUs from the network, retrieves the PCM samples, and presents them (at AVTP presentation time) to the application for rendering, mimicking a typical AVB Listener.
Step 1: Add the AAF device to the ALSA configuration file (/etc/asound.conf) on both Talker and Listener hosts. The following configuration creates the AAF device according to the AVB audio stream described in Qdiscs Configuration. For a full description of AAF device configuration options, refer to ALSA AAF Plugin documentation.
Note: In the configuration file, replace the interface name eth0.5 with the VLAN interface you created in VLAN Configuration.
pcm.aaf0 {
type aaf
ifname eth0.5
addr 01:AA:AA:AA:AA:AA
prio 2
streamid AA:BB:CC:DD:EE:FF:000B
mtt 50000
time_uncertainty 1000
frames_per_pdu 12
ptime_tolerance 100
}
Step 2: Run the speaker-test tool from alsa-utils to implement the AVB talker application. The tool generates a tone which is transmitted through the network as an AVTP stream by the aaf0 device.
On the Talker host run:
sudo speaker-test -p 25000 -F S16_BE -c 2 -r 48000 -D aaf0
A quick explanation of the speaker-test arguments: -p configures the ALSA period size, -F sets the sample format, -c the number of channels, -r the sampling rate, and -D the ALSA device. For more details, check the speaker-test(1) manpage.
Step 3: While the AVB stream is being transmitted through the network, run the listener and play the stream back using the arecord and aplay tools from alsa-utils. These tools create a PCM loopback between two ALSA devices: here, the capture device is aaf0 and the playback device is default (usually the main sound card in the system).
On the listener host run:
sudo arecord -F 25000 -t raw -f S16_BE -c 2 -r 48000 -D aaf0 | \
aplay -F 25000 -t raw -f S16_BE -c 2 -r 48000 -D default
Result: You can hear the tone transmitted by the Talker in the speakers (or headphones) attached to the Listener host.
Troubleshooting¶
If no sound is heard:
- Ensure the volume is high enough.
If aplay fails with “Sample format non available”:
- Some sound cards do not support big endian formats. It's necessary to convert the PCM samples to little endian before pushing them to your sound card. This can be done by defining a converter device in /etc/asound.conf on the Listener, as shown below.
pcm.converter0 {
type linear
slave {
pcm default
format S16_LE
}
}
Use converter0 as playback device instead of default.
Examples using GStreamer Framework¶
The GStreamer AVTP plugin provides a set of elements that are arranged in a GStreamer pipeline to implement AVB talker and listener applications. These elements can be categorized as:
- payloaders: elements that encapsulate/decapsulate audio and video data into/from AVTPDUs. The plugin provides a payloader/depayloader pair for each supported AVTP format;
- sink: an element that receives AVTPDUs from upstream and sends them to the network;
- source: an element that receives AVTPDUs from the network and sends them upstream in the pipeline.
This example uses the gst-launch-1.0 tool to implement the AVB talker and listener applications.
Step 1: Run the following command to generate the AAF stream:
On the AVB talker:
gst-launch-1.0 clockselect. \( clock-id=realtime \
audiotestsrc samplesperbuffer=12 is-live=true ! \
audio/x-raw,format=S16BE,channels=2,rate=48000 ! \
avtpaafpay mtt=50000000 tu=1000000 streamid=0xAABBCCDDEEFF000B processing-deadline=0 ! \
avtpsink ifname=eth0.5 address=01:AA:AA:AA:AA:AA priority=2 processing-deadline=0 \)
In this command, clockselect defines a special GStreamer pipeline that enables you to select a clock for the pipeline via its clock-id property. Setting it to realtime selects CLOCK_REALTIME as the pipeline clock, while the rest of the command between the escaped parentheses describes the pipeline. The ! sign links two elements. Let's check what each element in the pipeline does:
- audiotestsrc generates a tone.
- audio/x-raw,format=S16BE,channels=2,rate=48000 is not a true element but a caps filter that defines the features of the audio samples audiotestsrc generates.
- avtpaafpay encapsulates audio samples into AAF AVTPDUs.
- avtpsink sends AVTPDUs to the network.
Note that AVTP-specific features, such as maximum transit time, time uncertainty, and stream ID, are set via the avtpaafpay element properties while network-specific features such as network interface and traffic priority are set via avtpsink element properties.
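These values map one-to-one to the ALSA AAF device options used earlier, with two presentational differences (an assumption based on the values used in this tutorial): the stream ID is written as a single 64-bit hexadecimal number instead of a colon-separated MAC address plus unique ID, and the times (mtt, tu) are given in nanoseconds rather than microseconds:

```shell
# ALSA AAF syntax -> GStreamer avtpaafpay syntax (illustrative only)
alsa_streamid="AA:BB:CC:DD:EE:FF:000B"
gst_streamid="0x$(printf '%s' "$alsa_streamid" | tr -d ':')"
alsa_mtt_us=50000
gst_mtt_ns=$((alsa_mtt_us * 1000))
echo "$gst_streamid $gst_mtt_ns"   # 0xAABBCCDDEEFF000B 50000000
```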
The processing-deadline property set above defines an overall processing latency for the pipeline. The payloader element takes it into consideration when calculating the AVTP presentation time. Note that the processing-deadline property from the payloader and sink elements should have the same value.
To learn about a specific element utilized in the pipeline above, run:
gst-inspect-1.0 <ELEMENT>
Step 2: While the AVB stream is being transmitted through the network, run the listener application to receive the stream and play it back.
On the AVB listener:
gst-launch-1.0 clockselect. \( clock-id=realtime \
avtpsrc ifname=eth0.5 address=01:AA:AA:AA:AA:AA ! \
queue max-size-buffers=0 max-size-time=0 ! \
avtpaafdepay streamid=0xAABBCCDDEEFF000B ! audioconvert ! autoaudiosink \)
The avtpsrc element receives AVTPDUs from the network and pushes them to the avtpaafdepay element, which extracts the audio samples. The autoaudiosink element automatically detects the default audio sink in the system and plays the stream back.
In the pipeline above:
- Using the queue element after avtpsrc ensures packet reception is not blocked in case any downstream element blocks the pipeline.
- Using the audioconvert element before autoaudiosink ensures the audio stream is automatically converted to a compatible stream configuration in case the playback device doesn’t support S16BE, stereo, 48 kHz.
Result: You hear the tone transmitted by the Talker in the speakers (or headphones) attached to the Listener host.
Troubleshooting¶
If no sound is heard, make sure the volume is high enough.
AVB Video Talker/Listener Example¶
AVB video is only supported by the GStreamer AVTP plugin. Similar to the GStreamer audio example, the AVB Video example also uses gst-launch-1.0 tool to implement AVB video talker and listener applications.
Step 1: Run the following command to generate the CVF stream on the AVB talker:
gst-launch-1.0 clockselect. \( clock-id=realtime \
videotestsrc is-live=true ! video/x-raw,width=720,height=480,framerate=30/1 ! \
clockoverlay ! vaapih264enc ! h264parse config-interval=-1 ! \
avtpcvfpay processing-deadline=20000000 mtt=2000000 tu=125000 streamid=0xAABBCCDDEEFF000A ! \
avtpsink ifname=eth0.5 priority=3 processing-deadline=20000000 \)
Similar to the audio talker pipeline, the videotestsrc element generates the video stream to transmit over AVTP. The clockoverlay element adds a wall-clock time on the top-left corner of the video (we use this information to check playback synchronization, more on this later). The vaapih264enc element encodes the stream into H.264 and the h264parse element parses it so the output capabilities are set correctly. The avtpcvfpay element then encapsulates it into CVF AVTPDUs which are finally transmitted by the avtpsink element. If vaapih264enc isn’t available in your system, you may use another H.264 encoder instead, such as x264enc.
Note that we set the config-interval=-1 property on h264parse to ensure the H.264 stream metadata is sent in-band, so the H.264 decoder run by the AVB listener application is able to decode the stream. Also note that we use a processing-deadline of 20 ms, as opposed to the 0 ms used in the audio pipeline. We chose this value because this pipeline is heavier on processing: generating and encoding video, adding overlays, and so on. The correct value for this property depends on the pipeline and the system it runs on.
Step 2: While the AVB stream is being transmitted through the network, run the listener and play it back.
On the AVB listener:
gst-launch-1.0 clockselect. \( clock-id=realtime \
avtpsrc ifname=eth0.5 ! avtpcvfdepay streamid=0xAABBCCDDEEFF000A ! \
queue max-size-bytes=0 max-size-buffers=0 max-size-time=0 ! \
vaapih264dec ! videoconvert ! clockoverlay halignment=right ! autovideosink \)
avtpsrc receives AVTPDUs from the network, avtpcvfdepay extracts the H.264 NAL units, vaapih264dec decodes the stream, clockoverlay adds a wall clock to the top-right corner of the video, and autovideosink automatically detects a video sink (e.g. X server) and renders the video stream. If vaapih264dec isn't available in your system, you may use another H.264 decoder instead, such as avdec_h264.
Results: The video is streamed by the talker and displayed on the listener screen.
Note that the clocks on the top left (talker clock) and top right (listener clock) may not be in perfect sync, due to network and pipeline latencies.
Troubleshooting¶
- If there is a delay when video playback starts on the listener, try starting the talker after the listener application. The delay usually happens because video can only be decoded once a keyframe is present in the stream; if the talker is started after the listener, the first frame received is already a keyframe. Should this remain an issue, check your encoder options to control the keyframe frequency.
Streaming From a File¶
The examples above use helper tools to synthesize the stream contents. However, real use cases usually involve reading the stream content from a file, such as a WAV file for audio or an MP4 file for video.
For the audio examples, use a WAV file that matches the PCM features of AVB Stream B (16-bit samples, stereo, 48 kHz).
The ALSA Way¶
Follow these steps to stream contents from a file, using the ALSA Framework:
Step 1: Convert the PCM samples within that file from little endian into big endian format, before pushing them to the AAF device. To achieve that, define a converter device and add it to the /etc/asound.conf file:
pcm.converter1 {
type linear
slave {
pcm aaf0
format S16_BE
}
}
Step 2: Use aplay to read PCM samples from the file and play them back in the converter device:
sudo aplay -F 12500 -D converter1 piano2.wav
Troubleshooting¶
If you try another WAV file and it does not work, make sure the CBS qdisc is adjusted to accommodate the stream features of that file.
The GStreamer Way¶
From a WAV File¶
To stream contents from a WAV file, use the filesrc element and run the command:
gst-launch-1.0 clockselect. \( clock-id=realtime \
filesrc location=piano2.wav ! wavparse ! audioconvert ! \
audiobuffersplit output-buffer-duration=12/48000 ! \
avtpaafpay mtt=50000000 tu=1000000 streamid=0xAABBCCDDEEFF000B processing-deadline=0 ! \
avtpsink ifname=eth0.5 address=01:AA:AA:AA:AA:AA priority=2 processing-deadline=0 \)
This example uses the wavparse element to demux the WAV file, the audioconvert element to handle any audio conversion needed, and the audiobuffersplit element to generate GstBuffers with 12 samples each.
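The output-buffer-duration=12/48000 fraction is simply 12 samples at 48 kHz, which keeps the payloader at 12 audio frames per AVTPDU, matching the ALSA configuration:

```shell
# duration of one 12-sample buffer at 48 kHz, in microseconds
echo $((12 * 1000000 / 48000))   # 250
```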
From an MP4 File¶
Run this command to generate a CVF stream from an MP4 file. The file used in this example can be downloaded here.
gst-launch-1.0 clockselect. \( clock-id=realtime \
filesrc location=sintel_trailer-480p.mp4 ! qtdemux ! h264parse config-interval=-1 ! \
avtpcvfpay processing-deadline=20000000 mtt=2000000 tu=125000 streamid=0xAABBCCDDEEFF000A ! \
avtpsink ifname=eth0.5 priority=3 processing-deadline=20000000 \)
This example uses qtdemux element to demultiplex the MP4 container and access the video data.
Both pipelines above have one caveat: gst-launch-1.0 does not provide a way to disable prerolling [1], so the timestamp of the first AVTPDU isn't set correctly. While this is not usually an issue, it can cause hiccups and delays when starting playback on the listener side. Writing an application with an appsrc element may allow sourcing the file without the preroll step.
Streaming From a Live Source¶
Generating an AVB stream from a live source, such as microphones or cameras, is another use case to consider.
The ALSA Way¶
Follow these steps to stream contents from a live source, using the ALSA Framework:
Step 1: To determine what device is the microphone in your system, run:
arecord -l
This command lists the capture devices detected in the system, providing information about each device alongside a card X (...) device Y label. Assuming card 1 (...) device 0 is the microphone device, move to Step 2.
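If you want to script this step, the card and device numbers can be extracted from the arecord -l output. The sample line below is hypothetical; real output depends on your hardware:

```shell
# hypothetical "arecord -l" line for a USB microphone
line="card 1: Mic [USB Microphone], device 0: USB Audio [USB Audio]"
card=$(printf '%s' "$line" | sed 's/^card \([0-9]*\):.*/\1/')
dev=$(printf '%s' "$line" | sed 's/.*device \([0-9]*\):.*/\1/')
echo "hw:${card},${dev}"   # hw:1,0
```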
Step 2: Use the arecord and aplay pair to capture PCM samples from the microphone device and loop them into the AAF device, as shown:
arecord -F 25000 -t raw -f S16_LE -c 2 -r 48000 -D hw:1,0 | \
sudo aplay -F 25000 -t raw -f S16_BE -c 2 -r 48000 -D aaf0
Troubleshooting¶
Some microphones do not supply big endian formats. In this case, convert the PCM samples to big endian before pushing them to the AAF plugin. This can be done as described in Streaming From a File. Remember to use the converter device as the playback device instead of aaf0.
The GStreamer Way¶
Follow these steps to stream contents from a live source, using the GStreamer Framework:
Step 1: To check which devices are known to GStreamer, and their properties (including the kind of output), use the gst-device-monitor-1.0 tool:
gst-device-monitor-1.0 Video/Source Audio/Source
This generates a list of all audio and video source devices GStreamer knows about, along with their properties. It includes brief tips on using them, such as gst-launch-1.0 v4l2src ! .... Use this information to create an appropriate pipeline.
Step 2.A: To use a microphone as the source of a pipeline stream, use alsasrc element as shown:
gst-launch-1.0 clockselect. \( clock-id=realtime \
alsasrc device=hw:1,0 ! audioconvert ! \
audio/x-raw,format=S16BE,channels=2,rate=48000 ! \
audiobuffersplit output-buffer-duration=12/48000 ! \
avtpaafpay mtt=50000000 tu=1000000 streamid=0xAABBCCDDEEFF000B processing-deadline=0 ! \
avtpsink ifname=eth0.5 address=01:AA:AA:AA:AA:AA priority=2 processing-deadline=0 \)
Step 2.B: To use a camera as the source of a pipeline stream for video, use the v4l2src element as shown:
gst-launch-1.0 clockselect. \( clock-id=realtime \
v4l2src ! videoconvert ! video/x-raw,width=720,height=480,framerate=30/1 ! \
vaapih264enc ! h264parse config-interval=1 ! \
avtpcvfpay processing-deadline=20000000 mtt=2000000 tu=125000 streamid=0xAABBCCDDEEFF000A ! \
avtpsink ifname=eth0.5 priority=3 processing-deadline=20000000 \)
Here, the videoconvert element converts the v4l2src output to match the video/x-raw caps filter specified, creating a video stream with the features expected by the H.264 encoder.
Running Multiple Talker Applications on the Same Host¶
This section describes how to run multiple talker applications on the same host. In addition to the streams described in Qdiscs Configuration, two more streams are included as follows:
- Stream C: SR class B, AVTP Audio Format, PCM 16-bit sample, 8 kHz, mono, 2 frames per AVTPDU;
- Stream D: SR class B, AVTP Audio Format, PCM 16-bit sample, 48 kHz, 6 channels, 12 frames per AVTPDU.
First, reconfigure the CBS qdisc for SR class B so its bandwidth reservation accommodates the two new streams in addition to Stream B. Configure the qdisc as shown.
sudo tc qdisc replace dev eth0 parent 6666:2 cbs \
idleslope 12608 sendslope -987392 hicredit 41 locredit -207 \
offload 1
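The new idleslope can again be sanity-checked against the stream parameters. The sketch below adds up the class B bandwidth for streams B, C, and D, under the same assumptions used earlier in this tutorial (24-byte AAF AVTPDU header, 42 bytes of per-frame on-wire overhead):

```shell
kbps() {  # args: sampling_rate channels frames_per_pdu -> stream kbit/s
    local pdu=$((24 + $3 * $2 * 2)) pps=$(($1 / $3))
    echo $(((pdu + 42) * 8 * pps / 1000))
}
b=$(kbps 48000 2 12); c=$(kbps 8000 1 2); d=$(kbps 48000 6 12)
echo $((b + c + d))   # 12608, the idleslope configured above
```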
The ALSA Way¶
To run multiple streams, you need one AAF device per stream in the ALSA configuration file. We could define them statically, as done in Examples using ALSA Framework, but instead we are going to leverage ALSA runtime configuration: rather than defining each AAF device ahead of time, you define it dynamically at the moment you specify the device.
Follow these steps to run multiple talker applications on one host using the ALSA Framework:
Step 1: Replace the pcm.aaf0 device in the /etc/asound.conf file with the device shown below:
pcm.aaf {
@args [ IFNAME ADDR PRIO STREAMID MTT UNCERTAINTY FRAMES TOLERANCE ]
@args.IFNAME {
type string
}
@args.ADDR {
type string
}
@args.PRIO {
type integer
}
@args.STREAMID {
type string
}
@args.MTT {
type integer
}
@args.UNCERTAINTY {
type integer
}
@args.FRAMES {
type integer
}
@args.TOLERANCE {
type integer
}
type aaf
ifname $IFNAME
addr $ADDR
prio $PRIO
streamid $STREAMID
mtt $MTT
time_uncertainty $UNCERTAINTY
frames_per_pdu $FRAMES
ptime_tolerance $TOLERANCE
}
Step 2: Run multiple instances of speaker-test (one for each AVB audio stream), varying the AAF device parameters and the PCM features according to the features from streams B, C, and D.
Stream B:
sudo speaker-test -p 12500 -F S16_BE -c 2 -r 48000 \
-D aaf:eth0.5,01:AA:AA:AA:AA:AA,2,AA:BB:CC:DD:EE:FF:000B,50000,1000,12,100
Stream C:
sudo speaker-test -p 12500 -F S16_BE -c 1 -r 8000 \
-D aaf:eth0.5,01:AA:AA:AA:AA:AA,2,AA:BB:CC:DD:EE:FF:000C,50000,1000,2,100
Stream D:
sudo speaker-test -p 12500 -F S16_BE -c 6 -r 48000 \
-D aaf:eth0.5,01:AA:AA:AA:AA:AA,2,AA:BB:CC:DD:EE:FF:000D,50000,1000,12,100
Note that you can check whether each stream is running properly on the listener by adapting the listener sample command shown in Examples using ALSA Framework (remember to account for the differences in stream ID, sampling rate, and number of channels). Also note that, by default, ALSA does not mix different audio streams, so you can listen to only one stream at a time. Use a mixer plugin if you want to mix them.
The GStreamer Way¶
For GStreamer, running several streams at the same time involves creating several pipelines, each one with the right stream parameters:
Stream B:
gst-launch-1.0 clockselect. \( clock-id=realtime \
audiotestsrc samplesperbuffer=12 is-live=true ! \
audio/x-raw,format=S16BE,channels=2,rate=48000 ! \
avtpaafpay mtt=50000000 tu=1000000 streamid=0xAABBCCDDEEFF000B processing-deadline=0 ! \
avtpsink ifname=eth0.5 address=01:AA:AA:AA:AA:AA priority=2 processing-deadline=0 \)
Stream C:
gst-launch-1.0 clockselect. \( clock-id=realtime \
audiotestsrc samplesperbuffer=2 is-live=true ! \
audio/x-raw,format=S16BE,channels=1,rate=8000 ! \
avtpaafpay mtt=50000000 tu=1000000 streamid=0xAABBCCDDEEFF000C processing-deadline=0 ! \
avtpsink ifname=eth0.5 address=01:AA:AA:AA:AA:AA priority=2 processing-deadline=0 \)
Stream D:
gst-launch-1.0 clockselect. \( clock-id=realtime \
audiotestsrc samplesperbuffer=12 is-live=true ! \
audio/x-raw,format=S16BE,channels=6,rate=48000 ! \
avtpaafpay mtt=50000000 tu=1000000 streamid=0xAABBCCDDEEFF000D processing-deadline=0 ! \
avtpsink ifname=eth0.5 address=01:AA:AA:AA:AA:AA priority=2 processing-deadline=0 \)
Note that you can check whether each stream is running properly on the listener by adapting the listener sample command shown in Examples using GStreamer Framework (remember to account for the differences in stream ID, sampling rate, and number of channels).
Troubleshooting¶
ALSA AVB talker eventually fails when I set ‘time_uncertainty’ to 125¶
According to Table 4 of IEEE 1722-2016, the Max Timing Uncertainty for class A streams is 125 us, so the ‘time_uncertainty’ option of the AAF device should be set to 125. However, speaker-test and aplay eventually fail to transmit when this value is set. This is a known issue and should be fixed soon; this tutorial will be updated once it is. As a workaround, set ‘time_uncertainty’ to a greater value. 500 has worked consistently in empirical tests.
Video doesn’t work when using x264enc as encoder and avdec_h264 as decoder¶
Using x264enc and avdec_h264 together was found to have issues on some systems. One workaround is to set bframes on the talker, as in ... ! x264enc bframes=0 ! ....
When streaming a higher resolution video, such as HD or Full HD, video doesn't work or has awful quality on the Listener¶
While audio samples are usually small, video frames can be much bigger, especially at high resolutions and frame rates. Since video commonly consumes more resources, make sure enough resources are available to it.
One resource that can limit large video streams is the socket buffer size. If the socket buffer fills up, packets may be dropped, compromising video playback. To avoid the transmit buffer filling up, increase its size:
sudo sysctl -w net.core.wmem_max=21299200
sudo sysctl -w net.core.wmem_default=21299200
These commands increase the socket buffer size to approximately 20 MB. How big this buffer needs to be depends on the video characteristics, but the suggested size should work well for Full HD videos.
On the listener side, a queue element after the depayloader is usually enough to buffer the received packets.
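If the larger buffers should survive a reboot, the same values can be persisted through a sysctl.d drop-in (the file name and location below are illustrative):

```shell
# Persist the enlarged socket buffer sizes across reboots (illustrative).
cat <<'EOF' | sudo tee /etc/sysctl.d/90-avb-buffers.conf
net.core.wmem_max = 21299200
net.core.wmem_default = 21299200
EOF
sudo sysctl --system   # reload all sysctl configuration files
```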
I get “Unknown qdisc etf” when trying to set up ETF Qdisc on Ubuntu Disco (19.04)¶
While Ubuntu Disco ships kernel 5.0, it comes with iproute2 4.18. Update the iproute2 package: you can install it from sources following these instructions, or install it from Ubuntu Eoan (19.10) following these other instructions.
ALSA talker application fails while sending AVTPDUs¶
Especially when system load is high, AVB applications can take too long to be scheduled in, and AVTPDU transmission deadlines may be missed. This can be addressed by using an RT Linux kernel and assigning scheduling priorities properly. On regular Linux, though, empirical tests have shown that changing the scheduling policy and priority of the AVB application process mitigates the issue.
The example below runs speaker-test with the FIFO scheduling policy and priority 98.
sudo chrt --fifo 98 speaker-test -p 12500 -F S16_BE -c 2 -r 48000 -D aaf0
[1] Prerolling is a technique GStreamer uses to ensure smooth playback. The first frame is processed by the pipeline but is not played by the sink until the pipeline state changes to "playing". While this allows a smooth transition when the user clicks the play button in a normal media player, for the AVTP plugin the first frame carries no timing information, since GStreamer does not know when it will be played. However, as the gst-launch-1.0 tool starts playing right after preroll, any disruption should be minimal.
Configuring VLAN Interfaces¶
Introduction¶
TSN streams are transmitted over Virtual LANs (VLANs). Bridges use the VLAN priority information (PCP) to identify Stream Reservation (SR) traffic classes, which are handled according to the Forwarding and Queuing Enhancements for Time-Sensitive Streams (FQTSS) mechanisms described in Chapter 34 of the IEEE 802.1Q standard. This tutorial covers how to set up a VLAN interface for a TSN application.
VLAN is supported in Linux* via virtual network interfaces. Any packet sent through the VLAN interface is automatically tagged with the VLAN information (such as ID and PCP), and any packet received on that interface belongs to the VLAN.
Configuring the Interface¶
The VLAN interface is created using the ip-link command from the iproute2 project, which is pre-installed by the majority of Linux distributions. The following example creates a VLAN interface for TSN usage, assuming eth0 is the physical interface.
sudo ip link add link eth0 name eth0.5 type vlan id 5 egress-qos-map 2:2 3:3
The egress-qos-map argument defines a mapping from the Linux internal packet priority (SO_PRIORITY) to the VLAN header PCP field for outgoing frames. The format is FROM:TO, with multiple mappings separated by spaces. This example maps SO_PRIORITY 2 to PCP 2 and SO_PRIORITY 3 to PCP 3. For further information about the command arguments, see the ip-link(8) manpage.
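After creating the interface, bring it up and, if desired, verify the configuration took effect; `ip -d link show` prints the VLAN details, including the egress-qos-map:

```shell
sudo ip link set eth0.5 up      # newly created VLAN links start down
ip -d link show eth0.5          # shows "vlan ... id 5" and the egress-qos-map
```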
VLAN configuration is required on Talkers, Listeners, and Bridges.
Configuring TSN Qdiscs¶
Introduction¶
The TSN control plane is implemented through the Linux* Traffic Control (TC) system. The transmission algorithms specified in the Forwarding and Queuing Enhancements for Time-Sensitive Streams (FQTSS) chapter of IEEE 802.1Q-2018 are supported via TC Queuing Disciplines (qdiscs).
Linux currently provides the following qdiscs relating to TSN:
- CBS qdisc: Implements the Credit-Based Shaper introduced by the IEEE 802.1Qav amendment.
- TAPRIO qdisc: Implements the Enhancements for Scheduled Traffic introduced by IEEE 802.1Qbv.
- ETF qdisc: While not an FQTSS feature, Linux also provides the Earliest TxTime First (ETF) qdisc which enables the LaunchTime feature present in some NICs, such as Intel(R) Ethernet Controller I210.
These qdiscs provide an offload option to leverage hardware support (when supported by the NIC driver) as well as a software implementation that can be used as a fallback.
Note: these qdiscs enable a transmission algorithm and should be configured on transmitting end-stations (Talker systems). They are not required on the receiving end-stations (Listener systems).
Although this tutorial was tested with an Intel(R) Ethernet Controller I210, it can be used as a guide to configure any Network Interface Card (NIC). This tutorial will enable you to configure Linux Qdiscs and enable hardware offloading.
Configuring CBS Qdisc¶
The CBS algorithm shapes the transmission according to the bandwidth that has been reserved on a given outbound queue. This feature was introduced to IEEE 802.1Q to enable Audio/Video Bridging (AVB) on top of Local Area Networks (LANs). AVB systems rely on CBS to determine the amount of buffering required at the receiving stations. For details on how the CBS algorithm works refer to Annex L from IEEE 802.1Q-2018 spec.
Follow these steps to configure the CBS Qdisc:
Step 1: CBS operates on a per-queue basis. To expose the hardware transmission queues, use the MQPRIO qdisc. MQPRIO does more than expose the hardware transmission queues: it also defines how Linux networking priorities map into traffic classes and how traffic classes map into hardware queues. The command-line example below shows how to configure the MQPRIO qdisc for the Intel(R) Ethernet Controller I210, which has 4 transmission queues.
sudo tc qdisc add dev eth0 parent root handle 6666 mqprio \
num_tc 3 \
map 2 2 1 0 2 2 2 2 2 2 2 2 2 2 2 2 \
queues 1@0 1@1 2@2 \
hw 0
After running this command:
- MQPRIO is installed as the root qdisc on the eth0 interface with handle ID 6666;
- 3 traffic classes are defined (0 to 2), where Linux priority 3 maps into traffic class 0, Linux priority 2 maps into traffic class 1, and all other Linux priorities map into traffic class 2;
- Packets belonging to traffic class 0 go into 1 queue at offset 0 (i.e. queue index 0, or Q0), packets from traffic class 1 go into 1 queue at offset 1 (i.e. queue index 1, or Q1), and packets from traffic class 2 go into 2 queues at offset 2 (i.e. queue indices 2 and 3, or Q2 and Q3);
- No hardware offload is enabled.
Note: By configuring MQPRIO, Stream Reservation (SR) Class A (Priority 3) is enqueued on Q0, the highest priority transmission queue in Intel(R) Ethernet Controller I210, while SR Class B (Priority 2) is enqueued on Q1, the second priority. All best-effort traffic goes into Q2 or Q3.
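The priority-to-queue plumbing above can be sanity-checked without any hardware: the 16 values of the mqprio map argument are indexed by Linux priority 0 through 15, so the traffic class a given priority lands in is a direct lookup. A minimal shell sketch (illustrative only):

```shell
# Decode the mqprio `map` used above: index = Linux priority, value = traffic class.
map=(2 2 1 0 2 2 2 2 2 2 2 2 2 2 2 2)
prio_to_tc() { echo "${map[$1]}"; }
prio_to_tc 3   # SR class A (priority 3) -> TC 0 -> Q0
prio_to_tc 2   # SR class B (priority 2) -> TC 1 -> Q1
prio_to_tc 0   # best effort -> TC 2 -> Q2/Q3
```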
Step 2: With MQPRIO configured, now configure the CBS qdisc. By default, MQPRIO installs the fq_codel qdisc on each hardware queue it exposes. This step replaces that qdisc with the CBS qdisc on Q0 and Q1.
CBS parameters come straight from the IEEE 802.1Q-2018 specification. They are the following:
- idleSlope: rate at which credits accumulate when the queue is not transmitting;
- sendSlope: rate at which credits are spent when the queue is transmitting;
- hiCredit: maximum amount of credit the queue is allowed to accumulate;
- loCredit: minimum amount of credit the queue is allowed to reach.
Calculating those parameters can be tricky and error-prone, so this tutorial provides the calc-cbs-params.py helper script, which takes TSN stream features as input (such as SR class, transport protocol, and payload size) and outputs the CBS parameters.
For example, consider 2 AVB streams with the following features:
- Stream A: SR class A, AVTP Compressed Video Format, H.264 profile High, 1920x1080, 30fps.
- Stream B: SR class B, AVTP Audio Format, PCM 16-bit sample, 48 kHz, stereo, 12 frames per AVTPDU.
To calculate the CBS parameters for that set of AVB streams, run the helper script as follows:
calc-cbs-params.py \
--stream class=a,transport=avtp-cvf-h264,rate=8000,psize=1470 \
--stream class=b,transport=avtp-aaf,rate=4000,psize=48
Which should produce the output:
1st priority queue: idleslope 98688 sendslope -901312 hicredit 153 locredit -1389
2nd priority queue: idleslope 3648 sendslope -996352 hicredit 12 locredit -113
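The numbers for the second priority queue can be cross-checked with back-of-envelope arithmetic. Assuming a 1 Gbps link and VLAN-tagged Ethernet, each Stream B AVTPDU carries 48 bytes of payload plus a 24-byte AVTP AAF header and roughly 42 bytes of Ethernet overhead (header, VLAN tag, CRC, preamble, and interframe gap) on the wire, 4000 times per second. The overhead figures are assumptions of this sketch, not output of the helper script:

```shell
# Back-of-envelope idleslope/sendslope for Stream B (illustrative).
payload=48; avtp_header=24; eth_overhead=42; pps=4000
frame_bytes=$((payload + avtp_header + eth_overhead))   # bytes on the wire
idleslope=$((frame_bytes * 8 * pps / 1000))             # kbit/s reserved
sendslope=$((idleslope - 1000000))                      # minus 1 Gbps port rate
echo "$idleslope $sendslope"                            # -> 3648 -996352
```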
With the CBS parameters, configuring the CBS qdisc is straightforward: Q0 is the first priority queue and Q1 the second, so the CBS qdiscs are installed as follows. Offload mode is enabled since the Intel(R) Ethernet Controller I210 supports that feature.
sudo tc qdisc replace dev eth0 parent 6666:1 cbs \
idleslope 98688 sendslope -901312 hicredit 153 locredit -1389 \
offload 1
sudo tc qdisc replace dev eth0 parent 6666:2 cbs \
idleslope 3648 sendslope -996352 hicredit 12 locredit -113 \
offload 1
For further information about the MQPRIO and CBS qdiscs, refer to the tc-mqprio(8) and tc-cbs(8) manpages.
Configuring the ETF Qdisc¶
Intel(R) Ethernet Controller I210 and other NICs provide the LaunchTime feature, which enables frames to be transmitted at specific times. In Linux, this hardware feature is enabled through the SO_TXTIME sockopt and the ETF qdisc. The SO_TXTIME socket option allows applications to configure the transmission time for each frame, while the ETF qdisc ensures frames coming from multiple sockets are sent to the hardware ordered by transmission time.
Like the CBS qdisc, the ETF qdisc operates on a per-queue basis so the MQPRIO configuration described in Configuring CBS Qdisc is required.
In the example below, the ETF qdisc is installed on Q0 and offload feature is enabled since the Intel(R) Ethernet Controller I210 driver supports the LaunchTime feature.
sudo tc qdisc add dev eth0 parent 6666:1 etf \
clockid CLOCK_TAI \
delta 500000 \
offload
The clockid parameter specifies which clock is used to set the transmission timestamps of frames; only CLOCK_TAI is supported. ETF requires the System clock to be in sync with the PTP Hardware Clock (PHC, refer to Synchronizing Time with Linux* PTP for more info). The delta parameter specifies how long before the transmission timestamp the ETF qdisc sends the frame to hardware. That value depends on multiple factors and can vary from system to system; this example uses 500 us.
The value for the delta parameter can be estimated using cyclictest, run under conditions similar to those of the application using ETF (same kind of expected system load, same kernel configuration, etc.). After running cyclictest for a reasonable amount of time (1 hour, for example), the maximum latency it detects is a good approximation of the minimum value that should be used as the ETF delta. For example, running cyclictest like this:
sudo cyclictest --mlockall --smp --priority=80 --interval=200 --distance=0
Might produce output such as:
T: 0 (11795) P:80 I:200 C: 726864 Min: 1 Act: 2 Avg: 1 Max: 6
T: 1 (11796) P:80 I:200 C: 726861 Min: 1 Act: 1 Avg: 1 Max: 10
T: 2 (11797) P:80 I:200 C: 726858 Min: 1 Act: 1 Avg: 1 Max: 78
T: 3 (11798) P:80 I:200 C: 726855 Min: 1 Act: 1 Avg: 1 Max: 49
T: 4 (11799) P:80 I:200 C: 726852 Min: 1 Act: 1 Avg: 1 Max: 43
T: 5 (11800) P:80 I:200 C: 726831 Min: 1 Act: 1 Avg: 1 Max: 10
T: 6 (11801) P:80 I:200 C: 726846 Min: 1 Act: 2 Avg: 1 Max: 27
T: 7 (11802) P:80 I:200 C: 726843 Min: 1 Act: 1 Avg: 1 Max: 7
T: 8 (11803) P:80 I:200 C: 726840 Min: 1 Act: 2 Avg: 1 Max: 94
T: 9 (11804) P:80 I:200 C: 726838 Min: 1 Act: 1 Avg: 1 Max: 12
T:10 (11805) P:80 I:200 C: 726835 Min: 1 Act: 1 Avg: 1 Max: 14
T:11 (11806) P:80 I:200 C: 726832 Min: 1 Act: 1 Avg: 1 Max: 18
This would indicate that the minimum usable value of delta is greater than 94 us. In real use cases a safety margin should be added, making the minimum acceptable delta around 100 us for this particular system and workload combination. Cyclictest is a good estimator because, in nanosleep mode, it uses the same mechanisms as the ETF qdisc to suspend execution until a given instant. For further information about the ETF qdisc, refer to the tc-etf(8) manpage.
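The worst-case value can be pulled out of saved cyclictest output with a small helper like the one below (illustrative only; the log file name is an assumption):

```shell
# Report the largest "Max" latency (us) found in cyclictest output; use it
# as a lower bound when picking the ETF delta.
max_latency() { awk '{ if ($NF + 0 > max) max = $NF + 0 } END { print max }' "$@"; }
# e.g.: sudo cyclictest --mlockall --smp --priority=80 --interval=200 \
#           --distance=0 > cyclictest.log    (stop with Ctrl+C after a while)
#       max_latency cyclictest.log
```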
Configuring TAPRIO Qdisc¶
IEEE 802.1Q-2018 introduces the Enhancements for Scheduled Traffic (EST) feature (formerly known as Qbv) which allows transmission from each queue to be scheduled relative to a known timescale. In summary, transmission gates are associated with each queue; the state of the transmission gate determines whether queued frames can be selected for transmission (“Open” or “Closed” states). Each Port is associated with a Gate Control List (GCL) which contains an ordered list of gate operations. For further details on how this feature works, refer to section 8.6.8.4 of IEEE 802.1Q-2018.
EST allows systems to be configured and participate in complex networks, similar to those envisioned by IEEE 802.1Qcc-2018. In this specification, a central entity with full knowledge of all the nodes, the traffic produced by those nodes, and their requirements, is able to produce a schedule for the whole network. This scenario is thought to enable primarily industrial use-cases, as many of the concepts are similar to other field buses.
The EST feature is supported in Linux via the TAPRIO qdisc. Similar to MQPRIO, the qdisc defines how Linux networking stack priorities map into traffic classes and how traffic classes map into hardware queues. Besides that, it also enables the user to configure the GCL for a given interface.
No NIC driver in the mainline kernel currently supports the EST feature, so TAPRIO hardware offload is not supported. However, EST can still be leveraged since TAPRIO provides a TxTime-assisted implementation (available since kernel 5.3) as well as a pure software implementation. In TxTime-assisted mode, the LaunchTime feature is used to schedule packet transmissions, emulating the EST feature; the NIC must support LaunchTime to use that mode, otherwise use the pure software implementation. This tutorial uses the Intel(R) Ethernet Controller I210, which supports LaunchTime, so TAPRIO is set up in TxTime-assisted mode.
For the sake of exercise, assume 3 traffic classes with traffic scheduled as follows:
- The first transmission window lasts 300 us, and only traffic class 0 is transmitted;
- The second transmission window also lasts 300 us, but both traffic classes 0 and 1 are transmitted;
- The third and last window lasts 400 us, and only traffic class 2 is transmitted;
- The schedule starts at absolute time 1,000,000,000 (in nanoseconds).
To achieve that, configure TAPRIO qdisc as shown below:
sudo tc qdisc replace dev eth0 parent root handle 100 taprio \
num_tc 3 \
map 2 2 1 0 2 2 2 2 2 2 2 2 2 2 2 2 \
queues 1@0 1@0 1@0 \
base-time 1000000000 \
sched-entry S 01 300000 \
sched-entry S 03 300000 \
sched-entry S 04 400000 \
flags 0x1 \
txtime-delay 500000 \
clockid CLOCK_TAI
The num_tc, map, and queues parameters are identical to MQPRIO's, so refer to Configuring CBS Qdisc for details. The way TAPRIO is configured here, only one hardware queue is enabled. The other parameters are described below; for further details on TAPRIO configuration, check the tc-taprio(8) manpage.
- base-time: specifies the start time of the schedule. If base-time is in the past, the schedule starts as soon as possible, aligned to the cycle specified by the GCL.
- sched-entry: each of these specifies one entry in the cycle, executed in order. Each entry has the format <CMD> <GATE MASK> <INTERVAL>. CMD is the command executed for the interval: "S" (SetGates) opens the gates of the traffic classes set in GATE MASK for the interval; "H" (Set-And-Hold-MAC) has the same meaning as SetGates, except that preemption is disabled during the interval; "R" (Set-And-Release-MAC) has the same meaning as SetGates, except that preemption is enabled during the interval. GATE MASK defines which traffic classes the command applies to, specified as a bit mask with bit 0 referring to traffic class 0 (TC 0) and bit N to traffic class N (TC N). INTERVAL defines the duration of the interval in nanoseconds.
- flags: controls which additional flags are passed to TAPRIO; in this case, TxTime-assisted mode is enabled.
- txtime-delay: used only in TxTime-assisted mode; controls the minimum amount of time in the future the transmission time of a packet is set to.
- clockid: defines the clock reference against which the timestamps are considered.
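As a sanity check, the GCL cycle time is simply the sum of the sched-entry intervals; for the schedule above that is 300 us + 300 us + 400 us, i.e. a 1 ms cycle:

```shell
# Sum of sched-entry intervals (ns) for the schedule above, printed in us.
echo $(( (300000 + 300000 + 400000) / 1000 ))   # cycle length in microseconds
```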
When TxTime-assisted mode is enabled, install the ETF qdisc on the hardware queue exposed by TAPRIO so that LaunchTime is enabled on the NIC and packets are ordered by transmission time before they are delivered to the controller. The ETF qdisc can be installed as follows:
sudo tc qdisc replace dev eth0 parent 100:1 etf \
clockid CLOCK_TAI \
delta 500000 \
offload \
skip_sock_check
Once both the TAPRIO and ETF qdiscs are properly set up, the traffic generated by all applications running on top of the eth0 interface is scheduled according to the configured GCL.
Synchronizing Time with Linux* PTP¶
Introduction¶
Time synchronization is one of the core functionalities of TSN, and it is specified by IEEE 802.1AS, also known as Generalized Precision Time Protocol (gPTP). gPTP is a profile of IEEE 1588, also known as Precision Time Protocol (PTP); it consists of simplifications and constraints to PTP that optimize it for time-sensitive applications. gPTP also introduces the Grand Master (GM) role, which can be played by any node in the gPTP domain (such as end-stations or bridges) and is determined by the Best Master Clock Algorithm (BMCA). The GM provides clocking information to all other nodes in the gPTP domain.
In the Linux* ecosystem, Linux PTP is the most popular implementation of PTP. It supports several profiles, including gPTP and the Avnu automotive profile. Linux PTP provides some tools to carry out time synchronization:
- ptp4l: daemon that synchronizes the PTP Hardware Clock (PHC) from the NIC;
- phc2sys: daemon that synchronizes the PHC and the System clock;
- pmc: utility tool to configure ptp4l at run-time.
Although this tutorial was tested with an Intel(R) Ethernet Controller I210 NIC, it is hardware-agnostic and applies to any 802.1AS-capable NIC with proper device driver support. By following this tutorial, the reader will be able to synchronize time using Linux PTP.
Installing Linux PTP¶
Several Linux distributions provide a package for Linux PTP. This section shows how to get the Linux PTP source, then build and install the tools.
git clone http://git.code.sf.net/p/linuxptp/code linuxptp
cd linuxptp/
make
sudo make install
By default, Linux PTP artifacts are installed into /usr/local/sbin/.
Synchronizing the PHC¶
The PHC synchronization step is mandatory for all TSN systems. It guarantees the PHC from the NIC is in sync with the GM clock from the gPTP domain. This is achieved by the ptp4l daemon.
To synchronize the PHC with the GM clock, run the command below. Make sure to replace eth0 with the interface name of the TSN-capable NIC in the system.
sudo ptp4l -i eth0 -f configs/gPTP.cfg --step_threshold=1 -m
The file gPTP.cfg (available in the configs folder of the Linux PTP source), specified by the -f option, contains the configuration options required to run ptp4l in gPTP mode, while the -i option specifies the network interface this instance of ptp4l controls. The --step_threshold option is set so ptp4l converges faster when "time jumps" occur (more on this later). The -m option enables log messages on standard output.
By default, ptp4l triggers the BMCA to determine whether the PHC can be elected GM. To force a particular role, check the masterOnly and slaveOnly configuration options of ptp4l. For further details, see the ptp4l(8) manpage.
Run this step on all end-points of the network.
Synchronizing the System Clock¶
The System clock synchronization step is mandatory only for those systems where applications rely on system clock to schedule traffic or present data (such as AVTP plugins from ALSA and GStreamer frameworks).
PHC time is expressed in the TAI timescale [1] while System clock time is expressed in UTC [2]. To ensure the System clocks (CLOCK_REALTIME and CLOCK_TAI) are properly set, configure the UTC-TAI offset in the system. This is done through a run-time option of ptp4l that is set via the pmc utility tool, as shown below.
sudo pmc -u -b 0 -t 1 "SET GRANDMASTER_SETTINGS_NP clockClass 248 \
clockAccuracy 0xfe offsetScaledLogVariance 0xffff \
currentUtcOffset 37 leap61 0 leap59 0 currentUtcOffsetValid 1 \
ptpTimescale 1 timeTraceable 1 frequencyTraceable 0 \
timeSource 0xa0"
Once UTC-TAI offset is properly set, synchronize the System clock with PHC.
sudo phc2sys -s eth0 -c CLOCK_REALTIME --step_threshold=1 \
--transportSpecific=1 -w -m
The -s option specifies the PHC of the eth0 interface as the master clock, while the -c option specifies the System clock as the slave clock. In the command above, the PHC disciplines the System clock, that is, the System clock is adjusted. The --transportSpecific option is required when running phc2sys in a gPTP domain. The --step_threshold option is set so phc2sys converges faster when "time jumps" occur. Finally, the -w option makes phc2sys wait until ptp4l is synchronized, and the -m option enables log messages on standard output. For more information about phc2sys configuration options, refer to the phc2sys(8) manpage.
Checking Clocks Synchronization¶
On ptp4l, the slave devices report the time offset calculated from the master. This information can be used to determine whether the systems are synchronized. Sample ptp4l output:
ptp4l[5374018.735]: rms 787 max 1208 freq -38601 +/- 1071 delay -14 +/- 0
ptp4l[5374019.735]: rms 1314 max 1380 freq -36204 +/- 346 delay -14 +/- 0
ptp4l[5374020.735]: rms 836 max 1106 freq -35734 +/- 31 delay -14 +/- 0
ptp4l[5374021.736]: rms 273 max 450 freq -35984 +/- 97 delay -14 +/- 0
ptp4l[5374022.736]: rms 50 max 82 freq -36271 +/- 64 delay -14 +/- 0
ptp4l[5374023.736]: rms 81 max 86 freq -36413 +/- 17 delay -14 +/- 0
The rms value reported by ptp4l once the slave has locked with the GM shows the root mean square of the time offset between the PHC and the GM clock. If ptp4l consistently reports rms lower than 100 ns, the PHC is synchronized.
Like ptp4l, phc2sys reports the time offset between the PHC and the System clock, which determines whether the clocks are synchronized.
phc2sys[5374168.545]: CLOCK_REALTIME phc offset -372582 s0 freq +246 delay 6649
phc2sys[5374169.545]: CLOCK_REALTIME phc offset -372832 s1 freq -4 delay 6673
phc2sys[5374170.547]: CLOCK_REALTIME phc offset 68 s2 freq +64 delay 6640
phc2sys[5374171.547]: CLOCK_REALTIME phc offset -20 s2 freq -3 delay 6687
phc2sys[5374172.547]: CLOCK_REALTIME phc offset 47 s2 freq +58 delay 6619
phc2sys[5374173.548]: CLOCK_REALTIME phc offset -40 s2 freq -15 delay 6680
The offset information reported by phc2sys shows the time offset between the PHC and the System clock. If phc2sys consistently reports offset lower than 100 ns, the System clock is synchronized.
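Checking this can be scripted. The sketch below reports the worst absolute offset among the lines where the phc2sys servo reached the locked state (s2); it is illustrative only, and the log file name is an assumption:

```shell
# Worst absolute offset (ns) among locked (s2) lines of a saved phc2sys log.
# Field 5 is the offset column in phc2sys output, as in the sample above.
worst_offset() { awk '/ s2 /{ o = $5 + 0; if (o < 0) o = -o; if (o > max) max = o } END { print max }' "$@"; }
# e.g.: worst_offset phc2sys.log
```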
To verify that the TAI offset set by the pmc command above has been correctly propagated to the kernel, read the offset value using the adjtimex() system call. For more information, see the adjtimex(2) manpage.
To automate this process, this tutorial includes the check_clocks utility tool, which verifies that the Linux PTP daemons (ptp4l and phc2sys) are properly configured and the clocks are synchronized. Compile and run it as follows:
gcc -o check_clocks check_clocks.c
sudo check_clocks -d eth0
The expected output from check_clocks is:
Clocks on this system are synchronized :)
Avnu Automotive Profile¶
Due to the static nature of automotive networks, AVnu has specified the Automotive Profile, which optimizes gPTP to improve startup time and reduce network load. The main difference from gPTP is that the BMCA is disabled, so each device is statically assigned as master or slave.
Linux PTP also supports the Automotive Profile. To run ptp4l in that mode, the command line is the same as presented in Synchronizing the PHC, but with a different configuration file. On systems playing the master role, use the automotive-master.cfg file; on all other systems, use the automotive-slave.cfg file. For illustration, see the following command-line examples:
sudo ptp4l -i eth0 -f configs/automotive-master.cfg --step_threshold=1 -m
sudo ptp4l -i eth0 -f configs/automotive-slave.cfg --step_threshold=1 -m
Both config files are available in the configs folder of the Linux PTP source code. Slave devices should also configure the servo_offset_threshold and servo_num_offset_values config options. More information about these options is available in the ptp4l(8) manpage.
Time Jumps¶
When a jump in time occurs in the gPTP domain, the System clock can take a considerable amount of time to converge to the new time. This happens because the clock is synchronized by adjusting its frequency. For example, if the PHC time jumps by 1 second, empirical tests have shown that the System clock can take up to 30 seconds to synchronize (considering an offset lower than 100 ns). The phc2sys daemon provides the --step_threshold=n option, which sets a threshold: if the time jump is greater than n seconds, time is adjusted by stepping the clock (that is, setting the current time directly) instead of changing the frequency.
However, stepping the clock has downsides as well. All timers set to expire between the current time and the new time expire as soon as the time is set, which can affect the real-time behavior of the system. So, use clock stepping carefully.
Troubleshooting¶
This section discusses some issues we have faced when synchronizing time with Linux PTP on different systems.
System time isn’t synchronized with PHC¶
If the PHC offset never goes below hundreds of nanoseconds, or if it suddenly spikes (as seen in the phc2sys log), leaving the System time out of sync, this section provides some hints on what to do.
Confirm NTP is not running¶
An NTP service may be running and changing the system clock. On systems with systemd, run:
timedatectl | grep NTP
If the output shows NTP service: active, disable it:
timedatectl set-ntp false
Verify that NTP has been disabled, then run the clock synchronization steps again and check that the clocks are in sync.
Check that NetworkManager is not interfering with the NIC¶
When NetworkManager is running, it may reset the NIC after the qdisc setup. In this situation, the PHC and the System clock may go out of sync. Do not allow NetworkManager to manage the TSN-capable NIC: add the following to the /etc/NetworkManager/NetworkManager.conf file:
[main]
plugins=keyfile
[keyfile]
unmanaged-devices=interface-name:eth0
Restart NetworkManager, run the clock synchronization steps again, and verify that the clocks are in sync.
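Alternatively, on systems where the nmcli tool is available, the interface can be marked as unmanaged at run-time instead; note that, unlike the config file above, this setting does not persist:

```shell
# Tell NetworkManager to leave the TSN-capable NIC alone (non-persistent).
nmcli device set eth0 managed no
```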
Ensure qdisc setup is done before clock synchronization¶
Qdisc setup resets the NIC, which can put ptp4l out of sync. If any qdisc setup needs to be done after the clocks are already in sync, repeat the clock synchronization steps and verify that the clocks are still in sync.
Confirm only one instance of ptp4l or phc2sys is running¶
Multiple instances of ptp4l or phc2sys adjusting a single clock source, or sending out Sync messages, can put the clocks out of sync. Ensure only a single instance of each daemon (per network interface) is running at a time. pgrep can be useful to check that only one instance of a particular process is running; see the pgrep(1) manpage for details.
Check power management settings¶
Several power management mechanisms set components (such as the CPU, the NIC, and the PCIe bus between them) into reduced-power modes when they are inactive for some time. However, in these modes it can take 100 µs or more until they are fully active again.
In such a situation, phc2sys might report timeouts or you might measure a large difference between System clock and PHC even though phc2sys reports only small offsets.
If your BIOS supports it, enable Intel(R) Time Coordinated Computing (TCC) mode, which optimizes or disables several power management mechanisms for real-time usage. Otherwise, at least disable PCIe ASPM, e.g. with:
echo performance > /sys/module/pcie_aspm/parameters/policy
References¶
[1] https://patchwork.ozlabs.org/cover/831420
[2] https://patchwork.ozlabs.org/cover/938991
[3] https://patchwork.ozlabs.org/cover/976513
[4] https://github.com/AVnu/OpenAvnu/pull/751
[5] https://patchwork.kernel.org/cover/10655287
[6] https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/merge_requests/361