GStreamer

Instal·lació / Installation
- From packages
-
- Mageia
-
urpmi ... gstreamer1.0-plugins-bad
gstreamer1.0-plugins-ugly gstreamer1.0-x264
- Mono repository: compilació des de codi
font / compilation
from source
- (28th September 2021) replaces gst-build and multiple
modules
- GStreamer
mono repository FAQ
- Dependències / Dependencies
|
GitLab |
dependencies
|
|
|
Mageia
|
Alma 8
|
Ubuntu (22.04) |
|
|
packages |
subproject |
package |
subproject |
package |
subproject |
general |
|
dnf install git ninja
sudo
pip install --upgrade meson
pipx
install meson
activate non-free and tainted for dnf |
|
dnf -y install epel-release
dnf config-manager --set-enabled powertools
# for libx264-devel, fdk-aac-devel:
dnf -y install --nogpgcheck
https://mirrors.rpmfusion.org/free/el/rpmfusion-free-release-8.noarch.rpm
https://mirrors.rpmfusion.org/nonfree/el/rpmfusion-nonfree-release-8.noarch.rpm
dnf install git python3 python3-pip
ninja-build
pip3 install meson |
|
apt install git python3 python3-pip
ninja-build |
|
gstreamer |
gstreamer |
dnf install flex bison lib64girepository-devel
|
|
dnf install gcc-c++ glib2-devel cmake flex
bison gtk3-devel libunwind-devel gmp-devel
gsl-devel gobject-introspection-devel
bash-completion libcap-devel elfutils-devel
1.24.8: gstreamer| Dependency
glib-2.0 found: NO. Found 2.56.4 but need: '>=
2.64.0'
1.22.12: gstreamer| Dependency
glib-2.0 found: NO. Found 2.56.4 but need: '>=
2.62.0'
|
|
apt install g++ libglib2.0-dev-bin cmake
flex bison libgtk-3-dev libunwind-14-dev
libgmp-dev libgsl-dev gobject-introspection
bash-completion libcap-dev libdw-dev |
|
gst-plugins-base |
gst-plugins-base |
dnf install lib64ogg-devel lib64opus-devel
lib64vorbis-devel |
|
dnf install libogg-devel opus-devel
libvorbis-devel libtheora-devel SDL2-devel
iso-codes-devel libgudev-devel mesa-libgbm-devel
alsa-lib-devel cdparanoia-devel
libjpeg-turbo-devel
# do not enable the raven repo, as it conflicts with
x264-devel and installs ffmpeg libraries
# dnf install https://pkgs.dyn.su/el8/base/x86_64/raven-release-1.0-3.el8.noarch.rpm
# dnf install graphene-devel |
|
apt install zlib1g-dev libopus-dev
libogg-dev libvorbis-dev libtheora-dev libsdl2-dev
iso-codes libgudev-1.0-dev libgbm-dev
libasound2-dev libcdparanoia-dev
libjpeg-turbo8-dev |
... |
gst-plugins-good |
gst-plugins-good |
dnf install lib64mp3lame-devel lib64dv-devel
lib64jpeg-devel lib64qt5platformsupport-devel
|
|
dnf install libsoup-devel nasm libnice-devel
libvpx-devel pulseaudio-libs-devel lame-devel
libdv-devel libcaca-devel libv4l-devel flac-devel
jack-audio-connection-kit-devel libshout-devel
optional: dnf install qt5-devel
|
|
apt install libsoup2.4-dev nasm libnice-dev
libvpx-dev libpulse-dev libmp3lame-dev libdv-dev
libcaca-dev
apt install libv4l-dev libflac-dev libshout-dev |
... |
gst-plugins-bad |
gst-plugins-bad
|
dnf install lib64openjpeg2-devel
lib64microdns-devel lib64fdk-aac-devel nasm
|
|
dnf -y install --nogpgcheck
https://mirrors.rpmfusion.org/free/el/rpmfusion-free-release-8.noarch.rpm
https://mirrors.rpmfusion.org/nonfree/el/rpmfusion-nonfree-release-8.noarch.rpm
dnf install fdk-aac-devel
libmicrodns-devel openjpeg2-devel libva-devel
libdrm-devel libass-devel opencv-devel
|
|
apt install libdrm-dev libass-dev
libopencv-dev libva-dev libfdk-aac-dev
libopenjp2-7-dev |
... |
gst-plugins-ugly |
gst-plugins-ugly |
dnf install lib64x264-devel
|
|
dnf install libmpeg2-devel x264-devel
to activate x264 and mpeg2dec, meson must be called
with -Dgpl=enabled |
|
apt install libmpeg2-4-dev |
|
gst-libav |
gst-libav |
only when not using ffmpeg compiled from source:
dnf install lib64ffmpeg-devel
|
|
if you must compile ffmpeg, do it before
compiling gstreamer, so that gstreamer does not
build ffmpeg as a subproject; you must then call
meson with:
PKG_CONFIG_PATH=/usr/local/lib/pkgconfig/
meson ... |
|
|
|
gst-rtsp-server |
|
|
|
|
|
|
|
gst-devtools |
|
dnf install lib64json-glib-devel
|
|
dnf install json-glib-devel |
|
apt install libjson-glib-dev |
|
gst-integration-testsuites |
|
|
|
|
|
|
|
gst-editing-services |
gst-editing-services
|
dnf install python3-gobject-devel
|
|
dnf install pygobject3-devel
python3-cairo-devel |
|
apt install python-gi-dev python3-cairo-dev |
|
gstreamer-vaapi |
|
|
|
(disabled) |
|
|
|
gst-omx |
|
|
|
(disabled) |
|
|
|
gstreamer-sharp |
|
|
|
(disabled) |
|
|
|
gst-python |
gst-python |
|
|
|
|
|
|
gst-examples |
|
dnf install lib64soup-devel
|
|
|
|
|
|
gst-plugins-rs |
|
|
|
|
|
|
|
- Passos / Steps
sudo sh -c 'echo "/usr/local/lib64" >
/etc/ld.so.conf.d/local64.conf'
sudo ldconfig
mkdir -p ~/src && cd ~/src
git clone
https://gitlab.freedesktop.org/gstreamer/gstreamer.git
cd gstreamer
- if you compiled ffmpeg:
PKG_CONFIG_PATH=/usr/local/lib/pkgconfig meson
-Dgpl=enabled -Dtests=disabled -Dexamples=disabled
-Dgst-python:libpython-dir=/usr/lib/
-DFFmpeg:nonfree=enabled [--wipe] build
- PKG_CONFIG_PATH=/usr/local/lib/pkgconfig:$PWD/build/meson-private
ninja -C build
- if you installed ffmpeg-devel:
meson
-Dgpl=enabled -Dtests=disabled -Dexamples=disabled
-Dgst-python:libpython-dir=/usr/lib/
-DFFmpeg:nonfree=enabled [--wipe] build
- PKG_CONFIG_PATH=$PWD/build/meson-private
ninja
-C build
sudo ninja -C build install
- sudo ldconfig
- to uninstall
sudo ninja -C build uninstall
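- to run the build without installing it (a sketch; assumes the checkout ships gst-env.py, as recent mono-repo checkouts do, or a meson new enough for devenv):
cd ~/src/gstreamer
# spawn a shell whose environment points at the build tree
./gst-env.py --builddir build
# or, with meson >= 0.58:
meson devenv -C build
gst-inspect-1.0 --version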
- Problems
- ...
- check
jq ''
./build/meson-info/intro-installed.json
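- to list only the install destinations (a sketch, assuming intro-installed.json is a flat JSON object that maps build paths to install paths):
jq -r 'to_entries[] | .value' ./build/meson-info/intro-installed.json | sort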
- ges-timeline.h:31:
syntax error, unexpected ';' in '# 31
- ...
- Compilació
des
de codi font / Compilation from source
-
- Mòduls i dependències / Modules
and dependencies
-
-
module |
desc
|
GitLab |
dependencies
|
|
|
|
Mageia |
... |
CentOS 7 |
Ubuntu
|
/data/doc/gstreamer/head
|
|
|
urpmi ...
|
|
yum install
...
|
apt-get
install ...
|
gstreamer
|
|
|
automake
autoconf gettext-devel libtool bison flex
gtk-doc yasm |
|
autoconf
gettext-devel libtool bison flex gtk-doc yasm
glib2-devel gcc-c++ freetype freetype-devel |
autoconf
bison flex ...
|
gst-plugins-base
|
|
|
lib64opus-devel
libvorbis-devel
libogg-devel libtheora-devel libxv-devel |
|
opus-devel
libvorbis-devel libogg-devel libtheora-devel
libxv-devel pango-devel wayland-devel |
libopus-dev
libvorbis-dev libogg-dev libtheora-dev
libxv-dev libpango1.0-dev
|
gst-plugins-good
|
|
|
libvpx-devel
libsoup-devel |
|
libvpx-devel
pulseaudio-libs-devel
libsoup-devel |
libvpx-dev |
gst-plugins-bad
|
|
gst-plugins-bad |
librtmp-devel
(from source: srt) |
|
librtmp-devel
(from source: srt)
|
librtmp-dev
(from source: srt)
|
gst-plugins-ugly
|
|
|
libx264-devel |
|
libx264-devel |
libx264-dev |
gst-python
|
|
|
python-gobject3-devel
|
|
python-devel
pygobject3-devel |
|
gst-libav
|
|
|
|
|
|
|
gstreamer-editing-services
|
|
gst-editing-services |
libxml2-devel |
|
libxml2-devel
|
|
- Mageia
-
urpmi autoconf gettext-devel
libtool bison flex gtk-doc yasm
- For plugins-base:
-
urpmi lib64opus-devel lib64vorbis-devel
lib64ogg-devel lib64theora-devel lib64xv-devel
libsoup-devel
- Raspbian
-
- CentOS
-
- CentOS 8
- CentOS 7
- gstreamer
- automake
>=1.14 (CentOS 7 provides version 1.13)
yum install -y autoconf
gettext-devel libtool bison flex gtk-doc
yasm glib2-devel gcc-c++ freetype
freetype-devel
- gst-plugins-base
-
yum install opus-devel
libvorbis-devel libogg-devel
libtheora-devel libxv-devel pango-devel
- gst-plugins-good
-
- gst-plugins-bad
-
yum -y install
http://li.nux.ro/download/nux/dextop/el7/x86_64/nux-dextop-release-0-5.el7.nux.noarch.rpm
yum -y install librtmp-devel
- gst-plugins-ugly
-
yum install libx264-devel
- Modules
-
module
|
|
/usr/lib64/
/usr/local/lib/
|
/usr/lib64/gstreamer-1.0/
/usr/local/lib/gstreamer-1.0/ |
/usr/lib64/girepository-1.0/
/usr/local/lib/girepository-1.0/
|
/usr/share/gir-1.0/
/usr/local/share/gir-1.0/
|
/usr/lib64/pkgconfig/
/usr/local/lib/pkgconfig/ |
gstreamer
|
packages
|
urpmi
lib64gstreamer1.0_0 |
urpmi
lib64gst-gir1.0
|
urpmi
lib64gstreamer1.0-devel |
files
|
- libgstreamer-1.0.so
- libgstbase-1.0.so
- libgstcheck-1.0.so
- libgstcontroller-1.0.so
- libgstnet-1.0.so
|
- libgstcoreelements.so
- libgstcoretracers.so
|
- Gst-1.0.typelib
- GstBase-1.0.typelib
- GstController-1.0.typelib
- GstNet-1.0.typelib
- GstCheck-1.0.typelib
|
- Gst-1.0.gir
- GstBase-1.0.gir
- GstController-1.0.gir
- GstNet-1.0.gir
- GstCheck-1.0.gir
|
- gstreamer-1.0.pc
- gstreamer-base-1.0.pc
- gstreamer-check-1.0.pc
- gstreamer-controller-1.0.pc
- gstreamer-net-1.0.pc
|
gst-plugins-base
|
packages
|
urpmi
lib64gstreamer-plugins-base1.0_0
|
urpmi
lib64gstreamer-plugins-base-gir1.0 |
|
|
files
|
|
|
|
|
|
gst-plugins-good |
packages |
urpmi
gstreamer1.0-plugins-good
|
|
|
|
files |
|
|
|
|
|
gst-plugins-bad |
packages |
urpmi
gstreamer1.0-plugins-bad |
urpmi
lib64gstreamer-plugins-bad-gir1.0 |
lib64gstreamer-plugins-bad1.0-devel
|
files |
|
|
- GstGL-1.0.typelib
- GstInsertBin-1.0.typelib
- GstMpegts-1.0.typelib
|
- GstGL-1.0.gir
- GstInsertBin-1.0.gir
- GstMpegts-1.0.gir
- GstPlayer-1.0.gir
|
- gstreamer-bad-audio-1.0.pc
- gstreamer-bad-base-1.0.pc
- gstreamer-bad-video-1.0.pc
- gstreamer-codeparsers-1.0.pc
- gstreamer-gl-1.0.pc
- gstreamer-insertbin-1.0.pc
- gstreamer-mpegts-1.0.pc
- gstreamer-player-1.0.pc
- gstreamer-plugins-bad-1.0.pc
|
gst-plugins-ugly |
packages |
urpmi
gstreamer1.0-plugins-ugly
|
|
|
|
files |
|
|
|
|
|
gst-python |
packages |
|
|
|
|
|
files |
|
|
|
|
|
gst-libav |
packages |
|
|
|
|
|
files |
|
|
|
|
|
gstreamer-editing-services
|
packages
|
|
|
|
urpmi
lib64ges1.0-devel |
files
|
|
|
|
|
- gst-editing-services-1.0.pc
|
- From git
-
- From tar files
-
- gst-build
(NOTE: replaced by the mono
repo; this was the default method
from version 1.18)
- Getting
started with GStreamer's gst-build (Collabora)
- meson
- list available options
meson configure
meson configure gst-python
- get current values for options
meson configure build
- set options
meson -D...=... build
- ...
- ninja
- build all
ninja -C build
- clean all
ninja -C build clean
- install
sudo ninja -C build install
- uninstall
sudo ninja -C build uninstall
- Dependencies
-
Project name |
CentOS 8 |
... |
|
dnf -y install ... |
|
(repos) |
dnf -y install epel-release
dnf config-manager --set-enabled
powertools
dnf -y install --nogpgcheck
https://mirrors.rpmfusion.org/free/el/rpmfusion-free-release-8.noarch.rpm
https://mirrors.rpmfusion.org/nonfree/el/rpmfusion-nonfree-release-8.noarch.rpm
|
|
All GStreamer modules |
dnf install git python3 gcc-c++
ninja-build |
|
|
pip3 install --user meson |
|
orc |
- |
|
gstreamer |
glib2-devel cmake flex bison
gtk3-devel |
|
gst-plugins-base |
zlib-devel opus-devel
libogg-devel libvorbis-devel
libtheora-devel qt5-devel
SDL2-devel iso-codes-devel (meson
-Dexamples=disabled
-Dtests=disabled) (qt5-devel
breaks compilation on AWS EC2 instance) |
|
- gl-headers |
- |
|
- graphene |
|
|
- mutest (?) |
|
|
gst-plugins-good |
libsoup-devel
libjpeg-turbo-devel nasm libnice-devel
libvpx-devel pulseaudio-libs-devel |
|
libnice |
openssl-devel |
|
gst-plugins-bad |
libdrm-devel libass-devel
librtmp-devel opencv-devel
gobject-introspection-devel
libmicrodns-devel libva-devel ... |
|
- libdrm |
|
|
- libavtp |
libbs2b-devel lcms2-devel
libcurl-devel libdca-devel faac-devel
faad2-devel fdk-aac-devel srt-devel
libsrtp-devel x265-devel libexif-devel |
|
- dssim |
|
|
- microdns |
|
|
- openh264 |
gtest-devel |
|
- gtest |
|
|
- openjp2 |
wxBase3-devel wxGTK3-devel
libtiff-devel |
|
gst-plugins-ugly |
libmpeg2-devel x264-devel |
|
gst-libav |
|
|
- FFmpeg |
libgcrypt-devel twolame-libs |
|
gst-rtsp-server |
libcgroup |
|
gst-devtools |
json-glib-devel |
|
- json-glib |
|
|
gst-integration-testsuites |
|
|
gst-editing-services |
|
|
pygobject |
pygobject3-devel |
|
- pycairo |
python3-cairo
platform-python-devel |
|
gst-python |
"Python dynamic library path could not
be determined"
Solution:
meson.build:
- ['gst-python', { 'option':
get_option('python')}],
+ ['gst-python', { 'option':
get_option('python'),
'libpython-dir':'/usr/lib/'}],
or:
meson
-Dgst-python:libpython-dir=/usr/lib/
build |
|
gst-examples |
|
|
- CentOS
- CentOS 8
# gstreamer
sudo dnf install ninja-build git
gcc-c++ libmount-devel flex bison
glib2-devel python3-cairo
libpng-devel libpciaccess-devel nasm
cairo-devel wxBase3-devel
pip3 install --user meson
- #
gst-plugins-base
sudo dnf install ...
- #
gst-plugins-base
...
- #
gst-plugins-good
sudo dnf install
libsoup-devel
- #
gst-plugins-bad
...
- #
gst-plugins-ugly
...
- #
gst-python
sudo dnf install pygobject3-devel
- CentOS 7
- Mageia
pip3 install --user meson
- Passos / Steps
cd ~/src
git clone
https://gitlab.freedesktop.org/gstreamer/gst-build
cd gst-build
meson build --buildtype=debug
ninja -C build
sudo ninja -C build install
- Checkout
another branch using worktrees
cd ~/src
git clone
https://gitlab.freedesktop.org/gstreamer/gst-build
cd gst-build
- ./gst-worktree.py
add gst-build-1.18 origin/1.18
- cd gst-build-1.18
- meson
build
- meson
-Dgst-python:libpython-dir=/usr/lib/
-Dtests=disabled -Dexamples=disabled build
- ninja
-C build
sudo
pip3 install meson
sudo
ninja -C build install
sudo sudo sh -c 'echo "/usr/local/lib64"
>/etc/ld.so.conf.d/local.conf'
sudo ldconfig
- uninstall
- cd
gst-build-1.18
- sudo
ninja -C build uninstall
- Optional
Installation
- Problemes / Problems
syntax error, unexpected ';' in ...
- gst-uninstalled
- https://gstreamer.freedesktop.org/src/
- if you want to be
able to access GStreamer by using PyGObject
(applications
made in Python):
-
- Dependencies
-
- Mageia
-
urpmi lib64girepository-devel
- CentOS
-
sudo yum install
gobject-introspection-devel
- Check config.log
-
HAVE_INTROSPECTION_TRUE=''
INTROSPECTION_COMPILER='/usr/bin/g-ir-compiler'
...
- Check that these files exist after compilation:
-
/usr/local/lib/girepository-1.0/Gst*.typelib
- In order to access Gst from your applications, you
will need to set an environment variable (to avoid the
error: ValueError: Namespace Gst not available):
-
export
GI_TYPELIB_PATH=/usr/local/lib/girepository-1.0
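- quick check from Python (a minimal sketch; assumes PyGObject is installed):
python3 -c "import gi; gi.require_version('Gst', '1.0'); from gi.repository import Gst; Gst.init(None); print(Gst.version_string())"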
gstreamer_install.sh
1.16.2
- gstreamer_install.sh
-
#!/bin/bash -e

# defaults
modules="gstreamer gst-plugins-base gst-plugins-good gst-plugins-bad gst-plugins-ugly gst-python gst-libav gstreamer-editing-services"

EXPECTED_ARGS=2
if (( $# != $EXPECTED_ARGS ))
then
    cat <<EOF
Usage: `basename $0` [get,install,uninstall] version

Modules that will be built:
${modules}

Dependencies:
- gstreamer: automake autoconf gettext-devel libtool bison flex gtk-doc yasm glib2-devel gcc-c++ freetype freetype-devel
- plugins-base: libogg-devel libtheora-devel libvorbis-devel opus-devel wayland-devel
- plugins-good: libvpx-devel>=1.4.0 pulseaudio-libs-devel libsoup-devel
- plugins-bad: http://li.nux.ro/download/nux/dextop/el7/x86_64/nux-dextop-release-0-5.el7.nux.noarch.rpm librtmp-devel libsrtp-devel>=2.1.0 libcurl-devel>=7.35.0 fdk-aac>=2.0.0 libxml-2.0-devel>=2.9.2 libnice-devel>=0.1.14
- plugins-ugly: libx264-devel
- gst-python: python-devel pygobject3-devel
- gstreamer-editing-services: libxml2-devel

Examples:
- `basename $0` get 1.16.2
- `basename $0` install 1.16.2
- `basename $0` uninstall 1.16.2
EOF
    exit 1
fi

# parameters
action=$1
version=$2

# update ldconfig
sudo sh -c 'echo "/usr/local/lib" > /etc/ld.so.conf.d/local.conf'
sudo ldconfig

mkdir -p gst-${version}
cd gst-${version}

for module in $modules
do
    src_name="${module}-${version}"
    echo "============================================================="
    echo "$src_name"
    echo "============================================================="
    tar_filename="${src_name}.tar.xz"
    case $action in
        get)
            url=https://gstreamer.freedesktop.org/src/${module}/${tar_filename}
            echo "  getting from: ${url}"
            curl -s -L -O ${url}
            ;;
        install)
            echo "  installing from: ${tar_filename}"
            tar xJf ${tar_filename}
            export XDG_DATA_DIRS="/usr/local/share/:/usr/share/"
            cd ${src_name}
            ./autogen.sh PKG_CONFIG_PATH=/usr/local/lib64/pkgconfig/:/usr/local/lib/pkgconfig/ --disable-gtk-doc
            make
            sudo make install
            sudo ldconfig
            cd ..
            ;;
        uninstall)
            echo "  uninstalling: ${src_name}"
            cd ${src_name}
            sudo make uninstall
            cd ..
            ;;
    esac
done
exit 0
- Compilation problems
- plugins-good
gstvp9dec.c: In function
'gst_vp9_dec_get_valid_format':
gstvp9dec.c:148:14: error: 'vpx_image_t' has
no member named 'cs'
if
(img->cs == VPX_CS_SRGB)
- Solució / Solution:
- install libvpx-devel >=1.4.0
- vpx_image.h
- CentOS from repositories:
libvpx-devel 1.3.0-5.el7_0
- plugins-bad:
- gstsrtp.c:
In function 'set_crypto_policy_cipher_auth':
gstsrtp.h:68:28: error: 'AES_128_GCM'
undeclared (first use in this function)
# define SRTP_AES_GCM_128
AES_128_GCM
- gstreamer will be installed in:
-
/usr/local/lib/gstreamer-1.0/
- Incidències / Issues (GitLab
Issues)
(others: gstreamer-devel
list, Bugs)
-
- Issues
- HLS seek
- 1.16.0
.../tmp-introspectAx4R5G/.libs/lt-GstMpegts-1.0:
error while loading shared libraries:
libgstvideo-1.0.so.0: cannot open shared object file:
No such file or directory
(gst-plugin-scanner:21672): GStreamer-WARNING
**: 15:56:30.108: Failed to load plugin
'/usr/local/lib/gstreamer-1.0/libgstpango.so':
/lib64/libcairo.so.2: undefined symbol:
FT_Get_Var_Design_Coordinates
gst-inspect-1.0 timeoverlay
- Solució / Solution
- Opus audio encoder not found
-
gst-plugins-bad/ext/opus/opusenc.c
was present until version
1.6; then it was moved to plugins-base
- Solució / Solution
-
- install system-wide Opus devel libraries and
reconfigure and make plugins-base
Ús / Usage
- Edició / Edit
- Info
- Eines / Tools
- Desenvolupament / Development
- Acceleració / Acceleration
-
- Raspberry Pi
-
- Tools
(CLI)
- How
do
I use the GStreamer command line interface ?
- Command
line
tools
- gst-validate
- ges-launch
-
- GES in Python
- GES
development
- Install
- Mageia
urpmi gstreamer1.0-nle
gstreamer1.0-editing-services
- Ubuntu
- sudo
apt-get install ges1.0-tools ...
- Help
- ges-videocrop-effect.sh
- Sintaxi / Syntax
-
ges-launch-1.0 --help-all
|
|
exemples |
python
equivalent |
Project
Options
|
-l, --load=<path>
-s, --save=<path>
- --save-only=<path>
-p
--sample-path
-r
--sample-path-recurse
|
|
|
Rendering
Options
|
-o --outputuri=<uri>
()
-f --format=<profile>
(specified serialized
encoding-profile;
if not specified: application/ogg:video/x-theora:audio/x-vorbis)
-e --encoding-profile=<profile-name>
(from a preset file)
--smart-rendering
|
|
--format=...
--smart-rendering
|
Playback
Options
|
-v --videosink=<videosink>
-a --audiosink=<audiosink>
-m --mute
|
|
|
Informative
Options
|
--inspect-action-type=
--list-transitions
|
|
|
Application
Options
|
--disable-mixing
-r
--repeat=
-t,
--track-types=<track-types>
- --video-caps
- --audio-caps
--set-scenario
|
--track-types="audio"
--video-caps="video/x-raw,width=640,height=272"
|
- --disable-mixing
tracks =
timeline.get_tracks()
for track in tracks:
track.set_mixing(False)
- --video-caps=...
|
+clip
|
<path|uri>
name=
track-types=
inpoint[i]= (referred to
clip)
duration[d]=
start[s]= (referred to
timeline)
layer[l]=
set-
alpha
posx
posy
width
height
volume
mute
|
+clip toto.mp4
- +clip
toto.mp4 track-types=audio
+clip toto.mp4 set-alpha 0.9
|
|
+test-clip
|
<test_clip_pattern>
name=
start=
duration=
inpoint=
layer=
|
|
|
+effect
|
<bin-description>
- effectv
agingtv
- dicetv
- edgetv
- optv
- quarktv
- radioactv
- revtv
- rippletv
- shagadelictv
- streaktv
- vertigotv
- warptv
- audiofx
videocrop
...
- ...
element-name=
inpoint=
name=
|
+effect "agingtv"
- +effect
"agingtv" set-dusts false
- +effect
"audiopanorama panorama=-1.00"
|
|
set-
|
|
|
|
+title
|
<text>
start=
duration[d]=
- inpoint=
- track-types=
- layer=
|
+title "abc" duration=2.0
|
|
+track |
<track_type>
restrictions=
|
|
|
+keyframes |
<property_name>
- binding-type=
- interpolation-mode=
- ...
|
|
|
set-... |
|
+clip /path/to/media +effect
"agingtv" set-dusts false
|
|
- Exemples / Examples
- help
- play a clip from second 4.0 to second 6.0:
ges-launch-1.0 +clip bbb_720p.mp4 i=4.0
d=2.0
- play a clip from second 4.0 to second 6.0 and then
another clip from the beginning:
ges-launch-1.0 +clip bbb_720p.mp4 i=4.0
d=2.0 +clip sintel_720p.mp4
- play a clip with a logo during 10 seconds on top
right:
ges-launch-1.0 +clip bbb_720p.mp4 +clip
logo.jpeg s=0 d=10 set-alpha 0.8 set-width 200
set-height 100 set-posx 1000 set-posy 20
- play a title during 2 seconds, then a video, then
a title during 3 seconds:
ges-launch-1.0 +title "begin"
duration=2.0 +clip toto.mp4 +title "end"
duration=3.0
- play a clip with a simultaneous title
- play to a window with the same dimensions as clip
(otherwise, a 1280x720 window is created, the
default for VideoTrack)
- see
issue #139
- if you know the dimensions of the source (e.g.
640x272)
ges-launch-1.0 --disable-mixing
--video-caps="video/x-raw,width=640,height=272"
+clip video.mp4
- GESSmartMixer
- save project to play a clip from second 4.0 to
second 6.0:
ges-launch-1.0 +clip bbb_720p.mp4 i=4.0
d=2.0 --save bbb.xges
- play according to project (can also be generated
by pitivi):
ges-launch-1.0 --load bbb.xges
- render to file according to project:
# output is forced to 720p
ges-launch-1.0 --load bbb.xges
-o toto.mp4
# output preserves input resolution
ges-launch-1.0 --load bbb.xges --smart-rendering
-o toto.mp4
- launch pitivi
with this project:
- render without reencoding (fast) (NOTE: not
working with HLS):
- get_smart_profile
ges-launch-1.0 --smart-rendering +clip
bbb_720p.mp4 i=4.0 d=2.0 -o bbb.mp4
- Problemes / Problems
- with m3u8 input
sys:1: Warning: g_source_remove:
assertion 'tag > 0' failed
- render to Ogg - Theora - Vorbis (default encoding profile):
ges-launch-1.0 +clip bbb_720p.mp4 i=4.0
d=2.0 -o bbb.ogg
ges-launch-1.0 +clip bbb_720p.mp4 i=4.0
d=2.0 -o bbb.ogg -f
"video/ogg:video/x-theora:audio/x-vorbis"
- render to WebM - VP8 - Vorbis:
ges-launch-1.0 +clip bbb_720p.mp4 i=4.0
d=2.0 -o bbb.webm -f
"video/webm:video/x-vp8:audio/x-vorbis"
- Problemes / Problems
ERROR from element qtdemux1:
Internal data stream error.
- render to MP4 - H.264 - MP3:
ges-launch-1.0 +clip bbb_720p.mp4 i=4.0
d=2.0 -o bbb.mp4 -f
"video/quicktime,variant=iso:video/x-h264:audio/mpeg,mpegversion=1,layer=3"
- render to MP4 - H.264 - AAC encoding
profile (Mageia: gstreamer1.0-plugins-bad...tainted):
ges-launch-1.0 +clip bbb_720p.mp4 i=4.0
d=2.0 -o bbb.mp4 -f
"video/quicktime,variant=iso:video/x-h264:audio/mpeg,mpegversion=4"
- Problemes / Problems
Invalid format specified:
video/quicktime,variant=iso:video/x-h264:audio/mpeg,mpegversion=4
- Solution
urpmi gstreamer1.0-x264
urpmi gstreamer1.0-fdkaac
ERROR from element qtdemux1:
Internal data stream error.
- specify output video dimensions, as a canvas
(clips will not be rescaled) (NOTE: this is needed
for gstreamer 1.18; if not specified, output will be
1280x720 (TBC))
ges-launch-1.0 ... -o
bbb.mp4 -f
"video/quicktime,variant=iso:video/x-h264,width=1920,height=1080:audio/mpeg,mpegversion=1,layer=3"
- specify input video dimensions (clips will be
rescaled and output dimensions will be those of the
first clip)
ges-launch-1.0 +clip clip_1.mp4 set-width
1280 set-height 720 +clip clip_2.mp4
set-width
1280 set-height 720 ...
- ...
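- python equivalent of the first example above (+clip with i=4.0 d=2.0), using GES through PyGObject; a minimal sketch, with bbb_720p.mp4 as a placeholder path:
#!/usr/bin/env python3
# sketch: play seconds 4.0-6.0 of a clip, like
# "ges-launch-1.0 +clip bbb_720p.mp4 i=4.0 d=2.0"
import gi
gi.require_version('Gst', '1.0')
gi.require_version('GES', '1.0')
from gi.repository import Gst, GES, GLib

Gst.init(None)
GES.init()

timeline = GES.Timeline.new_audio_video()
layer = timeline.append_layer()
asset = GES.UriClipAsset.request_sync(Gst.filename_to_uri('bbb_720p.mp4'))
# add_asset(asset, start, inpoint, duration, track_types)
layer.add_asset(asset, 0, 4 * Gst.SECOND, 2 * Gst.SECOND, GES.TrackType.UNKNOWN)
timeline.commit()

pipeline = GES.Pipeline.new()
pipeline.set_timeline(timeline)
pipeline.set_state(Gst.State.PLAYING)

loop = GLib.MainLoop()
bus = pipeline.get_bus()
bus.add_signal_watch()
bus.connect('message::eos', lambda *args: loop.quit())
bus.connect('message::error', lambda *args: loop.quit())
loop.run()
pipeline.set_state(Gst.State.NULL)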
- gst-transcoder
gst-transcoder-1.0 [OPTION?] <source uri>
<destination uri> <encoding
target name>[/<encoding
profile name>]
- Exemples / Examples
- Create a target file called device/mp4target.gep
gst-transcoder-1.0 <input_file>
<output_file>.mp4 mp4target/mp4
gst-transcoder-1.0 input.mp4 output.mkv
matroska
- Problemes / Problems
WARN: ... no such element factory
"uritranscodebin"!
- gst-discoverer
- gst-inspect
-
- list of all plug-ins
-
- available properties for a
specified plugin
-
gst-inspect-1.0 videoconvert
...
- ...
- gst-launch
(wp)
-
gst-launch-1.0 ... ! ... ! ...
gst-launch-1.0 ...
!
... ! ...demux
name=mydemux ...mux
name=mymux !
... ! ... mydemux. !
... ! mymux. mydemux.
! ... ! mymux.
-
input + demux
mux + output
audio
video
- Options
-
-e : end of stream on shutdown
-f, --no_fault : ...
--help : ...
-q, --quiet : ...
-m, --messages : ...
-o FILE, --output=FILE : ...
-t, --tags : ...
-T, --trace : ...
-v : verbose : ...
--gst-debug-level=2
- Verbose messages (
-v, -q ): sent to stdout (1)
/<element>:<name>/<element>:<name>.<subelement>:<name>:
<property>=<value>,
<property>=<value> ...
- gst_format_logs.sh
#!/bin/bash
input_path=$1
awk -F'\\\\ ' 'BEGIN {OFS="\n";ORS="\n\n"} $1 ~ /^\/GstPipeline/ {$1=$1;print $0}' ${input_path}
exit 0
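- usage sketch (capture the verbose output first, then reformat it):
gst-launch-1.0 -v videotestsrc num-buffers=50 ! autovideosink > /tmp/verbose.log
./gst_format_logs.sh /tmp/verbose.log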
- Debug:
sent
to stderr
(2)
-
- Debugging
tools
export GST_DEBUG=1 # default
export GST_DEBUG="*:2"
export GST_DEBUG=WARN,udpsrc:INFO,videodecoder:DEBUG
export GST_DEBUG=3,rtpjitterbuffer:3,rtpbasedepayload:6,videodecoder:4
- export
GST_DEBUG=3,hlsdemux:5 # to see the retrieved ts
files
-
number | name
1 | ERROR
2 | WARNING
3 | FIXME
4 | INFO
5 | DEBUG
6 | LOG
7 | TRACE
8 |
9 | MEMDUMP
-
- categories (modules)
export
GST_DEBUG=3,basesrc:4,basesink:4,...
-
grep GST_DEBUG_CATEGORY_INIT -R
gstreamer
|
examples (use grep to detect them) |
adapter |
|
audio |
|
audioencoder |
|
audioresample |
|
basesrc |
|
basesink |
- udpsink0
- udpsink1
- udpsink2
- udpsink3
|
basetransform |
- audioconvert0
- audioresample0
- capsfilter0
- capsfilter1
- videoconvert0
- videoscale0
|
bin |
|
bufferpool |
|
capsfilter |
- capsfilter0
- capsfilter1
- capsfilter2
|
default |
|
hlsdemux |
|
libav |
|
opusenc |
|
queue_dataflow |
|
query |
|
rtpbin |
|
rtpsession |
- rtpsession0:send_rtp_sink
- rtpsession1:send_rtp_sink
|
videodecoder |
|
videoconvert |
|
videopool |
|
videoscale |
|
- Gstreamer pipeline diagram
-
- How
to
generate a Gstreamer pipeline diagram (graph)
- Dependencies
-
- Utilització / Usage
-
mkdir /tmp/dots
export
GST_DEBUG_DUMP_DOT_DIR=/tmp/dots
gst-launch ...
cd /tmp/dots
- to generate svg:
-
dot
-Tsvg ...-gst-launch...PLAYING....dot
>pipeline.svg
gwenview pipeline.svg
- to generate png:
-
dot
-Tpng ...-gst-launch...PLAYING....dot
>pipeline.png
gwenview pipeline.png
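- from an application (a sketch; Gst.debug_bin_to_dot_file writes pipeline.dot into GST_DEBUG_DUMP_DOT_DIR, which must already be exported):
# dump_dot.py
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)
pipeline = Gst.parse_launch('videotestsrc num-buffers=1 ! autovideosink')
pipeline.set_state(Gst.State.PAUSED)
pipeline.get_state(Gst.CLOCK_TIME_NONE)   # wait for preroll
# writes $GST_DEBUG_DUMP_DOT_DIR/pipeline.dot
Gst.debug_bin_to_dot_file(pipeline, Gst.DebugGraphDetails.ALL, 'pipeline')
pipeline.set_state(Gst.State.NULL)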
- Sintaxi / Syntax
-
-
element:
ELEMENTTYPE [PROPERTY1
...]
|
elements can
be put into bins:
[BINTYPE.] ( [ PROPERTY1
...] PIPELINE-DESCRIPTION )
|
property:
NAME=*[(TYPE)]*VALUE
in lists and ranges: *[(TYPE)]*VALUE
- range:
[VALUE,VALUE]
- list:
{VALUE[,VALUE...]}
type:
-i int
-f float
-4 fourcc
-b bool boolean
-s str string
-fraction
|
link:
[[SRCELEMENT].[PAD1,...]] !
[[SINKELEMENT].[PAD1,...]]
[[SRCELEMENT].[PAD1,...]] ! CAPS !
[[SINKELEMENT].[PAD1,...]]
|
caps:
MIMETYPE [, PROPERTY [,
PROPERTY
...]]] [; CAPS [;
CAPS
...]]
|
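- example combining element properties, typed caps and links (a sketch):
gst-launch-1.0 videotestsrc pattern=ball ! video/x-raw,format=(string)I420,width=(int)320,height=(int)240,framerate=(fraction)25/1 ! videoconvert ! autovideosink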
-
input
|
demuxer
|
decoder
|
filter
|
encoder |
muxer
|
|
demux
|
buffer
|
parse
(to get specific packets from demuxer)
|
decode
|
filter
|
encode |
codec
parameters
(CAPS)
|
parse
(to prepare specific packets for muxer) |
mux
|
output
|
file
|
- filesrc
location=videofile
- uri=file:///path/to/test.ts
|
devices
|
- dvbsrc
modulation="QAM 64"
trans-mode=8k bandwidth=8
frequency=658000000
code-rate-lp=AUTO
code-rate-hp=2/3 guard=4
hierarchy=0 pids=111:112
|
network
|
|
unix
|
|
|
decodebin
name=decoder
(does not include sdpdemux)
|
- corresponding parameters (caps) can be
grouped, but must appear after filter
declaration (except videoconvert:
videoconvert ! <caps> !
videoconvert)
- if a value in caps must be used, the
corresponding filter must be present. If
the value in caps is not used, because
the input already has this value, the
filter is not needed
|
filter
|
description
|
mimetype,
comma
separated key=value
|
video |
|
|
video/x-raw |
videoscale |
|
width=360
height=288
pixel-aspect-ratio=1/1
|
videorate |
|
framerate=25/1 |
videoconvert |
|
format=BGRA |
?
|
|
interlace-mode=progressive
|
audio
|
|
|
audio/x-raw
|
audiorate
|
Drops/duplicates/adjusts
timestamps
on audio samples to make a perfect
stream
|
tolerance=...
...
|
audioresample
|
Resamples
audio
|
rate=48000
|
audioconvert |
Convert
audio
to different formats
|
format=S16LE
channels=2
layout=interleaved
|
|
- video
-
- audio
-
- lamemp3enc
- twolamemp2enc
- avenc_mp2
- avenc_aac
- faac
- opusenc
bitrate=128000
(b/s)
|
- video
-
- video/x-h264,
profile=baseline
|
|
- mpegtsmux
name=mux
- flvmux
streamable=true name=mux
- mp4mux
faststart=true
|
file
|
- filesink
location=music.ogg
|
devices
|
|
network
|
- udpsink
host=192.168.0.8 port=5004
sync=false
- rtmpsink
location=rtmp://rtmp_server:1935/app/stream
|
unix
|
- fdsink
- shmsink
socket-path=...
shm-size=...
wait-for-connection=...
|
|
demux
|
specific
stream
(source Element Pads)
|
|
|
matroskademux
|
- demux.audio_%u
- demux.video_%u
- demux.subtitle_%u
|
qtdemux
name=demux |
- demux.video_0
- demux.audio_0
- demux.audio_1
- ...
|
sdpdemux
name=demux
|
- demux.stream_0
- demux.stream_1
- ...
|
tsdemux
program-number=805
name=demux |
|
flvdemux
name=demux
|
|
...
|
|
|
|
|
- decodebin
- video
-
- omxmpeg2videodec
- omxh264dec
- audio
-
|
|
|
|
|
|
- video
-
- video/x-raw,
framerate=25/1, width=640,
height=360,
- if format is specified,
videoconvert must be specified after
it
-
- audio
-
- audio/x-raw,
format=...,
layout=...,
rate=...,
channels=...
|
|
-
sdpdemux
|
|
rtpbin
|
udpsrc
|
rtpsession
|
rtpssrcdemux
|
rtpjitterbuffer
|
rtpptdemux
|
- Sources and sinks
-
- GstBaseSrc
-
- do-timestamp
-
- it has to be specified at the source if we
want lipsync at the output (together with sync=true at the sink)
- GstBaseSink
-
- async
- sync
-
- the source must be specified with
do-timestamp=true
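- minimal sketch combining both: timestamp at the sources, synchronize at the sinks
gst-launch-1.0 videotestsrc is-live=true do-timestamp=true ! autovideosink sync=true audiotestsrc is-live=true do-timestamp=true ! autoaudiosink sync=true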
- fd
(file
descriptor) (see Snowmix
audio)
-
- fdsink
-
- fdsrc
-
- Exemple / Example
- send audio to file descriptor 3 and listen
to it. Verbose and warning logs are shown in
terminal
export GST_DEBUG=WARN
AUDIOCAPS="audio/x-raw,format=S16LE,layout=interleaved,rate=44100,channels=2"
gst-launch-1.0 -v audiotestsrc wave=5
! volume volume=0.1 ! ${AUDIOCAPS} !
fdsink fd=3 3>&1 1>&2 |
gst-launch-1.0 fdsrc ! ${AUDIOCAPS} !
autoaudiosink
- shm
(shared memory) (see Snowmix
video)
-
- shmsink
-
gst-launch-1.0 -v videotestsrc !
video/x-raw,framerate=25/1,width=640,height=480 ,format=BGRA
! videoconvert ! shmsink
socket-path=/tmp/feed1 shm-size=`echo
640*480*4*22 | bc` wait-for-connection=0
gst-launch-1.0 -v videotestsrc
is-live=true
do-timestamp=true !
video/x-raw,framerate=25/1,width=640,height=480,format=BGRA
! videoconvert ! clockoverlay
halignment=right valignment=top
shaded-background=true font-desc="Sans,
24" ! shmsink socket-path=/tmp/feed1
shm-size=`echo 640*480*4*22 | bc`
wait-for-connection=1 sync=true
gst-launch-1.0 filesrc
location=sintel_timecode_640x272_44100_stereo.mp4
! qtdemux name=demux demux. ! decodebin !
videoconvert ! videoscale ! videorate !
video/x-raw,width=320,height=136,format=BGRA
! shmsink socket-path=/tmp/feed1
shm-size=`echo 320*136*4*22 | bc -l`
wait-for-connection=1 sync=true
- shmsrc
-
- you must specify:
width, height, framerate, format (+videoconvert), and
they should match the values specified in shmsink
gst-launch-1.0 -v shmsrc
socket-path=/tmp/feed1 do-timestamp=true
is-live=true !
video/x-raw,width=640,height=480,framerate='25/1',format=BGRA
! videoconvert ! autovideosink
- to play at a framerate different from the
input, specify a different framerate and add
videorate
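- sketch (assuming the feed was published at 25/1 as in the shmsink examples above):
gst-launch-1.0 -v shmsrc socket-path=/tmp/feed1 do-timestamp=true is-live=true ! video/x-raw,width=640,height=480,framerate=25/1,format=BGRA ! videorate ! video/x-raw,framerate=50/1 ! videoconvert ! autovideosink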
du -h /dev/shm
ls -l /dev/shm
netstat -pena --unix
- shm (video) + fd (audio)
-
|
send
|
play
|
Notes
|
audio
|
feed_rate=44100
feed_channels=2
AUDIOCAPS="audio/x-raw,format=S16LE,layout=interleaved,rate=${feed_rate},channels=${feed_channels}"
|
|
gst-launch-1.0
\
audiotestsrc wave=5
! volume volume=0.1 ! ${AUDIOCAPS} !
fdsink fd=3 3>&1 1>&2 \
|
|
gst-launch-1.0 fdsrc ! ${AUDIOCAPS} !
autoaudiosink
|
|
video
|
ratefraction="25/1"
feed_width=320
feed_height=180
video_input_sar="1:1"
pixel_aspect_ratio=${video_input_sar/:/\/}
# replace : -> /
VIDEOCAPS="video/x-raw,framerate=${ratefraction},width=${feed_width},height=${feed_height},pixel-aspect-ratio=${pixel_aspect_ratio}"
FORMAT_SHM="BGRA"
VIDEOCAPS_WITH_FORMAT="${VIDEOCAPS},format=${FORMAT_SHM}"
shm_socket_path=/tmp/shm_toto
FORMAT_DISPLAY="I420"
VIDEOCONVERT_DISPLAY="video/x-raw,format=${FORMAT_DISPLAY}"
FORMAT_SHM="BGRA"
VIDEOCONVERT_SHM="video/x-raw,format=${FORMAT_SHM}"
|
|
gst-launch-1.0 \
videotestsrc !
${VIDEOCAPS_WITH_FORMAT} !
videoconvert ! shmsink
socket-path=${shm_socket_path}
shm-size=`echo
${feed_width}*${feed_height}*4*22
| bc` wait-for-connection=0 \
# send video with tee to monitor
gst-launch-1.0 \
filesrc
location=${video_path} ! qtdemux
name=demux \
demux.video_0 !
queue ! decodebin ! videoscale
method=5 ! queue ! videorate !
${VIDEOCAPS} ! tee name=tv \
tv. ! queue !
videoconvert ! ${VIDEOCONVERT_SHM}
! shmsink
socket-path=${shm_socket_path}
shm-size=`echo
${feed_width}*${feed_height}*4*22
| bc` wait-for-connection=0 \
tv. ! queue !
videoconvert !
${VIDEOCONVERT_DISPLAY} !
autovideosink sync=true \
3>&1
1>&2
|
gst-launch-1.0
shmsrc
socket-path=${shm_socket_path}
do-timestamp=true is-live=true !
${VIDEOCAPS_WITH_FORMAT} !
videoconvert ! autovideosink
|
|
audio +
video
|
gst-launch-1.0 \
videotestsrc !
${VIDEOCAPS_WITH_FORMAT} !
videoconvert ! shmsink
socket-path=${shm_socket_path}
shm-size=`echo
${feed_width}*${feed_height}*4*22
| bc` wait-for-connection=0 \
audiotestsrc
wave=5 ! volume volume=0.1 !
${AUDIOCAPS} ! fdsink fd=3
3>&1 1>&2 \
gst-launch-1.0 \
filesrc
location=${video_path} ! qtdemux
name=demux \
demux.video_0 !
queue ! decodebin ! videoscale
method=5 ! queue ! videorate !
videoconvert !
${VIDEOCAPS_WITH_FORMAT} ! \
shmsink
socket-path=${shm_socket_path}
shm-size=`echo
${feed_width}*${feed_height}*4*22
| bc` wait-for-connection=0 \
demux.audio_0 !
queue ! decodebin ! audioconvert !
audioresample ! queue ! audiorate
! $AUDIOCAPS ! tee name=ta \
ta. ! queue !
fdsink fd=3 \
3>&1
1>&2 \
# send audio and video with
tee
to monitor
gst-launch-1.0 \
filesrc
location=${video_path} ! qtdemux
name=demux \
demux.video_0 !
queue ! decodebin ! videoscale
method=5 ! queue ! videorate !
${VIDEOCAPS} ! tee name=tv \
tv. ! queue !
videoconvert ! ${VIDEOCONVERT_SHM}
! shmsink
socket-path=${shm_socket_path}
shm-size=`echo
${feed_width}*${feed_height}*4*22
| bc` wait-for-connection=0 \
tv. ! queue !
videoconvert !
${VIDEOCONVERT_DISPLAY} !
autovideosink sync=true \
demux.audio_0 !
queue ! decodebin ! audioconvert !
audioresample ! queue ! audiorate
! ${AUDIOCAPS} ! tee name=ta \
ta. ! queue !
autoaudiosink sync=true \
ta. ! queue !
fdsink fd=3 \
3>&1
1>&2 \
|
|
gst-launch-1.0 \
shmsrc
socket-path=${shm_socket_path}
do-timestamp=true is-live=true !
${VIDEOCAPS_WITH_FORMAT} !
videoconvert ! autovideosink \
fdsrc !
${AUDIOCAPS} ! autoaudiosink
|
- the created files
(/tmp/shm_toto, /dev/shm/shmpipe*)
are left behind when:
- it does not start
- it is stopped with the X of the
output window
- the created files are properly removed
when:
|
-
- Demux
-
- sdpdemux
-
- Play
from SDP file
- Includes
-
- Parameters
-
latency (ms)
-
INFO
rtpjitterbuffer
gstrtpjitterbuffer.c:3942:do_deadline_timeout:<rtpjitterbuffer0>
got
deadline timeout
- Codecs
-
- Bins
-
-
- decodebin3
- rtpbin
-
-
|
|
|
rtpbin
|
|
Examples
|
|
|
|
sink
|
src
|
|
|
transmission
|
video
|
rtppay
|
send_rtp_sink_0
|
send_rtp_src_0
|
udpsink
|
|
|
|
send_rtcp_src_0
|
udpsink
sync=false
async=false (SR) |
audio
|
rtppay
|
send_rtp_sink_1
|
send_rtp_src_1
|
udpsink
|
|
|
send_rtcp_src_1
|
udpsink
sync=false
async=false (SR) |
reception
|
video
|
udpsrc
|
recv_rtp_sink_0
|
recv_rtp_src_0_<ssrc_0>_<pt_0>
|
rtpdepay
|
|
udpsrc
(SR) |
recv_rtcp_sink_0
|
send_rtcp_src_0
|
udpsink
sync=false async=false (RR)
|
audio
|
udpsrc
|
recv_rtp_sink_1
|
recv_rtp_src_1_<ssrc_1>_<pt_1>
|
rtpdepay
|
udpsrc
(SR) |
recv_rtcp_sink_1
|
send_rtcp_src_1
|
udpsink
sync=false async=false (RR)
|
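- send-side sketch matching the transmission rows above (host, ports, encoders and payloaders are placeholders):
gst-launch-1.0 -v rtpbin name=bin \
    videotestsrc ! x264enc tune=zerolatency ! rtph264pay ! bin.send_rtp_sink_0 \
    bin.send_rtp_src_0 ! udpsink host=127.0.0.1 port=5004 \
    bin.send_rtcp_src_0 ! udpsink host=127.0.0.1 port=5005 sync=false async=false \
    audiotestsrc ! opusenc ! rtpopuspay ! bin.send_rtp_sink_1 \
    bin.send_rtp_src_1 ! udpsink host=127.0.0.1 port=5006 \
    bin.send_rtcp_src_1 ! udpsink host=127.0.0.1 port=5007 sync=false async=false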
- Play
-
- general: using
playbin
-
gst-launch-1.0 -v playbin
uri=...
- Problemes / Problems:
map_vaapi_memory: failed to make
image current
-
gst-launch-1.0
-v playbin
uri=...
|
gst-launch-1.0
uridecodebin
uri=... name=decoder
|
playsink
name=sink decoder.src_0 ! sink.video_sink
decoder.src_1 ! sink.audio_sink |
gst-launch-1.0 rtmpsrc location=${source}
! queue2 ! decodebin name=decoder |
playsink
name=sink decoder.src_0 ! sink.video_sink
decoder.src_1 ! sink.audio_sink |
- from testsrc
-
- videotestsrc
-
- source code
- subprojects/gst-plugins-base/gst/videotestsrc/
- gstvideotestsrc.h
- gstvideotestsrc.c
- videotestsrc.h
- videotestsrc.c
gst-launch-1.0 -v videotestsrc
!
video/x-raw,framerate=25/1,width=1280,height=720
! autovideosink
- using OpenGL (e.g. with Nvidia cards)
gst-launch-1.0 -v videotestsrc
!
video/x-raw,framerate=25/1,width=1280,height=720
! glimagesink
gst-launch-1.0 -v videotestsrc pattern=snow
!
video/x-raw,framerate=12/1,width=1280,height=720
! autovideosink
gst-launch-1.0 -v videotestsrc !
video/x-raw,framerate=12/1,width=1280,height=720,format=BGRA
! videoconvert ! autovideosink
- test audio/video sync
- test judder
gst-launch-1.0 -v videotestsrc
pattern=bar horizontal-speed=-40 !
video/x-raw,framerate=25/1,width=1280,height=720
! autovideosink
gst-launch-1.0 -v videotestsrc
pattern=bar horizontal-speed=-40 !
video/x-raw,framerate=60000/1001,width=1280,height=720
! autovideosink
- clock overlay
-
gst-launch-1.0 -v videotestsrc
is-live=true ! clockoverlay
halignment=right valignment=top
shaded-background=true
font-desc="Sans, 24" ! autovideosink
- time overlay
-
gst-launch-1.0 -v videotestsrc
is-live=true ! timecodestamper
! timeoverlay
shaded-background=true
'time-mode=time-code'
font-desc="Sans, 24" ! autovideosink
gst-launch-1.0 -v videotestsrc
is-live=true ! video/x-raw,
framerate=25/1, width=640, height=360
! timecodestamper ! timeoverlay
halignment=right valignment=bottom
text="Stream time:"
shaded-background=true
font-desc="Sans, 24" ! autovideosink
- clock + time overlay
-
gst-launch-1.0 -v videotestsrc
is-live=true ! video/x-raw,
framerate=25/1, width=640, height=360
! timecodestamper ! timeoverlay
halignment=left valignment=top
shaded-background=true
font-desc="Sans, 24" ! clockoverlay
halignment=right valignment=top
shaded-background=true
font-desc="Sans, 24" ! autovideosink
- write 10s (25*10=250 buffers) to an mp4
file:
gst-launch-1.0 -v videotestsrc
pattern=sync num-buffers=250 !
video/x-raw,framerate=25/1,width=1280,height=720
! x264enc ! video/x-h264,profile=high
! mux. mp4mux name=mux ! filesink
location=sync.mp4
- ...
pattern | | parameters | example
(common parameters: background-color, foreground-color, horizontal-speed)
0 | smpte | SMPTE 100% color bars |
1 | snow | |
2 | black | |
3 | white | |
4 | red | |
5 | green | |
6 | blue | |
7 | checkers-1 | |
8 | checkers-2 | |
9 | checkers-4 | |
10 | checkers-8 | |
11 | circular | |
12 | blink | |
13 | smpte75 | |
14 | zone-plate | BBC R&D Report 1978/23; specific: k0, kt, kt2, kx, kx2, kxt, ky, ky2, kyt, xoffset, yoffset | videotestsrc pattern=zone-plate kx2=20 ky2=20 kt=1
15 | gamut | |
16 | chrome-zone-plate | |
17 | solid-color | |
18 | ball | | videotestsrc pattern=ball animation-mode=wall-time motion=sweep
19 | smpte100 | |
20 | bar | |
21 | pinwheel | |
22 | spokes | |
23 | gradient | |
24 | colors | |
25 | smpte-rp-219 | |
- audiotestsrc
-
gst-launch-1.0 -v audiotestsrc !
autoaudiosink
- white noise (
wave=5 ),
stereo
(channels=2 )
-
gst-launch-1.0 -v audiotestsrc
is-live=true wave=5 !
'audio/x-raw,format=S16LE,layout=interleaved,rate=48000,channels=2'
! autoaudiosink
- white noise for 10 s (audio:
10s * 48000 samples/s * 1 buffer/1024 samples =
468.75 buffers)
gst-launch-1.0 -v audiotestsrc
is-live=true wave=5 num-buffers=469 !
'audio/x-raw,format=S16LE,layout=interleaved,rate=48000,channels=2'
! autoaudiosink
- wave
wave | | parameters
0 | sine |
1 | square |
2 | saw |
3 | triangle |
4 | silence |
5 | white-noise |
6 | pink-noise |
7 | sine-table |
8 | ticks | marker-tick-period, marker-tick-volume, sine-periods-per-tick, tick-interval (ns)
9 | gaussian-noise |
10 | red-noise |
11 | blue-noise |
12 | violet-noise |
- test video + audio
-
gst-launch-1.0 -v videotestsrc !
video/x-raw,framerate=25/1,width=1280,height=720
! autovideosink audiotestsrc !
autoaudiosink
- gst-launch-1.0
-v videotestsrc pattern=ball
animation-mode=wall-time motion=sweep !
autovideosink
audiotestsrc
wave=ticks ! autoaudiosink
- gst-launch-1.0
-v videotestsrc pattern=sync !
autovideosink
audiotestsrc
wave=ticks ! autoaudiosink
- write to a TS file:
gst-launch-1.0 -v videotestsrc
pattern=sync !
video/x-raw,framerate=25/1,width=1280,height=720
! x264enc ! video/x-h264,profile=high
! mux. audiotestsrc wave=ticks !
avenc_aac ! mux. mpegtsmux name=mux !
filesink location=sync.ts
- Problemes / Problems
- avenc_aac not found
- verify:
gst-inspect-1.0 |
grep avenc
- sudo
dnf install
gstreamer1.0-libav
- if you installed
ffmpeg aac after
gstreamer, reinstall
gstreamer1.0-libav
- ffmpeg
-codecs | grep aac
- write 10s to an mp4 file (video:
10s*25f/s=250 buffers; audio:
10s*48000samples/s * 1 buffer/1024 samples =
468.75buffers):
gst-launch-1.0 -v videotestsrc
pattern=sync num-buffers=250 !
video/x-raw,framerate=25/1,width=1280,height=720
! x264enc ! video/x-h264,profile=high
! mux. audiotestsrc wave=ticks
num-buffers=469 ! avenc_aac ! mux.
mp4mux name=mux faststart=true !
filesink location=sync.mp4
- from DVB device
-
- only video from DVB device (mpegvideoparse is
needed; otherwise teletext may be picked up
and an error is shown):
-
gst-launch-1.0 -v dvbsrc
modulation="QAM 64" trans-mode=8k
bandwidth=8 frequency=658000000
code-rate-lp=AUTO code-rate-hp=2/3 guard=4
hierarchy=0 ! tsdemux
program-number=805 ! queue !
mpegvideoparse ! decodebin ! autovideosink
- only audio:
-
gst-launch-1.0 -v dvbsrc
modulation="QAM 64" trans-mode=8k
bandwidth=8 frequency=658000000
code-rate-lp=AUTO code-rate-hp=2/3 guard=4
hierarchy=0 ! tsdemux
program-number=805 ! queue !
mpegaudioparse ! decodebin ! autoaudiosink
gst-launch-1.0 dvbsrc
modulation="QAM 64" trans-mode=8k
bandwidth=8 frequency=658000000
code-rate-lp=AUTO code-rate-hp=2/3 guard=4
hierarchy=0 ! tsdemux program-number=806
name=demux demux. ! queue ! mpegaudioparse
! decodebin ! omxanalogaudiosink
- audio and video from program 805 in DVB input:
-
gst-launch-1.0 -v dvbsrc
modulation="QAM
64" trans-mode=8k bandwidth=8
frequency=658000000 code-rate-lp=AUTO
code-rate-hp=2/3 guard=4 hierarchy=0 !
tsdemux program-number=805 name="demux"
\
demux. ! queue !
mpegaudioparse ! decodebin !
autoaudiosink \
demux. ! queue !
mpegvideoparse ! decodebin !
autovideosink
- from file
-
gst-launch-1.0 -v playbin
uri=file:/absolute/path/to/your_video_file
gst-launch-1.0 \
filesrc
location=${video_path} ! decodebin name=dec \
dec. ! queue !
autovideosink \
dec. ! queue !
autoaudiosink
- MP4
-
gst-launch-1.0 -v playbin uri=file:/absolute/path/to/toto.mp4
- audio and video from an MP4
file (queue is needed when playing audio and
video)
-
gst-launch-1.0 filesrc
location=sintel-1024-stereo.mp4 ! qtdemux
name=demux \
demux. ! queue
! decodebin ! autovideosink \
demux. ! queue
! decodebin ! autoaudiosink
- rescale an anamorphic video
gst-launch-1.0 filesrc
location=toto_720x576_anamorphic.mp4 !
qtdemux name=demux demux. ! queue !
decodebin ! videoscale !
video/x-raw,width=176,height=140,pixel-aspect-ratio=64/45
! autovideosink
- only audio from an MP4 file
-
gst-launch-1.0 filesrc
location=sintel-1024-stereo.mp4 !
qtdemux name=demux \
demux.audio_0 ! decodebin !
autoaudiosink
- TS
-
- program in a TS file (first program
found?):
-
gst-launch-1.0 -v playbin
uri=file:/tmp/toto.ts
- only video from TS file
(program_number=802)
-
gst-launch-1.0 -v filesrc
location=/disc/videos/tvc/tvc_794_20140821_1709.ts
! tsdemux program-number=802 !
mpegvideoparse ! decodebin !
autovideosink
- Problemes
mpegts_packetizer_pts_to_ts_internal:
No groups, can't calculate timestamp
mpegts_packetizer_pts_to_ts_internal:
Not enough information to calculate
proper timestamp
mpegts_packetizer_offset_to_ts: Not
enough observations to return a
duration estimate
- OGG
-
- audio and video from an OGG
file (queue is needed when playing audio and
video)
-
gst-launch-1.0 filesrc
location=sintel_trailer-720p.ogv ! oggdemux
name=demux \
demux. ! queue
! decodebin ! autovideosink \
demux. ! queue
! decodebin ! autoaudiosink
- WebM / Matroska
-
gst-launch-1.0 -v playbin
uri=file:/absolute/path/to/toto.webm
- audio and
video from a webm file:
-
gst-launch-1.0 -v \
filesrc location=/path/to/toto.webm !
matroskademux name=demux \
demux.video_0 ! queue ! decodebin !
autovideosink sync=true \
demux.audio_0 ! queue ! decodebin !
autoaudiosink sync=true
gst-launch-1.0 -v \
filesrc location=/path/to/toto.webm !
matroskademux name=demux \
demux. ! queue ! vp8dec !
autovideosink sync=true \
demux. ! queue ! opusdec !
autoaudiosink sync=true
- SDP
-
- See also: play
from RTP
- sdpdemux
- UDP
buffer
-
- value is taken from kernel
parameter net.core.rmem_default
GST_DEBUG=3,udpsrc:4
gst-launch filesrc
location=toto.sdp ! sdpdemux
name=demux ...
-
udpsrc
gstudpsrc.c:1428:gst_udpsrc_open:<udpsrc0>
have udp buffer of 212992 bytes
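- to raise the kernel receive buffer (a sketch; 4194304 bytes is just an example value), either system-wide with sysctl or per element via the udpsrc buffer-size property:
sudo sysctl -w net.core.rmem_max=4194304 net.core.rmem_default=4194304
gst-launch-1.0 udpsrc port=5004 buffer-size=4194304 ...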
- audio and video from SDP
file (RTP):
-
gst-launch-1.0 filesrc
location=toto.sdp do-timestamp=true
! sdpdemux
latency=1000
debug=true name=bin \
bin. !
"application/x-rtp,
media=(string)video" ! decodebin !
autovideosink sync=true
\
bin. !
"application/x-rtp,
media=(string)audio" ! decodebin !
autoaudiosink sync=true
# RTP + RTCP, using sdpdemux
gst-launch-1.0 \
filesrc
location=$sdp_path do-timestamp=true !
sdpdemux
latency=${sdpdemux_latency_ms}
name=bin \
bin. !
"application/x-rtp,
media=(string)video" ! queue !
decodebin ! videoconvert ! videoscale
! videorate ! autovideosink sync=true
\
bin. !
"application/x-rtp,
media=(string)audio" ! queue !
decodebin ! audioconvert !
audioresample ! audiorate !
autoaudiosink sync=true
- H.264 + AAC
-
gst-launch-1.0 filesrc
location=toto.sdp
do-timestamp=true ! sdpdemux
name=demux \
demux. ! queue ! rtph264depay !
decodebin ! autovideosink
sync=true \
demux. ! queue ! rtpmp4gdepay !
decodebin ! autoaudiosink
sync=true
- only video from SDP
file (RTP).
Make sure that video is the first stream
specified in sdp file:
-
gst-launch-1.0 filesrc
location=toto.sdp ! sdpdemux
name=demux \
demux.stream_0
! queue ! decodebin ! autovideosink
- only audio from SDP
file (RTP).
Make sure that audio is the second stream
specified in sdp file:
-
gst-launch-1.0 filesrc
location=toto.sdp ! sdpdemux
name=demux \
demux.stream_1
! queue ! decodebin ! autoaudiosink
- Problemes / Problems
-
- Pèrdua
de
paquets / Packet loss
-
- Manca
de fluïdesa / Lack of smoothness
-
WARN
rtpjitterbuffer
rtpjitterbuffer.c:573:calculate_skew:
delta - skew: 0:00:02.543695912
too big, reset skew
videodecoder
gstvideodecoder.c:2775:gst_video_decoder_prepare_finish_frame:<avdec_h264-0>
decreasing
timestamp
(0:00:00.008558259 <
0:00:00.058900688)
-
- Solució / Solution
-
- Increase latency parameter
(default: 200 ms) for
sdpdemux:
gst-launch-1.0 filesrc
location=toto.sdp
! sdpdemux latency=400
name=demux ...
audiobasesink
gstaudiobasesink.c:1787:gst_audio_base_sink_get_alignment:<autoaudiosink0-actual-sink-alsa>
Unexpected
discontinuity in audio timestamps
of +0:00:00.131360544, resyncing
-
- “delayed linking failed”
-
- lipsync
- from network
-
- RTP
-
- See also: Play
from SDP file
- Problemes / Problems
-
- udpsrc
-
videodecoder
gstvideodecoder.c:2775:gst_video_decoder_prepare_finish_frame:<avdec_h264-0>
decreasing
timestamp (0:00:45.183879818 <
0:00:45.188331700)
gst-launch-1.0 udpsrc
address=127.0.0.1 port=5004 !
"application/x-rtp, media=(string)video,
clock-rate=(int)90000,
encoding-name=(string)H264,
packetization-mode=(string)1,
profile-level-id=(string)64001f,
payload=(int)96" ! queue ! \
rtph264depay ! decodebin ! autovideosink
gst-launch-1.0 -v \
udpsrc
address=${address} port=${video_port}
do-timestamp=true ! queue !
"application/x-rtp, media=(string)video,
payload=(int)96, clock-rate=(int)90000,
encoding-name=(string)H264,
packetization-mode=(string)1,
sprop-parameter-sets=(string)\"Z2QAHqzZQKAv+XARAAADAAEAAAMAPA8WLZY\=\,aOvssiw\=\",
profile-level-id=(string)64001E"
! rtph264depay ! decodebin ! autovideosink
sync=true \
udpsrc
address=${address} port=${audio_port}
do-timestamp=true ! queue !
"application/x-rtp, media=(string)audio,
payload=(int)97, clock-rate=(int)44100,
encoding-name=(string)MPEG4-GENERIC,
encoding-params=(string)2,
profile-level-id=(string)1,
mode=(string)AAC-hbr,
sizelength=(string)13,
indexlength=(string)3,
indexdeltalength=(string)3,
config=(string)121056E500" ! rtpmp4gdepay
! decodebin ! autoaudiosink sync=true
-
- caps can be obtained e.g. by
executing:
gst-launch -v ...
sdpdemux ...
- common variables:
-
# ffmpeg -re -i easylife.mp4
-c:v copy -an -f rtp -cname toto
rtp://234.1.2.3:5004 -vn -c:audio copy
-f rtp -cname toto
rtp://234.1.2.3:5006 -sdp_file
/mnt/nfs/sdp/toto.sdp
address=234.1.2.3
video_rtp_port=5004
video_rtcp_port=$(( video_rtp_port + 1
))
VIDEOCAPS="application/x-rtp,
media=(string)video, payload=(int)96,
clock-rate=(int)90000,
encoding-name=(string)H264,
packetization-mode=(string)1,
sprop-parameter-sets=(string)\"Z2QAHqzZQKAv+XARAAADAAEAAAMAPA8WLZY\=\,aOvssiw\=\",
profile-level-id=(string)64001E"
audio_rtp_port=$(( video_rtp_port + 2
))
audio_rtcp_port=$(( video_rtp_port + 3
))
AUDIOCAPS="application/x-rtp,
media=(string)audio, payload=(int)97,
clock-rate=(int)44100,
encoding-name=(string)MPEG4-GENERIC,
encoding-params=(string)2,
profile-level-id=(string)1,
mode=(string)AAC-hbr,
sizelength=(string)13,
indexlength=(string)3,
indexdeltalength=(string)3,
config=(string)121056E500"
- RTP
-
# RTP
gst-launch-1.0 \
udpsrc
address=${address}
port=${video_rtp_port}
do-timestamp=true ! "$VIDEOCAPS" ! \
rtph264depay !
queue ! decodebin ! autovideosink
sync=true \
udpsrc
address=${address}
port=${audio_rtp_port}
do-timestamp=true ! "$AUDIOCAPS" ! \
rtpmp4gdepay !
queue ! decodebin ! autoaudiosink
sync=true
- RTP (using rtpbin)
-
# RTP, using rtpbin
gst-launch-1.0 \
rtpbin name=bin \
udpsrc
address=${address}
port=${video_rtp_port}
do-timestamp=true ! "$VIDEOCAPS" !
bin.recv_rtp_sink_0 \
bin. ! rtph264depay
! queue ! decodebin ! autovideosink
sync=true \
udpsrc
address=${address}
port=${audio_rtp_port}
do-timestamp=true ! "$AUDIOCAPS" !
bin.recv_rtp_sink_1 \
bin. ! rtpmp4gdepay
! queue ! decodebin ! autoaudiosink
sync=true
- RTP
with RTCP (using rtpbin)
-
# RTP + RTCP, using rtpbin
gst-launch-1.0 \
rtpbin name=bin \
udpsrc
address=${address}
port=${video_rtp_port}
do-timestamp=true ! "$VIDEOCAPS" !
bin.recv_rtp_sink_0 \
bin. ! "application/x-rtp,
media=(string)video" ! queue
! decodebin ! autovideosink sync=true
\
udpsrc
address=${address}
port=${video_rtcp_port} !
"application/x-rtcp" !
bin.recv_rtcp_sink_0 \
udpsrc
address=${address}
port=${audio_rtp_port}
do-timestamp=true ! "$AUDIOCAPS" !
bin.recv_rtp_sink_1 \
bin. ! "application/x-rtp,
media=(string)audio" ! queue
! decodebin ! autoaudiosink sync=true
\
udpsrc
address=${address}
port=${audio_rtcp_port} !
"application/x-rtcp" !
bin.recv_rtcp_sink_1
# RTP + RTCP, using rtpbin
gst-launch-1.0 \
rtpbin name=bin \
udpsrc
address=${address}
port=${video_rtp_port}
do-timestamp=true ! "$VIDEOCAPS" !
bin.recv_rtp_sink_0 \
bin. ! rtph264depay
! queue ! decodebin ! autovideosink
sync=true \
udpsrc
address=${address}
port=${video_rtcp_port} !
"application/x-rtcp" !
bin.recv_rtcp_sink_0 \
udpsrc
address=${address}
port=${audio_rtp_port}
do-timestamp=true ! "$AUDIOCAPS" !
bin.recv_rtp_sink_1 \
bin. ! rtpmp4gdepay
! queue ! decodebin ! autoaudiosink
sync=true \
udpsrc
address=${address}
port=${audio_rtcp_port} !
"application/x-rtcp" !
bin.recv_rtcp_sink_1
- HTTP
-
- check that souphttpsrc
is present
-
gst-inspect-1.0 | grep
souphttpsrc
- if not present, compile it
-
- Dependencies
-
- CentOS
-
sudo yum install
libsoup-devel
- Mageia
-
- gst-plugins-good
-
./configure
make
sudo make install
gst-launch-1.0 playbin
uri=http://download.blender.org/peach/bigbuckbunny_movies/BigBuckBunny_320x180.mp4
- RTMP
-
gst-launch-1.0 -v playbin
uri=rtmp://nginx-server/myapp/mystream
gst-launch-1.0 -v playbin
uri=${source}
- gst-launch-1.0
uridecodebin
uri=${source} name=decoder playsink
name=sink decoder.src_0 ! sink.video_sink
decoder.src_1 ! sink.audio_sink
gst-launch-1.0 rtmpsrc
location=${source} ! queue2
use-buffering=true ! decodebin
name=decoder playsink name=sink
decoder.src_0 ! sink.video_sink
decoder.src_1 ! sink.audio_sink
gst-launch-1.0 rtmpsrc
location=${source} do-timestamp=true
! queue2
use-buffering=true
! decodebin name=decoder playsink
name=sink decoder.src_0 ! sink.video_sink
decoder.src_1 ! sink.audio_sink
gst-launch-1.0 rtmpsrc
location=${source} do-timestamp=true
! queue2
use-buffering=true
! decodebin name=decoder
decoder.src_0 ! queue ! autovideosink
sync=true decoder.src_1 ! queue !
audioconvert ! autoaudiosink sync=true
gst-launch-1.0 -v \
rtmpsrc
location=${source} do-timestamp=true !
queue2 ! decodebin name=mydecoder \
mydecoder.
! autovideosink sync=true \
mydecoder.
! autoaudiosink sync=true
(not working?)
source=rtmp://nginx-server/myapp/mystream
gst-launch-1.0 \
rtmpsrc
location=${source} do-timestamp=true !
flvdemux name=demux \
demux.video ! queue !
decodebin ! autovideosink sync=true \
demux.audio ! queue !
decodebin ! autoaudiosink sync=true
- Mux to
-
- test to TS
-
gst-launch-1.0 -v videotestsrc !
video/x-raw,framerate=24/1,width=1280,height=720
! videoconvert ! x264enc !
video/x-h264,profile=high ! mpegtsmux !
filesink location=toto.ts
- test to MP4
-
- NOTE:
stream-format=(string)byte-stream
is not supported by MP4
gst-launch-1.0 -v videotestsrc !
video/x-raw,framerate=24/1,width=1280,height=720
! videoconvert ! x264enc !
video/x-h264,profile=high ! mp4mux ! filesink
location=toto.mp4
-
- Problem:
-
- moov atom not found
-
- Solution
-
- (?) mp4mux faststart=true
- Captura / Capture
- capture a single frame to a png file
gst-launch-1.0 -v videotestsrc num-buffers=1
! pngenc ! filesink location=toto.png
- capture
timestamped
frames (BeagleCam)
-
gst-launch v4l2src num-buffers=1 !
video/x-raw-yuv,width=640,height=480,framerate=30/1
! ffmpegcolorspace ! jpegenc ! filesink
location=$(date +"%s").jpg
- Transmux
-
- rtmp
-> rtmp
gst-launch-1.0 rtmpsrc
location=rtmp://server.org/my_app/first
do-timestamp=true ! queue2 ! flvdemux
name=demux \
flvmux name=mux \
demux.video ! queue ! mux.video \
demux.audio ! queue ! mux.audio \
mux.src ! queue ! rtmpsink
location=rtmp://server.org/my_app/second
- Note: if using nginx-rtmp-module as
destination, check
nginx.conf configuration
- only video (mp4 -> ts):
-
gst-launch-1.0 filesrc
location=sintel-1024-stereo.mp4 ! qtdemux
name=demux \
mpegtsmux
name=mux ! filesink location=toto.ts \
demux. ! queue !
h264parse ! mux.
gst-launch-1.0 filesrc
location=sintel-1024-stereo.mp4 ! qtdemux
name=demux \
mpegtsmux
name=mux ! filesink location=toto.ts \
demux. ! video/x-h264
! queue ! h264parse ! mux.
- video and audio (mp4 -> ts)
-
gst-launch-1.0 filesrc
location=sintel-1024-stereo.mp4 ! qtdemux
name=demux \
mpegtsmux
name=mux ! filesink location=toto2015.ts \
demux. ! queue !
h264parse ! mux. \
demux. ! queue !
aacparse ! mux.
- video and audio (mp4 -> flv)
-
gst-launch-1.0 -v filesrc
location=sintel-1024-stereo.mp4 ! qtdemux
name=demux \
flvmux
streamable=true name=mux ! filesink
location=toto.flv \
demux. ! queue !
h264parse ! mux. \
demux. ! queue !
aacparse ! mux.
- video and audio (flv ->mp4)
-
- video (H.264) (sdp -> ts)
-
gst-launch-1.0 -v filesrc
location=toto.sdp ! sdpdemux name=demux
\
mpegtsmux
name=mux ! filesink location=toto.ts
\
demux. ! queue !
rtph264depay ! mux.
- video (H.264) and audio (AAC) (sdp -> ts)
-
gst-launch-1.0 -v filesrc
location=toto.sdp ! sdpdemux name=demux
\
mpegtsmux
name=mux ! filesink location=toto.ts
\
demux. ! queue !
rtph264depay ! mux. \
demux. ! queue !
rtpmp4gdepay ! mux.
- video (sdp->mp4) (not working)
-
gst-launch-1.0 filesrc
location=/tmp/bbb.sdp ! sdpdemux name=demux \
mp4mux name=mux ! filesink
location=/tmp/toto.mp4 \
demux. ! rtph264depay ! h264parse ! mux.
-
- problem
-
ffplay toto.mp4
[mov,mp4,m4a,3gp,3g2,mj2 @
0x7f128c0008c0] moov atom not found
toto.mp4: Invalid data found when
processing input
- Solution?
-
- video and audio (sdp -> mp4) (not working)
-
gst-launch-1.0 -v filesrc
location=toto.sdp ! sdpdemux name=demux
\
mp4mux name=mux !
filesink location=toto.mp4 \
demux. ! queue !
h264parse ! mux. \
demux. ! queue !
aacparse ! mux.
- Transcode
-
- only video, to file:
-
gst-launch-1.0 -v dvbsrc modulation="QAM
64" trans-mode=8k bandwidth=8
frequency=658000000 code-rate-lp=AUTO
code-rate-hp=2/3 guard=4 hierarchy=0 ! tsdemux
program-number=805 name="demux" \
demux. ! queue ! mpegvideoparse ! decodebin !
videoconvert ! x264enc !
video/x-h264,stream-format=byte-stream,profile=high
! h264parse ! \
mpegtsmux ! filesink location=/tmp/toto.ts
- and resize, stream:
-
gst-launch-1.0 -v filesrc
location=tvc_20150604.ts ! tsdemux
program-number=806 ! \
mpegvideoparse ! decodebin ! videoscale !
video/x-raw, width=320, height=320 !
videoconvert ! omxh264enc ! h264parse ! \
mpegtsmux ! udpsink host=192.168.0.8 port=5004
sync=false
gst-launch-1.0 -v dvbsrc modulation="QAM
64" trans-mode=8k bandwidth=8
frequency=658000000 code-rate-lp=AUTO
code-rate-hp=2/3 guard=4 hierarchy=0 ! tsdemux
program-number=806 ! \
mpegvideoparse ! decodebin ! videoscale !
video/x-raw, width=320, height=320 !
videoconvert ! omxh264enc ! h264parse ! \
mpegtsmux ! udpsink host=192.168.0.8 port=5004
sync=false
gst-launch-1.0 -v dvbsrc modulation="QAM
64" trans-mode=8k bandwidth=8
frequency=658000000 code-rate-lp=AUTO
code-rate-hp=2/3 guard=4 hierarchy=0 ! tsdemux
program-number=805 name="demux" \
demux. ! queue ! mpegvideoparse ! decodebin !
videoscale !
'video/x-raw, width=360, height=288'
! videoconvert ! x264enc !
video/x-h264,stream-format=byte-stream,profile=main
! h264parse ! \
mpegtsmux ! udpsink host=192.168.0.8 port=5004
sync=false
- tee
- two windows from MP4 file:
gst-launch-1.0 \
filesrc
location=${video_path} ! qtdemux name=demux \
demux. ! queue ! decodebin
! tee
name=tv \
tv.
! queue ! autovideosink sync=true \
tv.
! queue ! autovideosink sync=true \
demux. ! queue ! decodebin
! tee
name=ta \
ta.
! queue ! autoaudiosink sync=true \
ta.
! queue ! autoaudiosink sync=true
- two windows from SDP file:
gst-launch-1.0 \
filesrc
location=${video_path} do-timestamp=true !
sdpdemux name=bin \
bin. ! "application/x-rtp,
media=(string)audio" ! queue ! decodebin ! tee name=ta
\
ta.
! queue ! autoaudiosink sync=true \
ta.
! queue ! autoaudiosink sync=true \
bin. ! "application/x-rtp,
media=(string)video" ! queue ! decodebin ! tee name=tv
\
tv.
! queue ! autovideosink sync=true \
tv.
! queue ! autovideosink sync=true
- Stream
-
- Introduction
to
network streaming using GStreamer
- TS
over UDP
-
- UDP unicast stream only audio:
-
gst-launch-1.0 dvbsrc
modulation="QAM 64" trans-mode=8k
bandwidth=8 frequency=658000000
code-rate-lp=AUTO code-rate-hp=2/3 guard=4
hierarchy=0 ! tsdemux program-number=806
name=demux \
demux. ! queue ! mpegaudioparse !
decodebin ! audioconvert ! lamemp3enc ! \
mpegtsmux ! udpsink host=192.168.0.8
port=5004 sync=false
gst-launch-1.0 dvbsrc
modulation="QAM 64" trans-mode=8k
bandwidth=8 frequency=658000000
code-rate-lp=AUTO code-rate-hp=2/3 guard=4
hierarchy=0 ! tsdemux program-number=806
name=demux \
demux. ! queue ! mpegaudioparse !
decodebin ! audioconvert ! lamemp3enc !
mux. \
mpegtsmux name=mux ! udpsink
host=192.168.0.8 port=5004 sync=false
gst-launch-1.0 dvbsrc
modulation="QAM 64" trans-mode=8k
bandwidth=8 frequency=658000000
code-rate-lp=AUTO code-rate-hp=2/3 guard=4
hierarchy=0 ! tsdemux program-number=806
name=demux \
mpegtsmux name=mux ! udpsink
host=192.168.0.8 port=5004 sync=false \
demux. ! queue ! mpegaudioparse !
decodebin ! audioconvert ! lamemp3enc !
mux.
- Encode to H.264, mux to TS, UDP stream:
-
gst-launch-1.0 -e videotestsrc !
video/x-raw, framerate=25/1, width=640,
height=360 ! x264enc ! \
mpegtsmux ! udpsink host=192.168.0.8
port=5004 sync=false
gst-launch-1.0 -v -e videotestsrc !
video/x-raw, framerate=25/1, width=640,
height=360 ! x264enc bitrate=512 !
video/x-h264,profile=high ! h264parse ! \
mpegtsmux ! udpsink host=192.168.0.8
port=5004 sync=false
gst-launch-1.0 -e mpegtsmux
name="muxer" ! udpsink host=192.168.0.8
port=5004 sync=false \
videotestsrc !
video/x-raw, framerate=25/1, width=640,
height=360 ! x264enc bitrate=512 !
video/x-h264,profile=high ! h264parse !
muxer.
- Mux video and audio, UDP stream:
-
gst-launch-1.0 -e mpegtsmux
name="muxer" ! udpsink host=192.168.0.8
port=5004 sync=false \
videotestsrc !
video/x-raw, framerate=25/1, width=640,
height=360 ! x264enc bitrate=512 !
video/x-h264,profile=high ! h264parse !
muxer. \
audiotestsrc
wave=5 ! audioconvert ! lamemp3enc !
muxer.
gst-launch-1.0 dvbsrc
modulation="QAM
64" trans-mode=8k bandwidth=8
frequency=658000000 code-rate-lp=AUTO
code-rate-hp=2/3 guard=4 hierarchy=0 !
tsdemux program-number=806 name=demux
\
mpegtsmux
name=mux ! udpsink host=192.168.0.8
port=5004 sync=false \
demux. ! queue !
mpegaudioparse ! decodebin !
audioconvert ! lamemp3enc ! mux. \
demux. ! queue !
mpegvideoparse ! decodebin ! videoscale
! video/x-raw, width=320, height=320 !
videoconvert ! omxh264enc
inline-header=true periodicty-idr=50 !
h264parse ! mux.
- Notes:
-
- the default for omxh264enc is inline-header=true
- periodicty-idr must be specified for VLC to be able to play the stream
- these parameters are only available in recent versions of GStreamer; you may need to compile version 1.2.
- to RTP
-
- RTP
and
RTSP support
- gstreamer/gst-plugins-good/gst/rtp/README
- Streaming
H.264
via RTP
- Play
from SDP file
- SDP
generation
-
- RTP
components
- GStreamer SDP
library
- Structure
- gst-plugins-base / gst-libs / gst
/ sdp
- webrtcbidirectional.c
- gstwebrtcbin.h
-
media / caps / SDP:
- media:  ret, media = GstSdp.SDPMedia.new()
- caps:   caps = pad.get_current_caps()  (all sets)
          ret = GstSdp.SDPMedia.set_media_from_caps(caps, media)
- SDP:    media.as_text()
resulting SDP lines (template  |  example):
- m=<caps.media> <media.port> <media.proto> <caps.payload>  |  m=video 5004 RTP/AVP 96
- i=<media.information>  |  i=my info
- c=<media.connection.nettype> <media.connection.addrtype> <media.connection.address>  |  c=IN IP4 1.2.3.4
  (set with media.add_connection("IN", "IP4", "1.2.3.4", 16, 1))
- a=rtpmap:<caps.payload> <caps.encoding-name>/<caps.clock-rate>  |  a=rtpmap:96 H264/90000
- a=fmtp:<caps.payload> ...  (taken from the caps)  |  a=fmtp:96 ...
- mp4rtp.py
class Streamer(object):
    def build_sdp(self, rtpbin, address, port, sdp_path, session_name=None):
        """
        rtpbin: Gst.RtpBin element
        address: destination address
        port: initial destination port
        sdp_path: created file with SDP
        session_name: (optional) session name (s=...)
        """
        ret, sdp_message = GstSdp.SDPMessage.new()
        sdp_message.set_version('0')
        ttl = 64
        number_addresses = 1
        sdp_message.set_connection("IN", "IP4", address, ttl, number_addresses)
        if session_name:
            sdp_message.set_session_name(session_name)
        pads = rtpbin.iterate_pads()
        while True:
            ret, pad = pads.next()
            if ret == Gst.IteratorResult.OK:
                # only source pads
                if pad.direction != Gst.PadDirection.SRC:
                    continue
                # only pads with name send_rtp_src...
                pad_name = pad.get_name()
                if not pad_name.startswith('send_rtp_src'):
                    continue
                print("pad: {0:s}".format(pad_name))
                caps = pad.get_current_caps()
                print("  {0:s}".format(caps.to_string()))
                ret, media = GstSdp.SDPMedia.new()
                if ret != GstSdp.SDPResult.OK:
                    print("Error")
                    return
                media.set_port_info(port, 1)
                port = port + 2
                media.proto = "RTP/AVP"
                ret = GstSdp.SDPMedia.set_media_from_caps(caps, media)
                if ret != GstSdp.SDPResult.OK:
                    print("Error")
                sdp_message.add_media(media)
            elif ret == Gst.IteratorResult.DONE:
                break
            elif ret == Gst.IteratorResult.ERROR:
                break
        print(sdp_message.as_text())
        f = open(sdp_path, 'w')
        f.write(sdp_message.as_text())

    def on_message(self, bus, msg, user_data):
        t = msg.type
        ...
        elif t == Gst.MessageType.STATE_CHANGED:
            old, new, pending = msg.parse_state_changed()
            if new == Gst.State.PAUSED:
                if msg.src.name == "bin":
                    print("RtpBin")
                    rtpbin = msg.src
                    self.build_sdp(rtpbin, self.dst_address,
                                   self.initial_port, "output.sdp", None)
        ...
- webrtcbin
- Generating
a
SDP file from a streaming pipeline
- caps
to SDP (README)
-
gstreamer
|
sdp file
|
command
|
gst-launch-1.0
-v
videotestsrc ! videoconvert !
x264enc ! rtph264pay
config-interval=10 pt=96
! udpsink host=234.1.2.3
port=5004
|
v=0
m=<media> <port> RTP/AVP
<payload>
c=IN IP4 <host>
a=rtpmap:<payload>
<encoding-name>/<clock-rate>
a=fmtp:96
packetization-mode=<packetization-mode>;
sprop-parameter-sets=<sprop-parameter-sets>;
profile-level-id=<profile-level-id>
|
caps
(given by -v)
|
application/x-rtp,
media=(string)video,
clock-rate=(int)90000,
encoding-name=(string)H264,
packetization-mode=(string)1,
profile-level-id=(string)f4000d,
sprop-parameter-sets=(string)"Z/QADZGbKCg/YC1BgEFQAAADABAAAAMDyPFCmWA\=\,aOvsRIRA",
payload=(int)96,
ssrc=(uint)3934427744,
timestamp-offset=(uint)2187273080,
seqnum-offset=(uint)1602,
a-framerate=(string)30
|
|
media=(string)audio,
... |
|
-
|
send
|
receive
|
|
from file
|
|
common
code
|
#!/bin/bash
input_path=$1
sdp_path=/tmp/toto.sdp
dst_address=234.1.2.3
video_rtp_port=5004
video_rtcp_port=$(( video_rtp_port + 1
))
video_media_subtype="H264"
rtp_video_payload_type=96
audio_rtp_port=$(( video_rtp_port + 2
))
audio_rtcp_port=$(( video_rtp_port + 3
))
audio_media_subtype="aac"
rtp_audio_payload_type=$((
rtp_video_payload_type + 1 ))
audio_rate=48000
audio_channels=2
|
#!/bin/bash
sdp_path=$1
address=234.1.2.3
video_rtp_port=5004
video_rtcp_port=$(( video_rtp_port + 1
))
rtp_video_payload_type=96
VIDEOCAPS="application/x-rtp,
media=(string)video, payload=(int)96,
clock-rate=(int)90000,
encoding-name=(string)H264,
packetization-mode=(string)1,
sprop-parameter-sets=(string)\"Z2QAHqzZQKAv+XARAAADAAEAAAMAPA8WLZY\=\,aOvssiw\=\",
profile-level-id=(string)64001E"
audio_rtp_port=$(( video_rtp_port + 2
))
audio_rtcp_port=$(( video_rtp_port + 3
))
rtp_audio_payload_type=$((
rtp_video_payload_type + 1 ))
AUDIOCAPS="application/x-rtp,
media=(string)audio, payload=(int)97,
clock-rate=(int)44100,
encoding-name=(string)MPEG4-GENERIC,
encoding-params=(string)2,
profile-level-id=(string)1,
mode=(string)AAC-hbr,
sizelength=(string)13,
indexlength=(string)3,
indexdeltalength=(string)3,
config=(string)121056E500"
|
SDP
|
function
create_sdp
{
local sdp_path=$1
# sdp
cat >$sdp_path
<<EOF
v=0
c=IN IP4 ${dst_address}
m=video ${video_rtp_port} RTP/AVP
${rtp_video_payload_type}
a=rtpmap:${rtp_video_payload_type}
${video_media_subtype}/90000
m=audio ${audio_rtp_port} RTP/AVP
${rtp_audio_payload_type}
a=rtpmap:${rtp_audio_payload_type}
${audio_media_subtype}/${audio_rate}/${audio_channels}
EOF
if (( audio_channels == 2 )) && [[ ${audio_media_subtype} == "opus" ]]
then
echo
"a=fmtp:${rtp_audio_payload_type}
sprop-stereo=1" >>${sdp_path}
fi
}
|
|
RTP
|
gst-launch-1.0
\
filesrc
location=${input_path} ! qtdemux
name=demux \
demux.video_0 !
queue ! rtph264pay
pt=$rtp_video_payload_type ! \
udpsink
host=${dst_address}
port=${video_rtp_port} sync=true \
demux.audio_0 !
queue ! rtpmp4gpay
pt=$rtp_audio_payload_type ! \
udpsink
host=${dst_address}
port=${audio_rtp_port} sync=true |
gst-launch-1.0
\
udpsrc
address=${address}
port=${video_rtp_port}
do-timestamp=true ! "$VIDEOCAPS" ! \
"application/x-rtp,
media=(string)video" !
queue ! decodebin ! autovideosink
sync=true \
udpsrc
address=${address}
port=${audio_rtp_port}
do-timestamp=true ! "$AUDIOCAPS" ! \
"application/x-rtp,
media=(string)audio" !
queue ! decodebin ! autoaudiosink
sync=true |
RTP using
rtpbin
|
gst-launch-1.0
\
rtpbin name=bin \
filesrc
location=${input_path} ! qtdemux
name=demux \
demux.video_0 !
queue ! rtph264pay
pt=$rtp_video_payload_type !
bin.send_rtp_sink_0 \
bin.send_rtp_src_0
! udpsink host=${dst_address}
port=${video_rtp_port} sync=true \
demux.audio_0 !
queue ! rtpmp4gpay
pt=$rtp_audio_payload_type !
bin.send_rtp_sink_1\
bin.send_rtp_src_1
! udpsink host=${dst_address}
port=${audio_rtp_port} sync=true |
gst-launch-1.0
\
rtpbin name=bin \
udpsrc
address=${address}
port=${video_rtp_port}
do-timestamp=true ! "$VIDEOCAPS" !
bin.recv_rtp_sink_0 \
bin. ! "application/x-rtp,
media=(string)video" !
queue ! decodebin ! autovideosink
sync=true \
udpsrc
address=${address}
port=${audio_rtp_port}
do-timestamp=true ! "$AUDIOCAPS" !
bin.recv_rtp_sink_1 \
bin. ! "application/x-rtp,
media=(string)audio" !
queue ! decodebin ! autoaudiosink
sync=true |
RTP+RTCP using rtpbin
|
gst-launch-1.0
\
rtpbin name=bin \
filesrc
location=${input_path} ! qtdemux
name=demux \
demux.video_0 !
queue ! rtph264pay
pt=$rtp_video_payload_type !
bin.send_rtp_sink_0 \
bin.send_rtp_src_0
! udpsink host=${dst_address}
port=${video_rtp_port} sync=true \
bin.send_rtcp_src_0
! udpsink host=${dst_address}
port=${video_rtcp_port} sync=false
async=false \
demux.audio_0 !
queue ! rtpmp4gpay
pt=$rtp_audio_payload_type !
bin.send_rtp_sink_1\
bin.send_rtp_src_1
! udpsink host=${dst_address}
port=${audio_rtp_port} sync=true \
bin.send_rtcp_src_1
! udpsink host=${dst_address}
port=${audio_rtcp_port} sync=false
async=false \ |
gst-launch-1.0
\
rtpbin name=bin \
udpsrc
address=${address}
port=${video_rtp_port}
do-timestamp=true ! "$VIDEOCAPS" !
bin.recv_rtp_sink_0 \
bin. ! "application/x-rtp,
media=(string)video" !
queue ! decodebin ! autovideosink
sync=true \
udpsrc
address=${address}
port=${video_rtcp_port} !
"application/x-rtcp" !
bin.recv_rtcp_sink_0 \
udpsrc
address=${address}
port=${audio_rtp_port}
do-timestamp=true ! "$AUDIOCAPS" !
bin.recv_rtp_sink_1 \
bin. ! "application/x-rtp,
media=(string)audio" !
queue ! decodebin ! autoaudiosink
sync=true \
udpsrc
address=${address}
port=${audio_rtcp_port} !
"application/x-rtcp" !
bin.recv_rtcp_sink_1 |
RTP+RTCP
using sdpdemux
|
|
gst-launch-1.0
-v
\
filesrc
location=$sdp_path do-timestamp=true !
sdpdemux
latency=${sdpdemux_latency_ms}
name=bin \
bin. !
"application/x-rtp,
media=(string)video" ! queue !
decodebin ! videoconvert ! videoscale
! queue ! videorate ! autovideosink
sync=true \
bin. !
"application/x-rtp,
media=(string)audio" ! queue !
decodebin ! audioconvert !
audioresample ! queue ! audiorate !
autoaudiosink sync=true
|
- encode webcam, UDP stream:
-
gst-launch v4l2src !
video/x-raw-yuv,width=128,height=96,format='(fourcc)'UYVY
! ffmpegcolorspace ! ffenc_h263 !
video/x-h263 ! rtph263ppay pt=96 ! udpsink
host=192.168.1.1 port=5000 sync=false
- test VP8 / Opus to RTP (no RTCP) (WebRTC and Janus)
-
gst-launch-1.0 \
audiotestsrc is-live=true wave=5 ! audioresample
! audioconvert
! audio/x-raw,channels=2,rate=16000 !
opusenc bitrate=20000 ! rtpopuspay pt=97 !
udpsink host=127.0.0.1 port=5002 \
videotestsrc !
video/x-raw,width=320,height=240,framerate=15/1
! videoscale ! videorate ! videoconvert !
timeoverlay ! vp8enc ! rtpvp8pay pt=96 !
udpsink host=127.0.0.1 port=5004
- sdp
-
v=0
c=IN IP4 127.0.0.1
m=video 5100 RTP/AVP 96
a=rtpmap:96 VP8/90000
m=audio 5102 RTP/AVP 97
a=rtpmap:97 opus/48000/2
a=fmtp:97 sprop-stereo=1
- test
VP8 / Opus to RTP (with RTCP, using rtpbin):
-
sdp_path=/tmp/toto.sdp
dst_address=225.4.3.2
video_rtp_port=5100
video_rtcp_port=$(( video_rtp_port + 1 ))
video_media_subtype="VP8"
rtp_video_payload_type=96
audio_rtp_port=$(( video_rtp_port + 2 ))
audio_rtcp_port=$(( video_rtp_port + 3 ))
audio_media_subtype="opus"
rtp_audio_payload_type=$((
rtp_video_payload_type + 1 ))
rate=48000
channels=2
# sdp
cat >$sdp_path <<EOF
v=0
c=IN IP4 $dst_address
m=video $video_rtp_port RTP/AVP
$rtp_video_payload_type
a=rtpmap:$rtp_video_payload_type
${video_media_subtype}/90000
m=audio $audio_rtp_port RTP/AVP
$rtp_audio_payload_type
a=rtpmap:$rtp_audio_payload_type
${audio_media_subtype}/${rate}/${channels}
EOF
if (( channels == 2 )) && [[
${audio_media_subtype} == "opus" ]]
then
echo
"a=fmtp:${rtp_audio_payload_type}
sprop-stereo=1" >>${sdp_path}
fi
gst-launch-1.0 -v \
rtpbin name=bin \
videotestsrc !
video/x-raw,width=320,height=240,framerate=25/1
! videoscale ! videorate ! videoconvert !
timeoverlay ! vp8enc ! rtpvp8pay
pt=$rtp_video_payload_type ! bin.send_rtp_sink_0
\
bin.send_rtp_src_0
! udpsink host=${dst_address}
port=${video_rtp_port} sync=true \
bin.send_rtcp_src_0
! udpsink host=${dst_address}
port=${video_rtcp_port} sync=false
async=false \
audiotestsrc is-live=true wave=5 !
audioconvert ! audioresample !
audio/x-raw,channels=${channels},rate=${rate}
! opusenc bitrate=64000 ! rtpopuspay
pt=$rtp_audio_payload_type ! bin.send_rtp_sink_1
\
bin.send_rtp_src_1
! udpsink host=${dst_address}
port=${audio_rtp_port} sync=true \
bin.send_rtcp_src_1
! udpsink host=${dst_address}
port=${audio_rtcp_port} sync=false
async=false
- test H.264 to RTP (no RTCP)
-
gst-launch-1.0 -v videotestsrc !
videoconvert ! x264enc ! rtph264pay
config-interval=10
pt=96
! udpsink host=234.1.2.3 port=5004
- player:
-
- toto.sdp
-
v=0
m=video 5004 RTP/AVP 96
c=IN IP4 234.1.2.3
a=rtpmap:96 H264/90000
a=fmtp:96 packetization-mode=1
ffplay -i toto.sdp
- from file (VP8, Opus) to RTP (no RTCP)
-
sdp_path=/tmp/toto.sdp
dst_address=225.4.3.2
video_rtp_port=5100
video_rtcp_port=$(( video_rtp_port + 1 ))
video_media_subtype="VP8"
rtp_video_payload_type=96
audio_rtp_port=$(( video_rtp_port + 2 ))
audio_rtcp_port=$(( video_rtp_port + 3 ))
audio_media_subtype="opus"
rtp_audio_payload_type=$((
rtp_video_payload_type + 1 ))
rate=48000
channels=2
# sdp
cat >$sdp_path <<EOF
v=0
c=IN IP4 $dst_address
m=video $video_rtp_port RTP/AVP
$rtp_video_payload_type
a=rtpmap:$rtp_video_payload_type
${video_media_subtype}/90000
m=audio $audio_rtp_port RTP/AVP
$rtp_audio_payload_type
a=rtpmap:$rtp_audio_payload_type
${audio_media_subtype}/${rate}/${channels}
EOF
if (( channels == 2 )) && [[
${audio_media_subtype} == "opus" ]]
then
echo
"a=fmtp:${rtp_audio_payload_type}
sprop-stereo=1" >>${sdp_path}
fi
gst-launch-1.0 -v \
filesrc location=/path/to/toto.webm !
matroskademux name=demux \
demux.video_0 ! queue ! rtpvp8pay
pt=$rtp_video_payload_type ! udpsink
host=${dst_address} port=${video_rtp_port}
sync=true \
demux.audio_0 ! queue ! rtpopuspay
pt=$rtp_audio_payload_type ! udpsink
host=${dst_address} port=${audio_rtp_port}
sync=true
- from file (H.264, AAC) to RTP
-
gst-launch \
filesrc
location=${input_path} ! qtdemux
name=demux \
demux.video_0 ! queue !
rtph264pay pt=$rtp_video_payload_type !
udpsink host=${dst_address}
port=${video_rtp_port} sync=true \
demux.audio_0 ! queue !
rtpmp4gpay pt=$rtp_audio_payload_type !
udpsink host=${dst_address}
port=${audio_rtp_port} sync=true
- from file (H.264, AAC) to RTP, using rtpbin
-
- from file (H.264, AAC) to RTP + RTCP, using
rtpbin
-
- RTMP
to:
-
- rtmpsink uses librtmp
- rtmp2sink (rtmp2,
Alternative
RTMP Implementation) (GStreamer>=1.18)
does not use librtmp
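- A hedged Python sketch of the same kind of pipeline pushed through rtmp2sink instead of rtmpsink (GStreamer >= 1.18); the server URL is a placeholder and the element choices are illustrative only.
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)
# FLV (H.264 + AAC) from test sources, sent over RTMP without librtmp
pipeline = Gst.parse_launch(
    "flvmux name=mux streamable=true "
    "! rtmp2sink location=rtmp://nginx-server/myapp/mystream "
    "videotestsrc is-live=true ! video/x-raw,width=360,height=288 "
    "! x264enc ! video/x-h264,profile=baseline ! h264parse ! mux. "
    "audiotestsrc is-live=true wave=5 ! audioconvert ! avenc_aac ! aacparse ! mux.")
pipeline.set_state(Gst.State.PLAYING)
bus = pipeline.get_bus()
# run until an error occurs (the test sources never send EOS)
bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                       Gst.MessageType.ERROR | Gst.MessageType.EOS)
pipeline.set_state(Gst.State.NULL)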
- nginx-rtmp
-
- Video and audio:
-
- from test to H.264, AAC
-
gst-launch-1.0 -v flvmux
name=mux ! rtmpsink
location=rtmp://nginx-server/myapp/mystream
\
videotestsrc ! video/x-raw,
width=360, height=288 ! x264enc !
video/x-h264,profile=baseline,width=360,height=288
! h264parse ! mux. \
audiotestsrc wave=5 ! audioconvert
! avenc_aac
compliance=experimental ! aacparse
! mux.
- from test to H.264 (omx), MP3
-
gst-launch-1.0 -v flvmux
name=mux ! rtmpsink
location=rtmp://nginx-server/myapp/mystream \
videotestsrc
!
video/x-raw, width=360,
height=288 ! omxh264enc !
video/x-h264,profile=baseline,width=360,height=288
! h264parse ! mux. \
audiotestsrc
wave=5
! audioconvert ! lamemp3enc !
mpegaudioparse ! mux.
- from DVB
-
gst-launch-1.0 -v dvbsrc
modulation="QAM 64"
trans-mode=8k bandwidth=8
frequency=658000000
code-rate-lp=AUTO
code-rate-hp=2/3 guard=4
hierarchy=0 ! tsdemux
program-number=806 name=demux
\
flvmux
name=mux
! rtmpsink
location=rtmp://nginx-server/myapp/mystream \
demux. !
queue ! mpegvideoparse !
decodebin ! videoscale !
video/x-raw, width=320,
height=320 ! videoconvert !
omxh264enc inline-header=true
periodicty-idr=50 ! h264parse !
mux. \
demux. !
queue ! mpegaudioparse !
decodebin ! audioconvert !
avenc_aac
compliance=experimental !
aacparse ! mux.
- video, audio with PID 0x7c:
-
gst-launch-1.0 -vvv dvbsrc
modulation="QAM 64"
trans-mode=8k bandwidth=8
frequency=658000000
code-rate-lp=AUTO
code-rate-hp=2/3 guard=4
hierarchy=0 ! tsdemux
program-number=806 name=demux
\
flvmux
name=mux
! rtmpsink
location=rtmp://192.168.0.8/myapp/mystream \
demux. !
queue ! mpegvideoparse !
decodebin ! videoscale !
video/x-raw, width=320,
height=320 ! videoconvert !
omxh264enc inline-header=true
periodicty-idr=50 ! h264parse !
mux. \
demux.audio_007c
! queue ! mpegaudioparse !
decodebin ! audioconvert !
avenc_aac
compliance=experimental !
aacparse ! mux.
- video, test audio:
-
gst-launch-1.0 -vvv dvbsrc
modulation="QAM 64"
trans-mode=8k bandwidth=8
frequency=658000000
code-rate-lp=AUTO
code-rate-hp=2/3 guard=4
hierarchy=0 ! tsdemux
program-number=806 name=demux
\
flvmux
name=mux
! rtmpsink
location=rtmp://nginx-server/myapp/mystream \
demux. !
queue ! mpegvideoparse !
decodebin ! videoscale !
video/x-raw, width=320,
height=320 ! videoconvert !
omxh264enc inline-header=true
periodicty-idr=50 ! h264parse !
mux. \
audiotestsrc
wave=5
! audioconvert ! lamemp3enc !
mpegaudioparse ! mux.
- video, audio forced to 44100 Hz (MP3 at 48000 Hz is not supported by FLV; AAC at 48000 Hz is supported, though) (queue max-size-time must be increased from the default 1000000000 ns [1 s] to ...)
-
gst-launch-1.0 -vvv dvbsrc
modulation="QAM 64"
trans-mode=8k bandwidth=8
frequency=658000000
code-rate-lp=AUTO
code-rate-hp=2/3 guard=4
hierarchy=0 ! tsdemux
program-number=806 name=demux
\
flvmux
name=mux
! rtmpsink
location=rtmp://nginx-server/myapp/mystream \
demux. !
queue max-size-time=4000000000 !
mpegvideoparse ! decodebin !
videoscale ! video/x-raw,
width=320, height=320 !
videoconvert ! omxh264enc
inline-header=true
periodicty-idr=50 ! h264parse !
mux. \
demux.audio_007c
!
queue max-size-time=4000000000 !
mpegaudioparse ! decodebin !
audioconvert ! audioresample
! audio/x-raw,rate=44100 !
lamemp3enc ! mpegaudioparse
! mux.
- video, audio AAC
at 48000Hz:
-
gst-launch-1.0 -vvv dvbsrc
modulation="QAM 64"
trans-mode=8k bandwidth=8
frequency=658000000
code-rate-lp=AUTO
code-rate-hp=2/3 guard=4
hierarchy=0 ! tsdemux
program-number=806 name=demux
\
flvmux
name=mux
! rtmpsink
location=rtmp://nginx-server/myapp/mystream \
demux. !
queue ! mpegvideoparse !
decodebin ! videoscale !
video/x-raw, width=32, height=32
! videoconvert ! omxh264enc
inline-header=true
periodicty-idr=50 ! h264parse !
mux. \
demux.audio_007c
!
queue ! mpegaudioparse !
decodebin ! audioconvert ! audioresample
! audio/x-raw,rate=48000 !
avenc_aac
compliance=experimental !
aacparse ! mux.
- Problemes
/ Problems
-
- queues
-
- when sending to nginx-rtmp-module, with nginx logging at level info ( error_log /var/log/nginx/error.log info ), errors such as the following may appear:
  RTMP in chunk stream too big: xx >= 32
  too big message ...
  chunk stream too big
-
- RTMPS (e.g. Facebook)
- Wowza
-
- Live Streaming
from RaspberryPi using GStreamer - Help
please?
-
- Incoming security / Flash Version
String:
-
Wirecast/|FME/|FMLE/|Wowza
GoCoder*|Gstreamer/|Gstreamer/*|Gstreamer*
- How
to
secure publishing from an RTMP encoder
that does not support authentication
(ModuleSecureURLParams)
- Streaming
to
a Flash Media Server using the rtmpsink
element
-
- working / not working
-
- working
-
- omxh264enc
! video/x-h264,profile=high
- omxh264enc !
video/x-h264,profile=baseline
- x264enc !
video/x-h264,profile=baseline
- not working
-
- x264enc !
video/x-h264,profile=high
- only video:
-
gst-launch-1.0 -v -e flvmux
name=mux ! rtmpsink
location=rtmp://wowza_server/application/stream
\
videotestsrc ! video/x-raw,
framerate=25/1, width=640, height=360
! x264enc
bitrate=512 ! video/x-h264,profile=baseline
! h264parse ! mux.
gst-launch-1.0 -v dvbsrc
modulation="QAM
64" trans-mode=8k bandwidth=8
frequency=658000000
code-rate-lp=AUTO code-rate-hp=2/3
guard=4 hierarchy=0 ! tsdemux
program-number=805 name=demux
\
flvmux
name=mux ! rtmpsink
location=rtmp://wowza_server/application/stream \
demux. ! queue
! mpegvideoparse ! decodebin !
videoscale ! video/x-raw, width=360,
height=288 ! videoconvert ! omxh264enc
inline-header=true periodicty-idr=1
! video/x-h264,profile=baseline
! h264parse ! mux.
gst-launch-1.0 -v dvbsrc
modulation="QAM
64" trans-mode=8k bandwidth=8
frequency=658000000
code-rate-lp=AUTO code-rate-hp=2/3
guard=4 hierarchy=0 ! tsdemux
program-number=805 name=demux
\
flvmux
name=mux ! rtmpsink
location=rtmp://wowza-server/application/stream
\
demux. ! queue
! mpegvideoparse ! decodebin !
videoscale ! video/x-raw, width=360,
height=288 ! videoconvert !
omxh264enc inline-header=true
periodicty-idr=1 !
video/x-h264,profile=high !
h264parse ! mux.
gst-launch-1.0 -v flvmux
name=mux ! rtmpsink
location=rtmp://wowza-server/application/stream \
videotestsrc !
video/x-raw, width=360, height=288 !
omxh264enc !
video/x-h264,profile=high !
h264parse ! mux.
- video and audio:
-
gst-launch-1.0 -v -e flvmux
name=mux ! rtmpsink
location=rtmp://wowza-server/application/stream
videotestsrc ! video/x-raw,
framerate=24/1, width=1024, height=436
! x264enc bitrate=800 !
video/x-h264,profile=baseline !
h264parse ! mux. audiotestsrc wave=5 !
audioconvert ! lamemp3enc !
mpegaudioparse ! mux.
gst-launch-1.0 -v flvmux
name=mux ! rtmpsink
location=rtmp://wowza-server/application/stream
videotestsrc ! video/x-raw, width=360,
height=288 ! x264enc
!
video/x-h264,profile=baseline !
h264parse ! mux. audiotestsrc wave=5 !
audioconvert ! lamemp3enc !
mpegaudioparse ! mux.
gst-launch-1.0
-v
flvmux name=mux ! rtmpsink
location=rtmp://wowza-server/application/stream
videotestsrc ! video/x-raw, width=360,
height=288 ! omxh264enc !
video/x-h264,profile=baseline !
h264parse ! mux. audiotestsrc wave=5 !
audioconvert ! lamemp3enc !
mpegaudioparse ! mux.
gst-launch-1.0 -v flvmux
name=mux ! rtmpsink
location=rtmp://wowza-server/application/stream
videotestsrc ! video/x-raw, width=360,
height=288 ! omxh264enc !
video/x-h264,profile=baseline !
h264parse ! mux.
- Flash Media Server
-
gst-launch filesrc location=videofile !
decodebin name=decoder \
decoder. ! queue ! audioconvert ! audioresample !
osssink \
decoder. ! ffmpegcolorspace ! xvimagesink
- gstreamer
dvb streaming
- Graphical editor
- gst-editor (only for gstreamer 0.8)
|
Desenvolupament / Development
|
- Documentació / Documentation
- Llenguatges / Languages
- C
applications (using GObject
and GLib)
-
- Python applications
(using PyGObject)
-
- Configuració / Setup
- Python with PyGObject
- GStreamer
- from packages
- or compiled and installed to /usr/local/bin
- PyGObject
API
- Tutorials and examples
-
- Problemes / Problems
-
- CTRL-C not working
- Solució / Solution
if __name__ ==
'__main__':
# to be able to use
CTRL-C to quit
import signal
signal.signal(signal.SIGINT,
signal.SIG_DFL)
...
ValueError: Namespace Gst not available
-
- Solució / Solution
-
- if you compiled gstreamer from source,
-
- check that the typelib files were installed ( lib64girepository-devel must be installed before compilation):
-
- /usr/local/lib/girepository-1.0/Gst*.typelib
- set environment variable:
-
export
GI_TYPELIB_PATH=/usr/local/lib/girepository-1.0
- GES
- ...
- ...
- Resum / Summary
task
|
options
|
steps
|
C
|
Python (base.py)
|
|
|
|
|
|
methods
|
|
headers
|
|
|
#include
<gst/gst.h>
|
#!/usr/bin/env
python3
import sys
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GObject, GLib |
|
|
Initialize Gstreamer
|
|
|
/* init */
gst_init (&argc, &argv);
|
class
Player(object): |
def
__init__(self): |
# init GStreamer
Gst.init(None)
|
main GLib loop
(optional, but needed when using add_signal_watch)
|
|
|
GMainLoop
*main_loop;
main_loop = g_main_loop_new (NULL, FALSE);
|
self.loop =
GLib.MainLoop.new(None,
False) |
Arguments
|
|
Usage |
/* check args */
if (argc != 2) {
g_print ("Usage: %s <filename>\n",
argv[0]);
return -1;
}
|
# check input
arguments
if len(sys.argv) != 2:
print("Usage: {0:s}
<filename>".format(sys.argv[0]))
sys.exit(1)
|
|
Arguments
|
|
if Gst.Uri.is_valid(sys.argv[1]):
uri = sys.argv[1]
else:
uri =
Gst.filename_to_uri(sys.argv[1])
print("uri: {0:s}".format(uri)) |
Build
the pipeline
|
option 1:
Build the pipeline by parsing (basic
tutorial
1) |
|
/* create a new
pipeline with all the elements */
GstElement
*pipeline;
pipeline = gst_parse_launch
("playbin
uri=https://www.freedesktop.org/software/gstreamer-sdk/data/media/sintel_trailer-480p.webm")
|
# create a new
pipeline with all the elements (remove double quotes
around caps, if any)
pipeline = Gst.parse_launch("playbin
uri=https://www.freedesktop.org/software/gstreamer-sdk/data/media/sintel_trailer-480p.webm")
|
option
2:
Build the pipeline from elements
(basic
tutorial 2)
|
Create the elements |
/* create the
elements */
GstElement
*pipeline, *source, *sink;
source = gst_element_factory_make ("videotestsrc",
"source");
sink = gst_element_factory_make ("autovideosink",
"sink"); |
# create the
elements
source = Gst.ElementFactory.make("videotestsrc",
"source")
sink = Gst.ElementFactory.make("autovideosink",
"sink") |
(or create the factory
and then the element)
(Creating
a
GstElement, basic tutorial 6)
|
/* create factory
and element */
factory = gst_element_factory_find ("fakesrc");
element = gst_element_factory_create (factory,
"source");
|
# create factory
and element
source_factory = Gst.ElementFactory.find("videotestsrc")
sink_factory =
Gst.ElementFactory.find("autovideosink")
source = source_factory.create("source")
sink = sink_factory.create("sink")
|
Create the empty pipeline |
/* create an empty
pipeline */
pipeline = gst_pipeline_new ("test-pipeline"); |
# create an empty
pipeline
pipeline = Gst.Pipeline.new("test-pipeline")
|
Build the pipeline: add
and link elements
|
/* add elements to
the pipeline */
gst_bin_add_many
(GST_BIN (pipeline), source, sink, NULL);
if (gst_element_link (source, sink) != TRUE) {
g_printerr
("Elements could not be linked.\n");
gst_object_unref (pipeline);
return -1;
}
|
# add elements to
the pipeline
pipeline.add(source)
pipeline.add(sink)
if not source.link(sink):
print("ERROR: Could not link source
to sink")
sys.exit(1)
|
Modify properties |
/* set property of
an element */
g_object_set
(source, "pattern", 0, NULL);
|
# set property of
an element
source.set_property("pattern",
0)
|
|
Connect element signal to
a callback
|
|
def
on_have_type(self, element, probability, caps,
user_data): |
|
|
|
|
|
# connect signal to
a callback
typefind.connect("have-type",
on_have_type,
None) |
option
3:
Dynamically connect the elements in the pipeline
(basic
tutorial 3)
|
Callback |
pad_added_handler(...)
|
def
on_pad_added(self, ...): |
|
Connect signal to
callback |
g_signal_connect
(data.source, "pad-added", G_CALLBACK
(pad_added_handler), &data); |
|
# connect signal to
a callback source.connect("pad-added",
self.on_pad_added) |
Start playing
|
|
|
/* set the pipeline
to playing state */
gst_element_set_state (pipeline, GST_STATE_PLAYING);
|
# set the pipeline
to playing state
pipeline.set_state(Gst.State.PLAYING)
|
Main
loop
(two
ways
to use a bus)
|
|
Get bus associated to
pipeline |
/* get the bus from
the pipeline */
GstBus
*bus;
bus = gst_element_get_bus (pipeline); |
# get the bus from
the pipeline
bus = pipeline.get_bus() |
option 1:
Wait until error or EOS
(basic
tutorial 1)
|
|
/* wait until error
or end of stream */
GstMessage
*msg;
msg = gst_bus_timed_pop_filtered
(bus, GST_CLOCK_TIME_NONE, GST_MESSAGE_ERROR | GST_MESSAGE_EOS);
|
# wait until error
or end of stream
terminate = False
while True:
try:
msg = bus.timed_pop_filtered(0.5
*
Gst.SECOND, Gst.MessageType.ERROR |
Gst.MessageType.EOS)
if msg:
terminate
= True
except KeyboardInterrupt:
terminate =
True
if terminate:
break
|
option
2:
GMain loop with callback
(basic
tutorial 12)
|
start loop |
/* start main loop
*/
g_main_loop_run
(main_loop); |
# start main loop
self.loop.run() |
data structure (to be
available from callback)
|
typedef struct
_CustomData {
gboolean is_live;
GstElement *pipeline;
GMainLoop *loop;
} CustomData;
CustomData data;
data.loop = main_loop;
data.pipeline = pipeline; |
self.loop
self.pipeline
|
option
a:
one single callback for all messages
|
static void
cb_message (GstBus *bus, GstMessage *msg, CustomData
*data) {
const GstStructure *structure;
structure = gst_message_get_structure (msg);
g_print ("Message name: %s\n",
gst_structure_get_name (structure) );
switch (GST_MESSAGE_TYPE (msg)) {
...
} |
def on_message(self, bus, msg,
user_data):
def on_sync_message(self,
bus, msg, user_data):
|
|
/* message handler
*/
gst_bus_add_signal_watch
(bus);
g_signal_connect (bus, "message", G_CALLBACK
(cb_message), &data); |
|
# general message
handler
bus.add_watch(GLib.PRIORITY_DEFAULT,
self.on_message,
None)
# another option:
# general message handler
bus.add_signal_watch()
bus.connect("message", self.on_message, None)
# sync message handler
bus.enable_sync_message_emission()
bus.connect('sync-message', self.on_sync_message,
None)
|
option
b:
each message has its callback
|
|
def on_error( self,
bus, msg, user_data ):
def on_eos( self, bus, msg,
user_data ):
def on_state_changed( self, bus,
msg, user_data ):
def on_application_message( self, bus,
msg, user_data ):
... |
|
|
|
# individual
message handler
bus.add_signal_watch()
bus.connect("message::error",
self.on_error)
bus.connect("message::eos", self.on_eos)
bus.connect("message::state-changed",
self.on_state_changed)
bus.connect("message::application",
self.on_application_message)
...
|
quit loop
|
|
self.loop.quit() |
Free resources
|
|
|
/* free resources
*/
if (msg != NULL)
gst_message_unref (msg);
gst_object_unref (bus);
gst_element_set_state (pipeline, GST_STATE_NULL);
gst_object_unref (pipeline);
return 0;
|
# free resources
pipeline.set_state(Gst.State.NULL)
|
|
|
|
|
if __name__ ==
'__main__':
# to be able to use CTRL-C to quit
import signal
signal.signal(signal.SIGINT,
signal.SIG_DFL)
p = Player() |
|
|
- Application
Development
Manual
(pdf,
ps, html)
- About GStreamer
- What is GStreamer?
- Design principles
- Foundations
- Communication between application / bus / pipeline:
- buffers:
streaming data between elements (downstream
(sources->sinks)) (buffering)
- events:
between elements or from the application to
elements (upstream (sinks->sources),
downstream (sources->sinks))
- messages:
posted by elements on the pipeline's message bus (message types)
- queries:
allow applications to request information such
as duration or current playback position from
the pipeline (upstream, downstream) (querying)
- Building
an
Application
- Initializing
GStreamer
-
C
|
Python |
#include
<gst/gst.h> |
import
gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GObject,
GLib |
gst_init
(&argc, &argv); |
Gst.init(None)
|
- Elements
- Creating elements
|
C
|
Python |
option 1:
factory and element
|
factory
= gst_element_factory_find("fakesrc")
element =
gst_element_factory_create(factory,
"source") |
factory
= Gst.ElementFactory.find("fakesrc")
source = factory.create("source") |
option 2:
element
|
element
= gst_element_factory_make("fakesrc",
"source") |
source
= Gst.ElementFactory.make("fakesrc",
"source") |
- Accessing elements
pipeline = Gst.parse_launch("... ! queue
name=cua_1 ... queue name=cua_2 ...")
cua_1 = pipeline.get_by_name("cua_1")
- Properties
p1 = cua_1.get_property('propietat_1')
cua_1.set_property(...)
- Signals
- signals defined by the element:
def on_queue_running(queue, udata):
    current_level_buffers = queue.get_property('current-level-buffers')
    print("[on_cua_running] [{}] current-level-buffers: {}".format(udata, current_level_buffers))
cua_1 = pipeline.get_by_name('cua_1')
cua_1.connect('running', self.on_queue_running, "cua_1")
cua_2 = pipeline.get_by_name('cua_2')
cua_2.connect('running', self.on_queue_running, "cua_2")
- signal on a modified (set_property) property of an element:
myelement.connect('notify::name_of_the_property', on_mycallback)
- Element States
name
|
C
|
Python
|
description
|
|
gst_element_set_state
(pipeline,
GST_STATE_PLAYING); |
pipeline.set_state(Gst.State.PLAYING) |
|
NULL
|
GST_STATE_NULL
|
Gst.State.NULL
|
the NULL
state or initial state of an element.
Transition to it will free all resources.
|
READY
|
GST_STATE_READY |
Gst.State.READY |
the element
is ready to go to PAUSED
|
PAUSED
|
GST_STATE_PAUSED |
Gst.State.PAUSED |
the element
is PAUSED, it is ready to accept and process
data. Sink elements however only accept one
buffer and then block
|
PLAYING
|
GST_STATE_PLAYING |
Gst.State.PLAYING |
the element
is PLAYING, the clock is running and the
data is flowing
|
- Bins
-
|
C
|
Python
|
Bin
|
gst_bin_new() |
Gst.Bin.new() |
gst_bin_add()
|
|
gst_bin_remove()
|
|
gst_bin_get_by_name()
|
|
gst_bin_get_by_interface()
|
|
gst_bin_iterate_elements()
|
|
Pipeline
(special top-level type of bin)
|
gst_pipeline_new() |
pipeline
= Gst.Pipeline.new("test-pipeline") |
|
pipeline.add(...)
|
|
... |
- Bus
-
|
C
|
Python |
get all
messages
|
bus = gst_pipeline_get_bus
(GST_PIPELINE (pipeline));
bus_watch_id = gst_bus_add_watch
(bus, my_bus_callback, NULL);
static gboolean
my_bus_callback(GstBus *bus,
GstMessage *msg, gpointer data)
...
|
bus = self.pipeline.get_bus()
bus.add_watch(GLib.PRIORITY_DEFAULT, self.on_message, None)

def on_message(bus, msg, user_data):
    if (msg.type == ...):
        ...
    elif (msg.type == ...):
        ...
    else:
        # Unhandled message
        pass
|
get selected
messages
|
bus =
gst_pipeline_get_bus (GST_PIPELINE
(pipeline);
gst_bus_add_signal_watch (bus);
g_signal_connect
(bus, "message::error", G_CALLBACK
(cb_message_error), NULL);
g_signal_connect (bus, "message::eos",
G_CALLBACK (cb_message_eos), NULL); |
bus =
self.pipeline.get_bus()
bus.add_signal_watch()
bus.connect("message::error",
self.on_error)
bus.connect("message::eos", self.on_eos) |
(needed loop)
|
loop =
g_main_loop_new (NULL, FALSE);
g_main_loop_run (loop);
...
g_main_loop_unref (loop); |
loop =
GLib.MainLoop.new(None,
False)
loop.run()
...
loop.quit() |
- Tipus de
missatges / Message
types
-
|
|
C
|
Python |
|
|
|
def on_message(bus, msg, user_data):
    msg_structure_name = msg.get_structure().get_name()
def on_sync_message(bus, msg, user_data):
    ... |
message type |
GstBus
Signals
to be specified in connect(...) |
|
Gst.MessageType
|
|
message |
|
|
Error
(fatal problem),
warning (non-fatal problem),
information (not problem)
|
message::error
|
GST_MESSAGE_ERROR
gst_message_parse_error()
_parse_warning ()
_parse_info () |
if (msg.type == Gst.MessageType.ERROR):
    err, debug = msg.parse_error()
    # msg.parse_warning()
    # msg.parse_info()
    print("Error: {0:s}".format(err.message))
    self.loop.quit()
|
End-of-stream
|
message::eos
|
GST_MESSAGE_EOS
|
if (msg.type == Gst.MessageType.EOS):
    print("EOS")
    self.loop.quit() |
Tags
|
|
GST_MESSAGE_TAG
gst_message_parse_tag() |
if (msg.type == Gst.MessageType.TAG):
    tags = msg.parse_tag()
    tags.foreach(self.print_one_tag, None)
def print_one_tag(self, list, tag, user_data):
    res, val = Gst.TagList.copy_value(list, tag)
    print("%s: %s" % (Gst.tag_get_nick(tag), val))
|
State-changes
|
|
GST_MESSAGE_STATE_CHANGE
gst_message_parse_state_changed ()
|
if (msg.type == Gst.MessageType.STATE_CHANGED):
    old, new, pending = msg.parse_state_changed()
    print("State changed: {0:s} -> {1:s}".format(Gst.Element.state_get_name(old), Gst.Element.state_get_name(new)))
|
message::async-done
|
GST_MESSAGE_ASYNC_DONE |
if
(msg.type ==
Gst.MessageType.ASYNC_DONE):
...
|
Buffering
|
message::buffering
|
GST_MESSAGE_BUFFERING
gst_message_parse_buffering (message,
&percent); |
if (msg.type == Gst.MessageType.BUFFERING):
    percent = msg.parse_buffering() |
|
sync-message |
|
|
Element
(specific to element; e.g. queue
signals)
|
sync-message::element
|
|
if
(msg.type ==
Gst.MessageType.ELEMENT):
... |
Application-specific
|
|
gst_message_get_structure() |
|
Threads
|
sync-message::stream-status
|
GST_MESSAGE_STREAM_STATUS
gst_message_parse_stream_status
(message, &type, &owner); |
if (msg.type == Gst.MessageType.STREAM_STATUS):
    type, owner = msg.parse_stream_status()
|
- Pads
and
capabilities
- Pads
availability
|
examples
|
C
|
Python |
|
|
pad =
gst_element_get_static_pad(...)
pad =
gst_element_get_compatible_pad (mux,
tolink_pad, NULL);
pad =
gst_element_get_request_pad (tee,
"src%d");
name = gst_pad_get_name (pad);
|
|
always
|
|
|
|
dynamic
(sometimes) pads
|
|
/* listen for newly created pads
*/
g_signal_connect (demux, "pad-added",
G_CALLBACK (cb_new_pad), NULL);
static void cb_new_pad(...)
gst_element_set_state ()
gst_element_sync_state_with_parent
()
|
# listen for newly created pads
self.demux.connect("pad-added",
self.on_new_pad)
def on_new_pad(self):
|
request pads
(basic
tutorial 3)
|
- multiplexer
- aggregator
- tee
|
|
|
- Capabilities
of
a pad
- GstCaps
- non-negotiated pad: one or more
GstStructure
- negotiated pad: only one GstStructure
(with fixed values)
- ...
- possible caps: obtained with gst-inspect
- allowed caps: subset of possible
capabilities, depending on the possible caps
of the peer pad
- negotiated caps:
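- A short Python sketch (element and pad names are only illustrative) for querying each kind of caps:
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)
element = Gst.ElementFactory.make("videotestsrc", "src")
pad = element.get_static_pad("src")
print(pad.query_caps(None))     # possible caps (what gst-inspect shows)
print(pad.get_allowed_caps())   # allowed caps (None while the pad is unlinked)
print(pad.get_current_caps())   # negotiated caps (None until negotiation has happened)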
- Types of caps:
-
type
|
Gst.Structure
|
values
|
empty
|
0
|
|
ANY
|
|
|
simple
|
1
|
variable
field types
|
fixed
|
1
|
no
variable field types
|
-
|
C
|
Python |
check
type
|
gst_caps_is_fixed
(caps) |
caps.is_fixed()
|
get
structure
|
str
= gst_caps_get_structure (caps, 0); |
str
= caps.get_structure(0)
|
get value
|
gst_structure_get_int
(str,
"width", &width) |
width
= str.get_int("width") |
creation
of simple caps
|
caps
= gst_caps_new_simple ("video/x-raw",
"format", G_TYPE_STRING, "I420",
"width", G_TYPE_INT, 384,
"height", G_TYPE_INT, 288,
"framerate", GST_TYPE_FRACTION, 25, 1,
NULL); |
caps
= Gst.Caps.new_empty_simple("video/x-raw")
caps.set_value("format",
"I420") caps.set_value("width",
384) caps.set_value("height",
288) caps.set_value("framerate",
...)
|
creation
of full caps
|
caps = gst_caps_new_full (
gst_structure_new ("video/x-raw",
"width",
G_TYPE_INT, 384,
"height",
G_TYPE_INT, 288,
"framerate",
GST_TYPE_FRACTION, 25, 1,
NULL),
gst_structure_new ("video/x-bayer",
"width",
G_TYPE_INT, 384,
"height",
G_TYPE_INT, 288,
"framerate",
GST_TYPE_FRACTION, 25, 1,
NULL),
NULL);
|
(unavailable)
|
filtering
using caps (internally creates a capsfilter)
|
link_ok
= gst_element_link_filtered (element1,
element2, caps); |
link_ok
= element1.link_filtered(element2,
caps) |
- Ghost pads
- "A ghost pad is a pad from some element in
the bin that can be accessed directly from
the bin as well."
-
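- A minimal Python sketch (element choices are only illustrative) that wraps two linked elements in a bin and exposes the first element's sink pad as a ghost pad of the bin:
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)
mybin = Gst.Bin.new("my-bin")
queue = Gst.ElementFactory.make("queue", "q")
conv = Gst.ElementFactory.make("videoconvert", "conv")
mybin.add(queue)
mybin.add(conv)
queue.link(conv)
# ghost pad: the queue's sink pad becomes directly accessible on the bin
ghost = Gst.GhostPad.new("sink", queue.get_static_pad("sink"))
ghost.set_active(True)
mybin.add_pad(ghost)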
- Buffers
and
events
- Buffers
- Events
- "Events are control particles that are sent
both upstream (right to left) and downstream
(left to right) in a pipeline along with
buffers."
- Examples: seeking, flushes, end-of-stream
notifications, ...
-
|
C
|
Python |
create
|
event
= gst_event_new_seek (1.0,
GST_FORMAT_TIME,
GST_SEEK_FLAG_NONE,
GST_SEEK_METHOD_SET,
time_ns,
GST_SEEK_TYPE_NONE,
G_GUINT64_CONSTANT (0)); |
event
= Gst.Event.new_seek(1.0,
Gst.Format.TIME,
Gst.SeekFlags.NONE,
Gst.SeekType.SET,
time_ns,
Gst.SeekType.NONE,
0) |
send
|
gst_element_send_event
(element,
event); |
element.send_event(event)
|
- Your
first
application
- Advanced
GStreamer
Concepts
- Position
tracking
and seeking (basic_tutorial_4)
-
|
C
|
Python |
Querying
(queries)
|
static gboolean
cb_print_position(GstElement
*pipeline)
gst_element_query_position
(pipeline, GST_FORMAT_TIME,
&pos)
gst_element_query_duration
(pipeline, GST_FORMAT_TIME,
&len)
g_timeout_add (200,
(GSourceFunc) cb_print_position,
pipeline);
g_timeout_add_seconds (1,
(GSourceFunc) cb_print_position,
pipeline);
g_main_loop_run (loop);
|
from helper import format_ns
class Player(object):
def
on_message(self, bus, msg, data):
if (msg.type ==
Gst.MessageType.ASYNC_DONE):
running_time = msg.parse_async_done()
# query_duration
if self.duration ==
Gst.CLOCK_TIME_NONE:
ret, duration =
self.pipeline.query_duration(Gst.Format.TIME)
if ret:
self.duration = duration
print("ret: {}, duration:
{}".format(ret, format_ns(duration)))
def
cb_print_position(self):
ret, position =
self.pipeline.query_position(Gst.Format.TIME)
if ret:
print("ret: {}, position:
{}".format(ret, format_ns(position)))
return True
def __init__(self):
# media duration (ns)
self.duration = Gst.CLOCK_TIME_NONE
self.loop = GLib.MainLoop.new(None,
False)
GLib.timeout_add(200,
self.cb_print_position)
bus.add_signal_watch()
bus.connect("message",
self.on_message, None)
self.loop.run()
def cb_print_position(pipeline):
ret, pos = pipeline.query_position(Gst.Format.TIME)
ret, len = pipeline.query_duration(Gst.Format.TIME)
print("{0:f}/{1:f}".format(pos,
len))
GLib.timeout_add(200,
cb_print_position,
pipeline)
- the function is called repeatedly
until it returns False
GLib.timeout_add_seconds(1,
cb_print_position,
pipeline)
loop.run()
|
Events:
seeking (and more)
|
gst_element_seek (pipeline, 1.0,
GST_FORMAT_TIME,
GST_SEEK_FLAG_FLUSH,
GST_SEEK_TYPE_SET,
time_nanoseconds,
GST_SEEK_TYPE_NONE,
GST_CLOCK_TIME_NONE)
gst_element_seek_simple (...)
|
|
- Metadata
- Types
- stream tags: non-technical information
(author, title, album ...)
- stream-info: technical information (GstPad,
GstCaps)
-
- Interfaces
- GstColorBalance
- GstVideoOverlay
- ...
- Clocks
and
synchronization in GStreamer
- Buffering
-
|
|
|
Stream
buffering |
"Buffering up
to a specific amount of data, in memory,
before starting playback so that network
fluctuations are minimized" |
buffer
element: queue2
- low watermark
- high watermark
|
Download
buffering |
"Download of
the network file to a local disk with fast
seeking in the downloaded data. This is
similar to the quicktime/youtube players." |
buffering.py |
Timeshift
buffering |
"Caching of
(semi)-live streams to a local, on disk,
ringbuffer with seeking in the cached area.
This is similar to tivo-like timeshifting." |
|
-
|
C
|
Python
|
|
|
buffering.py
(download buffering)
|
messages
|
gst_message_parse_buffering
(message,
&percent); |
percent
= message.parse_buffering() |
queries
|
query =
gst_query_new_buffering (GST_FORMAT_TIME);
gst_element_query (pipeline, query)
gst_query_parse_buffering_percent (query,
&busy, &percent);
gst_query_parse_buffering_range (query,
NULL, NULL, NULL, &estimated_total); |
query =
Gst.Query.new_buffering(Gst.Format.TIME)
pipeline.query(query)
busy, percent = query.parse_buffering_percent()
format, start, stop, estimated_total =
query.parse_buffering_range() |
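- A hedged Python sketch of the usual stream-buffering pattern (as in buffering.py): pause the pipeline while the buffer fills and resume at 100%; only valid for non-live streams (the URI is the one used in the tutorials below):
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GLib

Gst.init(None)
pipeline = Gst.parse_launch("playbin uri=https://www.freedesktop.org/software/gstreamer-sdk/data/media/sintel_trailer-480p.webm")
loop = GLib.MainLoop.new(None, False)

def on_buffering(bus, msg, user_data):
    percent = msg.parse_buffering()
    print("Buffering {0:d}%".format(percent))
    # pause while buffering, resume when the buffer is full
    if percent < 100:
        pipeline.set_state(Gst.State.PAUSED)
    else:
        pipeline.set_state(Gst.State.PLAYING)

bus = pipeline.get_bus()
bus.add_signal_watch()
bus.connect("message::buffering", on_buffering, None)
pipeline.set_state(Gst.State.PLAYING)
loop.run()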
- Dynamic
Controllable
Parameters
-
|
C
|
Python |
|
GstControlSource
|
Gst.ControlSource
|
create
|
csource
= gst_interpolation_control_source_new ();
g_object_set (csource, "mode",
GST_INTERPOLATION_MODE_LINEAR, NULL); |
|
attach
to the gobject property
|
gst_object_add_control_binding
(object,
gst_direct_control_binding_new (object,
"prop1", csource)); |
|
|
GstTimedValueControlSource
*tv_csource
= (GstTimedValueControlSource *)csource;
gst_timed_value_control_source_set
(tv_csource, 0 * GST_SECOND, 0.0);
gst_timed_value_control_source_set
(tv_csource, 1 * GST_SECOND, 1.0); |
|
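- A hedged Python sketch of the same idea with PyGObject (GstController namespace); attaching the control source to the "volume" property of a volume element is only an example:
import gi
gi.require_version('Gst', '1.0')
gi.require_version('GstController', '1.0')
from gi.repository import Gst, GstController

Gst.init(None)
vol = Gst.ElementFactory.make("volume", "vol")
# control source that interpolates linearly between timed values
csource = GstController.InterpolationControlSource.new()
csource.set_property("mode", GstController.InterpolationMode.LINEAR)
# bind the control source to the element property
vol.add_control_binding(GstController.DirectControlBinding.new(vol, "volume", csource))
# timed values are in the 0..1 range and are mapped onto the property's range
csource.set(0 * Gst.SECOND, 0.0)
csource.set(1 * Gst.SECOND, 1.0)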
- Threads
- Scheduling in GStreamer
- pad can:
- push from upstream
- pull to downstream
- Configuring threads in GStreamer
- message:
STREAM_STATUS
- GST_STREAM_STATUS_TYPE_CREATE: when a new
thread is about to be created -> you can
configure a GstTaskPool in the GstTask
- when a thread is entered or left -> you
can configure thread priorities
- when a thread starts, pauses and stops
-> you can visualize the status of
streaming in a GUI application
- Boost priority of a thread
- When would you force a thread?
- Autoplugging
- Playback
components
- Media types as a way to identify streams
- Media stream type detection
- Dynamically autoplugging a pipeline
- Pipeline
manipulation
- Using probes
- Data probes
- Play a section of a media file
- Manually adding or removing data from/to a
pipeline
- Forcing a format
- Dynamically changing the pipeline
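- A minimal Python sketch of a data probe (element names are only illustrative): attach a buffer probe to the src pad of an identity element and print each buffer's timestamp:
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)
pipeline = Gst.parse_launch("videotestsrc num-buffers=100 ! identity name=ident ! fakesink")
pad = pipeline.get_by_name("ident").get_static_pad("src")

def probe_cb(pad, info, user_data):
    buf = info.get_buffer()
    print("buffer pts: {}".format(buf.pts))
    return Gst.PadProbeReturn.OK   # let the data flow on

pad.add_probe(Gst.PadProbeType.BUFFER, probe_cb, None)
pipeline.set_state(Gst.State.PLAYING)
bus = pipeline.get_bus()
bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                       Gst.MessageType.ERROR | Gst.MessageType.EOS)
pipeline.set_state(Gst.State.NULL)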
- Higher-level interfaces for GStreamer applications
- Appendices
- ...
- Additional documentation
- GStreamer design documents
- Tutorials (source code in gst-docs/examples/tutorials)
(playback
tutorials are based on playbin)
|
C
|
Python |
Table
of
Concepts
|
Basic
tutorials / Playback
tutorials
|
Basic
tutorials (lang=python) |
francesc.pinyol.m
/ python-gst-examples
/ tutorials
|
gkralik / python-gst-tutorial
|
GstreamerCodeSnippets
Python/pygst-sdk-tutorials |
GstreamerCodeSnippets
Others/0.10/Python/pygst-sdk-tutorials |
GstreamerCodeSnippets
pygst-tutorial
(class GTK_Main) |
|
Basic
tutorial
1: Hello world!
|
Basic
tutorial 1: Hello world! |
basic-tutorial-1.py
|
basic-tutorial-1.py |
basic-tutorial-1.py |
|
|
Bus
Elements
Links
Pipelines
|
Basic
tutorial
2: GStreamer concepts
|
Basic
tutorial 2: GStreamer concepts |
basic-tutorial-2.py
|
basic-tutorial-2.py (buggy)
basic-tutorial-2-ex-vertigo.py
|
basic-tutorial-2.py |
|
|
Pads
Signals
States
|
Basic
tutorial
3: Dynamic pipelines
- CustomData data
- data.source
- data.convert
- data.sink
- g_signal_connect (data.source, "pad-added",
G_CALLBACK
(pad_added_handler), &data);
|
- |
basic-tutorial-3.py
|
basic-tutorial-3-ex-video.py
basic-tutorial-3.py
- class Player
- def __init__(self)
- self.source
- self.convert
- self.sink
- self.source.connect("pad-added",
self.on_pad_added)
- def on_pad_added(self,
src, new_pad)
|
basic-tutorial-3.py
- def pad_added_handler(src, new_pad, data)
- data["source"]
- data["convert"]
- data["sink"]
- data["source"].connect("pad-added",
pad_added_handler, data)
|
|
|
|
(you can jump to Playback
tutorials)
|
- |
|
|
|
|
|
Queries
Seeks
|
Basic
tutorial
4: Time management
|
- |
basic-tutorial-4.py |
basic-tutorial-4.py
- class Player
- def __init__
- def play
- bus = self.playbin.get_bus()
- msg = bus.timed_pop_filtered(100 *
Gst.MSECOND, (Gst.MessageType.STATE_CHANGED
| Gst.MessageType.ERROR |
Gst.MessageType.EOS |
Gst.MessageType.DURATION_CHANGED))
- if msg: self.handle_message(msg)
- self.playbin.query_position
- self.playbin.query_duration
- self.playbin.seek_simple
- def handle_message
|
basic-tutorial-4.py |
|
|
GUI
|
Basic
tutorial
5: GUI toolkit integration
- gst_bus_add_signal_watch (bus);
|
- |
basic-tutorial-5.py |
basic-tutorial-5.py
- class Player
- def __init__
- Gtk.init(sys.argv)
- Gst.init(sys.argv)
- # connect to interesting signals in
playbin
self.playbin.connect("video-tags-changed",
self.on_tags_changed)
self.playbin.connect("audio-tags-changed",
self.on_tags_changed)
self.playbin.connect("text-tags-changed",
self.on_tags_changed)
- # instruct the bus to emit signals for
each received message
# and connect to the interesting signals
bus = self.playbin.get_bus()
bus.add_signal_watch()
bus.connect("message::error", self.on_error)
bus.connect("message::eos", self.on_eos)
bus.connect("message::state-changed", self.on_state_changed)
bus.connect("message::application", self.on_application_message)
- def start
- GLib.timeout_add_seconds(1,
self.refresh_ui)
- Gtk.main()
- def cleanup
- def build_ui
- def on_realize
- def on_play
- def on_pause
- def on_stop
- ...
- def on_tags_changed
- self.playbin.post_message
- def on_error
- def on_eos
- def on_state_changed
- ...
- def analyze_streams
|
|
|
|
Capabilities
|
Basic
tutorial
6: Media formats and Pad Capabilities
- static void print_pad_capabilities
- /* Retrieve negotiated caps (or acceptable
caps if negotiation is not finished yet) */
caps = gst_pad_get_current_caps (pad);
if (!caps)
caps = gst_pad_query_caps (pad, NULL);
|
- |
basic-tutorial-6.py |
basic-tutorial-6.py
- def print_field
- def print_caps(caps, pfx)
- structure = caps.get_structure(i)
- structure.foreach(print_field, pfx)
- def print_pad_templates_information(factory)
- pads = factory.get_static_pad_templates()
- for pad in pads:
- padtemplate = pad.get()
- if padtemplate.get_caps():
- print_caps(padtemplate.get_caps(),
" ")
- def print_pad_capabilities(element, pad_name)
- pad = element.get_static_pad(pad_name)
- # retrieve negotiated caps (or acceptable caps
if negotiation is not yet finished)
caps = pad.get_current_caps()
if not caps:
caps = pad.get_allowed_caps()
- print_caps(caps,
" ")
- def main
- sink_factory =
Gst.ElementFactory.find("autoaudiosink")
- print_pad_templates_information(sink_factory)
- sink = sink_factory.create("sink")
- print_pad_capabilities(sink, "sink")
|
|
|
|
Pad availability
- always
- sometimes
- on request
Threads
|
Basic
tutorial
7: Multithreading and Pad Availability
|
- |
basic-tutorial-7.py |
basic-tutorial-7.py
- def main
- # manually link the tee, which has "Request"
pads
tee_src_pad_template = tee.get_pad_template("src_%u")
tee_audio_pad = tee.request_pad(tee_src_pad_template,
None,
None)
audio_queue_pad = audio_queue.get_static_pad("sink")
tee_audio_pad.link(audio_queue_pad)
tee_video_pad =
tee.request_pad(tee_src_pad_template, None,
None)
video_queue_pad =
video_queue.get_static_pad("sink")
tee_video_pad.link(video_queue_pad)
|
|
|
|
Buffers
- GstBuffer: chunk of data. Can contain multiple
GstMemory (memory buffer)
|
Basic
tutorial
8: Short-cutting the pipeline (same as 7,
replacing audiotestsrc -> appsrc; adding a third
branch appsink)
- appsrc
- appsink
- /* Configure appsrc */
gst_audio_info_set_format (&info,
GST_AUDIO_FORMAT_S16, SAMPLE_RATE, 1, NULL);
audio_caps = gst_audio_info_to_caps (&info);
g_object_set (data.app_source, "caps", audio_caps,
NULL);
g_signal_connect (data.app_source, "need-data",
G_CALLBACK
(start_feed), &data);
g_signal_connect (data.app_source, "enough-data",
G_CALLBACK
(stop_feed), &data);
Playback
tutorial
3: Short-cutting the pipeline
|
- |
basic-tutorial-8.py |
|
|
|
|
Discoverer
|
Basic
tutorial
9: Media information gathering
|
- |
basic-tutorial-9.py
- class Discoverer
- def print_tag_foreach
- def print_stream_info
- def print_topology
- def on_discovered
- def on_finished
- def __init__
- self.loop = GLib.MainLoop.new(None, False)
- self.loop.run()
- get video size:
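a possible Python sketch (the file path is a placeholder) using GstPbutils.Discoverer to read the video size:
import gi
gi.require_version('Gst', '1.0')
gi.require_version('GstPbutils', '1.0')
from gi.repository import Gst, GstPbutils

Gst.init(None)
discoverer = GstPbutils.Discoverer.new(5 * Gst.SECOND)   # 5 s timeout
info = discoverer.discover_uri(Gst.filename_to_uri("/path/to/toto.mp4"))
for vinfo in info.get_video_streams():
    print("video size: {}x{}".format(vinfo.get_width(), vinfo.get_height()))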
|
|
|
|
|
gst-discoverer-1.0
gst-launch-1.0
Tools
|
Basic
tutorial
10: GStreamer tools
|
|
|
|
|
|
|
Debugging
|
Basic
tutorial
11: Debugging tools |
|
|
|
|
|
|
|
Basic
tutorial
12: Streaming
- Setting live streams to PAUSED succeeds, but
returns GST_STATE_CHANGE_NO_PREROLL, instead of
GST_STATE_CHANGE_SUCCESS to indicate that this is a
live stream.
|
- |
basic-tutorial-12.py
- class Player
- def on_message
- def __init__
- self.pipeline = Gst.parse_launch(...)
- bus = self.pipeline.get_bus()
- ret = self.pipeline.set_state(Gst.State.PLAYING)
print("ret: {0:d}".format(ret))
if ret == Gst.StateChangeReturn.FAILURE:
print("Unable
to set the pipeline to the playing state")
sys.exit(1)
elif ret == Gst.StateChangeReturn.NO_PREROLL:
self.is_live
= True
- self.loop = GLib.MainLoop.new(None, False)
- bus.add_signal_watch()
bus.connect("message", self.on_message)
- self.loop.run()
|
|
|
|
|
|
Basic
tutorial
13: Playback speed
|
- |
|
|
|
|
|
|
Basic
tutorial
14: Handy elements
- Bins
- playbin
- uridecodebin
- decodebin
- File input/output
- Network
- Test media generation
- videotestsrc
- audiotestsrc
- Video adapters
- videoconvert
- videorate
- videoscale
- Audio adapters
- audioconvert
- audioresample
- audiorate
- Multithreading
- queue
- queue2
- multiqueue
- tee
- Capabilities
- Debugging
|
- |
|
|
|
|
|
|
Basic
tutorial
16: Platform-specific elements
|
|
|
|
|
|
|
Action signals
Audio switching
Tags
|
Playback
tutorial
1: Playbin usage
|
|
|
|
|
|
|
Subtitles
|
Playback
tutorial
2: Subtitle management |
|
|
|
|
|
|
|
Playback
tutorial
3: Short-cutting the pipeline |
|
|
|
|
|
|
|
Playback
tutorial
4: Progressive streaming |
|
|
|
|
|
|
|
Playback
tutorial
5: Color Balance |
|
|
|
|
|
|
|
Playback
tutorial
6: Audio visualization |
|
|
|
|
|
|
|
Playback
tutorial
7: Custom playbin sinks |
|
|
|
|
|
|
|
Playback
tutorial
8: Hardware-accelerated video decoding |
|
|
|
|
|
|
|
Playback
tutorial
9: Digital audio pass-through |
|
|
|
|
|
|
...
- Encoding profiles and targets
- Exemples / Examples
- Estructura / Structure
- ...
-
|
|
syntax
|
C
|
Python
|
Encoding
target
|
|
- location of target files (*.gep)
$GST_DATADIR/gstreamer-GST_API_VERSION/encoding-profiles/
  /usr/share/gstreamer-1.0/encoding-profiles/*.gep
$HOME/gstreamer-GST_API_VERSION/encoding-profiles/
  ~/.local/share/gstreamer-1.0/encoding-profiles/<category>/<name>.gep
$GST_ENCODING_TARGET_PATH/
- Pitivi: /usr/share/pitivi/gstpresets/*.gep
- $(target.category)/$(target.name).gep
[GStreamer
Encoding Target]
name : <encoding_target_name>
category : <category>
\description : <description>
#translatable
[profile-<profile1name>]
name : <encoding_profile_name>
\description : <description>
#optional
format : <format>
preset : <preset>
[streamprofile-<id>]
parent :
<encodingprofile.name>[,<encodingprofile.name>..]
\type : <type> # "audio", "video",
"text"
format : <format>
preset : <preset>
restriction : <restriction>
presence : <presence>
pass : <pass>
variableframerate :
<variableframerate>
- device/mp4target.gep
(encoding_target_name=mp4target,
encoding_profile_names=mp4, ...)
[GStreamer
Encoding Target]
name=mp4target
category=device
description=MP4 (H.264, AAC) target
[profile-mp4]
name=mp4
type=container
description[c]=MP4 container profile
format=video/quicktime,
variant=(string)iso
[streamprofile-mp4-0]
parent=mp4
type=video
format=video/x-h264
restriction=video/x-raw
presence=0
pass=0
variableframerate=false
[streamprofile-mp4-1]
parent=mp4
type=audio
format=audio/mpeg, mpegversion=(int)4
restriction=audio/x-raw
presence=0
[profile-(null)]
type=audio
format=audio/mpeg, mpegversion=(int)4
|
GstEncodingTarget
- gst_encoding_list_all_targets
|
GstPbutils.EncodingTarget
# create target with
all profiles
# The name and category can only consist of
lowercase ASCII letters for the first
character, followed by either lowercase ASCII
letters, digits or hyphens (‘-‘).
name = "mp4target"
category = GstPbutils.ENCODING_CATEGORY_DEVICE
# "device"
description = "MP4 (H.264, AAC) target"
profiles = [container_profile, video_profile,
audio_profile]
target = GstPbutils.EncodingTarget.new(name,
category, description, profiles)
# save target to
~/.local/share/gstreamer-1.0/encoding-profiles/<category>/<name>.gep
ret = target.save()
# list targets for all
categories
category = None
target_list = GstPbutils.encoding_list_all_targets (category)
print("target_list: {}".format(target_list))
|
Encoding profile
(gst-validate-transcoding)
|
|
- general syntax of serialized encoding profile:
mux_format:[video_restriction->]video_format[+video_preset][|video_presence]:[audio_restriction->]audio_format[+audio_preset][|audio_presence]
- element
factory:
- <muxer_factory_name>:<video_encoder_factory_name>:<audio_encoder_factory_name>
webmmux:vp8enc:vorbisenc
- caps:
- <muxer_source_caps>:<video_encoder_source_caps>:<audio_encoder_source_caps>
- WebM (VP8 + Vorbis):
video/webm:video/x-vp8:audio/x-vorbis
- MP4 (H.264 + MP3):
video/quicktime,variant=iso:video/x-h264:audio/mpeg,mpegversion=1,layer=3
- MP4 (H.264 + AAC):
video/quicktime,variant=iso:video/x-h264:audio/mpeg,mpegversion=4
- OGG (Theora + Vorbis):
application/ogg:video/x-theora:audio/x-vorbis
- MPEG-TS (H.264 + AC3):
video/mpegts:video/x-h264:audio/x-ac3
- caps
+ preset:
- location of preset
files
video/webm:video/x-vp8+youtube-preset:audio/x-vorbis
"video/quicktime,variant=iso:video/x-h264
+Profile
Main:audio/mpeg,mpegversion=1,layer=3"
"video/quicktime,variant=iso:video/x-h264
+slow12mbps:audio/mpeg,mpegversion=1,layer=3"
"video/quicktime,variant=iso:video/x-raw,width=1920,height=1080->video/x-h264+slow12mbps:audio/mpeg,mpegversion=1,layer=3"
"video/quicktime,variant=iso:video/x-h264 ,width=1920,height=1080 +slow12mbps:audio/mpeg,mpegversion=1,layer=3"
- How to use the x264 encoding presets when rendering an XGES project
- How to use the x264 encoding presets when rendering an XGES project (April 12, 2018)
- caps + presence (number of times an encoding
profile can be used inside an encodebin; 0:
any):
video/webm:video/x-vp8|1:audio/x-vorbis
- caps + restriction:
- ...:restriction_caps->encoded_format_caps:...
"video/webm:video/x-raw,width=1920,height=1080->video/x-vp8:audio/x-vorbis"
- "
video/quicktime,variant=iso:video/x-raw,format=I420->video/x-h264+Profile
High:audio/mpeg,mpegversion=4 "
"matroskamux:x264enc,width=1920,height=1080:audio/x-vorbis"
- loading profile from encoding
target:
target_name[/profilename/category]
/path/to/target.gep:profilename
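Any of the serialized strings above can be assigned directly to a property of type GstEncodingProfile; a minimal sketch (using one of the example strings above) that sets the profile of an encodebin element with Gst.util_set_object_arg, which performs the string deserialization:
# assign a serialized encoding profile to encodebin's "profile" property
encodebin = Gst.ElementFactory.make("encodebin", None)
Gst.util_set_object_arg(encodebin, "profile",
    "video/quicktime,variant=iso:video/x-h264:audio/mpeg,mpegversion=4")
This is the same kind of string that gst-validate-transcoding accepts for its encoding-profile argument.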
|
GstEncodingProfile
|
GstPbutils.EncodingProfile
|
Container profile
|
GstEncodingContainerProfile
|
GstPbutils.EncodingContainerProfile
# container profile
# (needs: gi.require_version('GstPbutils', '1.0'); from gi.repository import GstPbutils)
name = "mp4"
description = "MP4 container profile"
#container_caps = "video/webm"
container_caps = "video/quicktime,variant=iso"
format = Gst.Caps(container_caps)
preset = None
container_profile = GstPbutils.EncodingContainerProfile.new(name, description, format, preset)
|
Video profile
|
GstEncodingVideoProfile
|
GstPbutils.EncodingVideoProfile
# video profile
#video_caps = "video/x-vp8"
video_caps = "video/x-h264"
format = Gst.Caps(video_caps)
preset = None
restriction = Gst.Caps("video/x-raw")
presence = 0  # allow any number of instances of this profile
video_profile = GstPbutils.EncodingVideoProfile.new(format, preset, restriction, presence)
container_profile.add_profile(video_profile)
|
Audio profile
|
GstEncodingAudioProfile
|
GstPbutils.EncodingAudioProfile
# audio profile
#audio_caps = "audio/x-vorbis"
audio_caps = "audio/mpeg,mpegversion=4"  # AAC
format = Gst.Caps(audio_caps)
preset = None
restriction = Gst.Caps("audio/x-raw")
presence = 0  # allow any number of instances of this profile
audio_profile = GstPbutils.EncodingAudioProfile.new(format, preset, restriction, presence)
container_profile.add_profile(audio_profile)
|
Preset
|
|
- location of preset files (if a file with the
same name is found in more than one directory,
only the last one is taken into account):
- /usr/share/gstreamer-1.0/presets/*.prs
- /usr/local/share/gstreamer-1.0/presets/*.prs
- /usr/share/pitivi/gstpresets/*.prs,
- ~/.local/share/gstreamer-1.0/presets/*.prs
- GST_PRESET_PATH
- ~/.local/share/gstreamer-1.0/presets/GstX264Enc.prs
[_presets_]
version=0.10
element-name=GstX264Enc
[slow12mbps]
speed-preset=slow
bitrate=12288
[slow700kbps]
speed-preset=slow
bitrate=700
- <profile_name>.prs
|
Example: Using an encoder preset with a profile
preset = GST_PRESET (gst_element_factory_make ("theoraenc", "theorapreset"));
g_object_set (preset, "bitrate", 1000, NULL);
// The preset will be saved on the filesystem,
// so try to use a descriptive name
gst_preset_save_preset (preset, "theora_bitrate_preset");
gst_object_unref (preset);
|
Gst.Preset
preset = Gst.ElementFactory.make("theoraenc", "theorapreset")
preset.set_property("bitrate", 1000)
# save to ~/.local/share/gstreamer-1.0/presets/GstTheoraEnc.prs
preset.save_preset("theora_bitrate_preset")
preset = Gst.ElementFactory.make("x264enc", "x264preset")
# will create a preset based on the existing preset "Profile High"
preset.load_preset("Profile High")
preset.set_property("bitrate", 1000)
# save to ~/.local/share/gstreamer-1.0/presets/GstX264Enc.prs
# (it will also include presets in /usr/local/share/gstreamer-1.0/presets/GstX264Enc.prs)
preset.save_preset("profile_high_1000")
|
- gst-editing-services
- GStreamer
Editing Services (API Reference)
- Exemples / Examples
- Resum / Summary
- Estructura / Structure (see Pitivi):
- pipeline
- timeline
- output (usually one video track and one audio track)
- input (several layers, each with several clips; each clip is a fragment of an asset (uri), placed at a certain position in the layer)
- layer_1
- clip_asset_1.1
- clip_asset_1.2
- ...
-
|
|
Python
(see complete
examples)
|
headers
|
|
import sys
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GLib |
|
|
init Gst
|
|
class Player(object): |
def __init__(self): |
# init GStreamer
Gst.init(None) |
GES Initialization
|
|
|
|
# workaround to avoid "g_array_append_vals: assertion 'array' failed"
# when importing GES before Gst.init (using python3)
gi.require_version('GES', '1.0')
from gi.repository import GES
# init GES
GES.init()
|
main GLib loop
|
|
|
|
# create main glib loop
self.loop = GLib.MainLoop.new(None, False) |
create timeline (GESTimeline)
|
|
|
|
# create timeline with one audio track and one video track
#timeline = GES.Timeline.new_audio_video()  # with this helper function, VideoTrack is 1280*720
timeline = GES.Timeline.new()
video_track = GES.VideoTrack.new()
# if update_restriction_caps was not called, VideoTrack would be 1280*720
video_track.update_restriction_caps(
    Gst.Caps.from_string("video/x-raw,width=1920,height=1080")
)
timeline.add_track(video_track)
audio_track = GES.AudioTrack.new()
timeline.add_track(audio_track)
|
create asset/clip
|
|
|
|
# create asset
asset = GES.UriClipAsset.request_sync(uri) |
create layer in
timeline
|
|
|
|
# create layer
layer = timeline.append_layer()
|
put clips in
layer
|
|
|
|
# put clip in layer
# start=0.0
start_on_timeline = 0
# inpoint=60.0
start_position_asset = inpoint * Gst.SECOND
# duration=5.0
duration = duration * Gst.SECOND
clip = layer.add_asset(asset, start_on_timeline, start_position_asset,
                       duration, GES.TrackType.UNKNOWN) |
create GES
pipeline
|
|
|
|
# create GES pipeline
pipeline = GES.Pipeline() |
connect message
bus to callback
|
|
|
|
# connect bus messages to callback
bus = pipeline.get_bus()
bus.add_signal_watch()
bus.connect("message", self.on_message, None)
|
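The on_message callback connected above is not shown in this row; a minimal sketch that stops the main loop on end-of-stream or on an error:
# bus message callback: quit the GLib main loop on EOS or ERROR
def on_message(self, bus, message, user_data):
    if message.type == Gst.MessageType.EOS:
        print("EOS")
        self.loop.quit()
    elif message.type == Gst.MessageType.ERROR:
        err, debug = message.parse_error()
        print("Error: {} ({})".format(err, debug))
        self.loop.quit()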
add timeline to
pipeline
|
|
|
|
# add timeline to pipeline
pipeline.set_timeline(timeline) |
(optional: only render)
ges_base_renderer.py
|
containers for
output format
|
|
|
# container profile
# (needs: gi.require_version('GstPbutils', '1.0'); from gi.repository import GstPbutils)
name = "mp4"
description = "MP4 container profile"
#container_caps = "video/webm"
container_caps = "video/quicktime,variant=iso"
format = Gst.Caps(container_caps)
preset = None
container_profile = GstPbutils.EncodingContainerProfile.new(name, description, format, preset)
# video profile
#video_caps = "video/x-vp8"
video_caps = "video/x-h264"
format = Gst.Caps(video_caps)
preset = None
restriction = Gst.Caps("video/x-raw")
presence = 0  # allow any number of instances of this profile
video_profile = GstPbutils.EncodingVideoProfile.new(format, preset, restriction, presence)
container_profile.add_profile(video_profile)
# audio profile
#audio_caps = "audio/x-vorbis"
audio_caps = "audio/mpeg,mpegversion=4"
format = Gst.Caps(audio_caps)
preset = None
restriction = Gst.Caps("audio/x-raw")
presence = 0  # allow any number of instances of this profile
audio_profile = GstPbutils.EncodingAudioProfile.new(format, preset, restriction, presence)
container_profile.add_profile(audio_profile) |
pipeline in
render mode
|
|
|
# pipeline in render mode
pipeline.set_render_settings(output_uri, container_profile)
pipeline.set_mode(GES.PipelineFlags.RENDER)
# same as: ges-launch-1.0 --smart-rendering
# pipeline.set_mode(GES.PipelineFlags.SMART_RENDER)
|
progress
|
|
|
# progress
GLib.timeout_add(300, self.duration_querier, pipeline) |
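duration_querier is the periodic callback registered above; a minimal sketch that prints the current position and duration (returning True keeps the timeout active):
# progress callback: query and print position/duration of the pipeline
def duration_querier(self, pipeline):
    ok_pos, position = pipeline.query_position(Gst.Format.TIME)
    ok_dur, duration = pipeline.query_duration(Gst.Format.TIME)
    if ok_pos and ok_dur:
        print("{} / {}".format(Gst.TIME_ARGS(position), Gst.TIME_ARGS(duration)))
    return True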
start
|
|
|
|
# start playing pipeline
pipeline.set_state(Gst.State.PLAYING)
self.loop.run() |
stop
|
|
|
|
# unset
pipeline.set_state(Gst.State.NULL) |
- gst-transcoding
- ...
|
|
- Web overlay
- Based on WPEWebKit
- Based on CEF (Chromium Embedded Framework)
- Live mixers
|
http://www.francescpinyol.cat/gstreamer.html
Primera versió: / First version: 27.X.2018
Darrera modificació: 29 de setembre de 2024 / Last update: 29th
September 2024
Cap a casa / Back home |