Diffstat (limited to 'ffmpeg1/doc/protocols.texi')
-rw-r--r--  ffmpeg1/doc/protocols.texi  790
1 file changed, 0 insertions(+), 790 deletions(-)
diff --git a/ffmpeg1/doc/protocols.texi b/ffmpeg1/doc/protocols.texi
deleted file mode 100644
index 9940b67..0000000
--- a/ffmpeg1/doc/protocols.texi
+++ /dev/null
@@ -1,790 +0,0 @@
-@chapter Protocols
-@c man begin PROTOCOLS
-
-Protocols are configured elements in FFmpeg that allow access to
-resources which require the use of a particular protocol.
-
-When you configure your FFmpeg build, all the supported protocols are
-enabled by default. You can list all available ones using the
-configure option "--list-protocols".
-
-You can disable all the protocols using the configure option
-"--disable-protocols", and selectively enable a protocol using the
-option "--enable-protocol=@var{PROTOCOL}", or you can disable a
-particular protocol using the option
-"--disable-protocol=@var{PROTOCOL}".
-
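-For example, a build with only the file and http protocols enabled
-could be configured as follows (a sketch; adjust the protocol list as
-needed):
-@example
-./configure --disable-protocols --enable-protocol=file --enable-protocol=http
-@end example
-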
-The option "-protocols" of the ff* tools will display the list of
-supported protocols.
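-
-For example, to print that list with @command{ffmpeg}:
-@example
-ffmpeg -protocols
-@end example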
-
-A description of the currently available protocols follows.
-
-@section bluray
-
-Read BluRay playlist.
-
-The accepted options are:
-@table @option
-
-@item angle
-BluRay angle
-
-@item chapter
-Start chapter (1...N)
-
-@item playlist
-Playlist to read (BDMV/PLAYLIST/?????.mpls)
-
-@end table
-
-Examples:
-
-Read longest playlist from BluRay mounted to /mnt/bluray:
-@example
-bluray:/mnt/bluray
-@end example
-
-Read angle 2 of playlist 4 from BluRay mounted to /mnt/bluray, start from chapter 2:
-@example
-ffplay -playlist 4 -angle 2 -chapter 2 bluray:/mnt/bluray
-@end example
-
-@section concat
-
-Physical concatenation protocol.
-
-Read and seek from many resources in sequence as if they were
-a unique resource.
-
-A URL accepted by this protocol has the syntax:
-@example
-concat:@var{URL1}|@var{URL2}|...|@var{URLN}
-@end example
-
-where @var{URL1}, @var{URL2}, ..., @var{URLN} are the URLs of the
-resources to be concatenated, each one possibly specifying a distinct
-protocol.
-
-For example to read a sequence of files @file{split1.mpeg},
-@file{split2.mpeg}, @file{split3.mpeg} with @command{ffplay} use the
-command:
-@example
-ffplay concat:split1.mpeg\|split2.mpeg\|split3.mpeg
-@end example
-
-Note that you may need to escape the character "|" which is special for
-many shells.
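-
-Alternatively the whole URL can be quoted so that the shell does not
-interpret the "|" characters, which is equivalent to the escaped form
-above:
-@example
-ffplay "concat:split1.mpeg|split2.mpeg|split3.mpeg"
-@end example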
-
-@section data
-
-Data in-line in the URI. See @url{http://en.wikipedia.org/wiki/Data_URI_scheme}.
-
-For example, to convert a GIF file given inline with @command{ffmpeg}:
-@example
-ffmpeg -i "data:image/gif;base64,R0lGODdhCAAIAMIEAAAAAAAA//8AAP//AP///////////////ywAAAAACAAIAAADF0gEDLojDgdGiJdJqUX02iB4E8Q9jUMkADs=" smiley.png
-@end example
-
-@section file
-
-File access protocol.
-
-Read from or write to a file.
-
-For example to read from a file @file{input.mpeg} with @command{ffmpeg}
-use the command:
-@example
-ffmpeg -i file:input.mpeg output.mpeg
-@end example
-
-The ff* tools default to the file protocol; that is, a resource
-specified with the name "FILE.mpeg" is interpreted as the URL
-"file:FILE.mpeg".
-
-@section gopher
-
-Gopher protocol.
-
-@section hls
-
-Read Apple HTTP Live Streaming compliant segmented stream as
-a uniform one. The M3U8 playlists describing the segments can be
-remote HTTP resources or local files, accessed using the standard
-file protocol.
-The nested protocol is declared by specifying
-"+@var{proto}" after the hls URI scheme name, where @var{proto}
-is either "file" or "http".
-
-@example
-hls+http://host/path/to/remote/resource.m3u8
-hls+file://path/to/local/resource.m3u8
-@end example
-
-Using this protocol is discouraged - the hls demuxer should work
-just as well (if not, please report the issues) and is more complete.
-To use the hls demuxer instead, simply use the direct URLs to the
-m3u8 files.
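-
-For example, to let the hls demuxer handle the remote playlist from the
-previous example directly:
-@example
-ffplay http://host/path/to/remote/resource.m3u8
-@end example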
-
-@section http
-
-HTTP (Hyper Text Transfer Protocol).
-
-This protocol accepts the following options.
-
-@table @option
-@item seekable
-Control seekability of connection. If set to 1 the resource is
-supposed to be seekable, if set to 0 it is assumed not to be seekable,
-if set to -1 it will try to autodetect if it is seekable. Default
-value is -1.
-
-@item chunked_post
-If set to 1 use chunked Transfer-Encoding for posts. The default is 1.
-
-@item headers
-Set custom HTTP headers; this can override the built-in default headers.
-The value must be a string encoding the headers.
-
-@item content_type
-Force a content type.
-
-@item user-agent
-Override User-Agent header. If not specified the protocol will use a
-string describing the libavformat build.
-
-@item multiple_requests
-Use persistent connections if set to 1. By default it is 0.
-
-@item post_data
-Set custom HTTP post data.
-
-@item timeout
-Set timeout of socket I/O operations used by the underlying low level
-operation. By default it is set to -1, which means that the timeout is
-not specified.
-
-@item mime_type
-Set MIME type.
-
-@item cookies
-Set the cookies to be sent in future requests. The format of each cookie is the
-same as the value of a Set-Cookie HTTP response field. Multiple cookies can be
-delimited by a newline character.
-@end table
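-
-For example, to read a resource while marking it as not seekable (a
-sketch; the URL is a placeholder, and the option is passed on the
-command line in the same way as the @option{cookies} option shown
-below):
-@example
-ffplay -seekable 0 http://example.com/stream.m3u8
-@end example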
-
-@subsection HTTP Cookies
-
-Some HTTP requests will be denied unless cookie values are passed in with the
-request. The @option{cookies} option allows these cookies to be specified. At
-the very least, each cookie must specify a value along with a path and domain.
-HTTP requests that match both the domain and path will automatically include the
-cookie value in the HTTP Cookie header field. Multiple cookies can be delimited
-by a newline.
-
-The required syntax to play a stream specifying a cookie is:
-@example
-ffplay -cookies "nlqptid=nltid=tsn; path=/; domain=somedomain.com;" http://somedomain.com/somestream.m3u8
-@end example
-
-@section mmst
-
-MMS (Microsoft Media Server) protocol over TCP.
-
-@section mmsh
-
-MMS (Microsoft Media Server) protocol over HTTP.
-
-The required syntax is:
-@example
-mmsh://@var{server}[:@var{port}][/@var{app}][/@var{playpath}]
-@end example
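-
-For example, to play a stream published on an MMS server with
-@command{ffplay} (a sketch; server and path are placeholders):
-@example
-ffplay mmsh://server.example.com/someapp/somestream
-@end example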
-
-@section md5
-
-MD5 output protocol.
-
-Computes the MD5 hash of the data to be written, and on close writes
-this to the designated output or stdout if none is specified. It can
-be used to test muxers without writing an actual file.
-
-Some examples follow.
-@example
-# Write the MD5 hash of the encoded AVI file to the file output.avi.md5.
-ffmpeg -i input.flv -f avi -y md5:output.avi.md5
-
-# Write the MD5 hash of the encoded AVI file to stdout.
-ffmpeg -i input.flv -f avi -y md5:
-@end example
-
-Note that some formats (typically MOV) require the output protocol to
-be seekable, so they will fail with the MD5 output protocol.
-
-@section pipe
-
-UNIX pipe access protocol.
-
-Read and write from UNIX pipes.
-
-The accepted syntax is:
-@example
-pipe:[@var{number}]
-@end example
-
-@var{number} is the number corresponding to the file descriptor of the
-pipe (e.g. 0 for stdin, 1 for stdout, 2 for stderr). If @var{number}
-is not specified, by default the stdout file descriptor will be used
-for writing, stdin for reading.
-
-For example to read from stdin with @command{ffmpeg}:
-@example
-cat test.wav | ffmpeg -i pipe:0
-# ...this is the same as...
-cat test.wav | ffmpeg -i pipe:
-@end example
-
-For writing to stdout with @command{ffmpeg}:
-@example
-ffmpeg -i test.wav -f avi pipe:1 | cat > test.avi
-# ...this is the same as...
-ffmpeg -i test.wav -f avi pipe: | cat > test.avi
-@end example
-
-Note that some formats (typically MOV) require the output protocol to
-be seekable, so they will fail with the pipe output protocol.
-
-@section rtmp
-
-Real-Time Messaging Protocol.
-
-The Real-Time Messaging Protocol (RTMP) is used for streaming multimedia
-content across a TCP/IP network.
-
-The required syntax is:
-@example
-rtmp://@var{server}[:@var{port}][/@var{app}][/@var{instance}][/@var{playpath}]
-@end example
-
-The accepted parameters are:
-@table @option
-
-@item server
-The address of the RTMP server.
-
-@item port
-The number of the TCP port to use (by default 1935).
-
-@item app
-It is the name of the application to access. It usually corresponds to
-the path where the application is installed on the RTMP server
-(e.g. @file{/ondemand/}, @file{/flash/live/}, etc.). You can override
-the value parsed from the URI through the @code{rtmp_app} option, too.
-
-@item playpath
-It is the path or name of the resource to play with reference to the
-application specified in @var{app}; it may be prefixed by "mp4:". You
-can override the value parsed from the URI through the @code{rtmp_playpath}
-option, too.
-
-@item listen
-Act as a server, listening for an incoming connection.
-
-@item timeout
-Maximum time to wait for the incoming connection. Implies listen.
-@end table
-
-Additionally, the following parameters can be set via command line options
-(or in code via @code{AVOption}s):
-@table @option
-
-@item rtmp_app
-Name of application to connect on the RTMP server. This option
-overrides the parameter specified in the URI.
-
-@item rtmp_buffer
-Set the client buffer time in milliseconds. The default is 3000.
-
-@item rtmp_conn
-Extra arbitrary AMF connection parameters, parsed from a string,
-e.g. like @code{B:1 S:authMe O:1 NN:code:1.23 NS:flag:ok O:0}.
-Each value is prefixed by a single character denoting the type,
-B for Boolean, N for number, S for string, O for object, or Z for null,
-followed by a colon. For Booleans the data must be either 0 or 1 for
-FALSE or TRUE, respectively. Likewise for Objects the data must be 0 or
-1 to end or begin an object, respectively. Data items in subobjects may
-be named, by prefixing the type with 'N' and specifying the name before
-the value (i.e. @code{NB:myFlag:1}). This option may be used multiple
-times to construct arbitrary AMF sequences.
-
-@item rtmp_flashver
-Version of the Flash plugin used to run the SWF player. The default
-is LNX 9,0,124,2.
-
-@item rtmp_flush_interval
-Number of packets flushed in the same request (RTMPT only). The default
-is 10.
-
-@item rtmp_live
-Specify that the media is a live stream. No resuming or seeking in
-live streams is possible. The default value is @code{any}, which means the
-subscriber first tries to play the live stream specified in the
-playpath. If a live stream of that name is not found, it plays the
-recorded stream. The other possible values are @code{live} and
-@code{recorded}.
-
-@item rtmp_pageurl
-URL of the web page in which the media was embedded. By default no
-value will be sent.
-
-@item rtmp_playpath
-Stream identifier to play or to publish. This option overrides the
-parameter specified in the URI.
-
-@item rtmp_subscribe
-Name of live stream to subscribe to. By default no value will be sent.
-It is only sent if the option is specified or if rtmp_live
-is set to live.
-
-@item rtmp_swfhash
-SHA256 hash of the decompressed SWF file (32 bytes).
-
-@item rtmp_swfsize
-Size of the decompressed SWF file, required for SWFVerification.
-
-@item rtmp_swfurl
-URL of the SWF player for the media. By default no value will be sent.
-
-@item rtmp_swfverify
-URL to player swf file, compute hash/size automatically.
-
-@item rtmp_tcurl
-URL of the target stream. Defaults to proto://host[:port]/app.
-
-@end table
-
-For example to read with @command{ffplay} a multimedia resource named
-"sample" from the application "vod" from an RTMP server "myserver":
-@example
-ffplay rtmp://myserver/vod/sample
-@end example
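-
-The same resource can also be requested by overriding the application
-and playpath parsed from the URI, using the options described above (a
-sketch; server and stream names are placeholders):
-@example
-ffplay -rtmp_app vod -rtmp_playpath sample rtmp://myserver/
-@end example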
-
-@section rtmpe
-
-Encrypted Real-Time Messaging Protocol.
-
-The Encrypted Real-Time Messaging Protocol (RTMPE) is used for
-streaming multimedia content within standard cryptographic primitives,
-consisting of Diffie-Hellman key exchange and HMACSHA256, generating
-a pair of RC4 keys.
-
-@section rtmps
-
-Real-Time Messaging Protocol over a secure SSL connection.
-
-The Real-Time Messaging Protocol (RTMPS) is used for streaming
-multimedia content across an encrypted connection.
-
-@section rtmpt
-
-Real-Time Messaging Protocol tunneled through HTTP.
-
-The Real-Time Messaging Protocol tunneled through HTTP (RTMPT) is used
-for streaming multimedia content within HTTP requests to traverse
-firewalls.
-
-@section rtmpte
-
-Encrypted Real-Time Messaging Protocol tunneled through HTTP.
-
-The Encrypted Real-Time Messaging Protocol tunneled through HTTP (RTMPTE)
-is used for streaming multimedia content within HTTP requests to traverse
-firewalls.
-
-@section rtmpts
-
-Real-Time Messaging Protocol tunneled through HTTPS.
-
-The Real-Time Messaging Protocol tunneled through HTTPS (RTMPTS) is used
-for streaming multimedia content within HTTPS requests to traverse
-firewalls.
-
-@section rtmp, rtmpe, rtmps, rtmpt, rtmpte
-
-Real-Time Messaging Protocol and its variants supported through
-librtmp.
-
-Requires the presence of the librtmp headers and library during
-configuration. You need to explicitly configure the build with
-"--enable-librtmp". If enabled this will replace the native RTMP
-protocol.
-
-This protocol provides most client functions and a few server
-functions needed to support RTMP, RTMP tunneled in HTTP (RTMPT),
-encrypted RTMP (RTMPE), RTMP over SSL/TLS (RTMPS) and tunneled
-variants of these encrypted types (RTMPTE, RTMPTS).
-
-The required syntax is:
-@example
-@var{rtmp_proto}://@var{server}[:@var{port}][/@var{app}][/@var{playpath}] @var{options}
-@end example
-
-where @var{rtmp_proto} is one of the strings "rtmp", "rtmpt", "rtmpe",
-"rtmps", "rtmpte", "rtmpts" corresponding to each RTMP variant, and
-@var{server}, @var{port}, @var{app} and @var{playpath} have the same
-meaning as specified for the RTMP native protocol.
-@var{options} contains a list of space-separated options of the form
-@var{key}=@var{val}.
-
-See the librtmp manual page (man 3 librtmp) for more information.
-
-For example, to stream a file in real-time to an RTMP server using
-@command{ffmpeg}:
-@example
-ffmpeg -re -i myfile -f flv rtmp://myserver/live/mystream
-@end example
-
-To play the same stream using @command{ffplay}:
-@example
-ffplay "rtmp://myserver/live/mystream live=1"
-@end example
-
-@section rtp
-
-Real-time Transport Protocol.
-
-@section rtsp
-
-RTSP is not technically a protocol handler in libavformat; it is a demuxer
-and muxer. The demuxer supports both normal RTSP (with data transferred
-over RTP; this is used by e.g. Apple and Microsoft) and Real-RTSP (with
-data transferred over RDT).
-
-The muxer can be used to send a stream using RTSP ANNOUNCE to a server
-supporting it (currently Darwin Streaming Server and Mischa Spiegelmock's
-@uref{http://github.com/revmischa/rtsp-server, RTSP server}).
-
-The required syntax for an RTSP URL is:
-@example
-rtsp://@var{hostname}[:@var{port}]/@var{path}
-@end example
-
-The following options (set on the @command{ffmpeg}/@command{ffplay} command
-line, or set in code via @code{AVOption}s or in @code{avformat_open_input}),
-are supported:
-
-Flags for @code{rtsp_transport}:
-
-@table @option
-
-@item udp
-Use UDP as lower transport protocol.
-
-@item tcp
-Use TCP (interleaving within the RTSP control channel) as lower
-transport protocol.
-
-@item udp_multicast
-Use UDP multicast as lower transport protocol.
-
-@item http
-Use HTTP tunneling as lower transport protocol, which is useful for
-passing proxies.
-@end table
-
-Multiple lower transport protocols may be specified, in that case they are
-tried one at a time (if the setup of one fails, the next one is tried).
-For the muxer, only the @code{tcp} and @code{udp} options are supported.
-
-Flags for @code{rtsp_flags}:
-
-@table @option
-@item filter_src
-Accept packets only from negotiated peer address and port.
-@item listen
-Act as a server, listening for an incoming connection.
-@end table
-
-When receiving data over UDP, the demuxer tries to reorder received packets
-(since they may arrive out of order, or packets may get lost totally). This
-can be disabled by setting the maximum demuxing delay to zero (via
-the @code{max_delay} field of AVFormatContext).
-
-When watching multi-bitrate Real-RTSP streams with @command{ffplay}, the
-streams to display can be chosen with @code{-vst} @var{n} and
-@code{-ast} @var{n} for video and audio respectively, and can be switched
-on the fly by pressing @code{v} and @code{a}.
-
-Example command lines:
-
-To watch a stream over UDP, with a max reordering delay of 0.5 seconds:
-
-@example
-ffplay -max_delay 500000 -rtsp_transport udp rtsp://server/video.mp4
-@end example
-
-To watch a stream tunneled over HTTP:
-
-@example
-ffplay -rtsp_transport http rtsp://server/video.mp4
-@end example
-
-To send a stream in realtime to an RTSP server, for others to watch:
-
-@example
-ffmpeg -re -i @var{input} -f rtsp -muxdelay 0.1 rtsp://server/live.sdp
-@end example
-
-To receive a stream in realtime:
-
-@example
-ffmpeg -rtsp_flags listen -i rtsp://ownaddress/live.sdp @var{output}
-@end example
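-
-To receive a stream over UDP while accepting packets only from the
-negotiated peer address and port (a sketch using the @code{filter_src}
-flag described above):
-
-@example
-ffplay -rtsp_flags filter_src -rtsp_transport udp rtsp://server/video.mp4
-@end example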
-
-@section sap
-
-Session Announcement Protocol (RFC 2974). This is not technically a
-protocol handler in libavformat; it is a muxer and demuxer.
-It is used for signalling of RTP streams, by announcing the SDP for the
-streams regularly on a separate port.
-
-@subsection Muxer
-
-The syntax for a SAP URL given to the muxer is:
-@example
-sap://@var{destination}[:@var{port}][?@var{options}]
-@end example
-
-The RTP packets are sent to @var{destination} on port @var{port},
-or to port 5004 if no port is specified.
-@var{options} is a @code{&}-separated list. The following options
-are supported:
-
-@table @option
-
-@item announce_addr=@var{address}
-Specify the destination IP address for sending the announcements to.
-If omitted, the announcements are sent to the commonly used SAP
-announcement multicast address 224.2.127.254 (sap.mcast.net), or
-ff0e::2:7ffe if @var{destination} is an IPv6 address.
-
-@item announce_port=@var{port}
-Specify the port to send the announcements on, defaults to
-9875 if not specified.
-
-@item ttl=@var{ttl}
-Specify the time to live value for the announcements and RTP packets,
-defaults to 255.
-
-@item same_port=@var{0|1}
-If set to 1, send all RTP streams on the same port pair. If zero (the
-default), all streams are sent on unique ports, with each stream on a
-port 2 numbers higher than the previous.
-VLC/Live555 requires this to be set to 1, to be able to receive the stream.
-The RTP stack in libavformat for receiving requires all streams to be sent
-on unique ports.
-@end table
-
-Example command lines follow.
-
-To broadcast a stream on the local subnet, for watching in VLC:
-
-@example
-ffmpeg -re -i @var{input} -f sap sap://224.0.0.255?same_port=1
-@end example
-
-Similarly, for watching in @command{ffplay}:
-
-@example
-ffmpeg -re -i @var{input} -f sap sap://224.0.0.255
-@end example
-
-And for watching in @command{ffplay}, over IPv6:
-
-@example
-ffmpeg -re -i @var{input} -f sap sap://[ff0e::1:2:3:4]
-@end example
-
-@subsection Demuxer
-
-The syntax for a SAP URL given to the demuxer is:
-@example
-sap://[@var{address}][:@var{port}]
-@end example
-
-@var{address} is the multicast address to listen for announcements on;
-if omitted, the default 224.2.127.254 (sap.mcast.net) is used. @var{port}
-is the port that is listened on, 9875 if omitted.
-
-The demuxer listens for announcements on the given address and port.
-Once an announcement is received, it tries to receive that particular stream.
-
-Example command lines follow.
-
-To play back the first stream announced on the normal SAP multicast address:
-
-@example
-ffplay sap://
-@end example
-
-To play back the first stream announced on the default IPv6 SAP multicast address:
-
-@example
-ffplay sap://[ff0e::2:7ffe]
-@end example
-
-@section tcp
-
-Transmission Control Protocol.
-
-The required syntax for a TCP URL is:
-@example
-tcp://@var{hostname}:@var{port}[?@var{options}]
-@end example
-
-@table @option
-
-@item listen
-Listen for an incoming connection.
-
-@item timeout=@var{microseconds}
-In read mode: if no data arrives within this time interval, raise an error.
-In write mode: if the socket cannot be written to within this time interval, raise an error.
-This also sets the timeout for establishing the TCP connection.
-
-@example
-ffmpeg -i @var{input} -f @var{format} tcp://@var{hostname}:@var{port}?listen
-ffplay tcp://@var{hostname}:@var{port}
-@end example
-
-@end table
-
-@section tls
-
-Transport Layer Security (TLS) / Secure Sockets Layer (SSL).
-
-The required syntax for a TLS/SSL URL is:
-@example
-tls://@var{hostname}:@var{port}[?@var{options}]
-@end example
-
-@table @option
-
-@item listen
-Act as a server, listening for an incoming connection.
-
-@item cafile=@var{filename}
-Certificate authority file. The file must be in OpenSSL PEM format.
-
-@item cert=@var{filename}
-Certificate file. The file must be in OpenSSL PEM format.
-
-@item key=@var{filename}
-Private key file.
-
-@item verify=@var{0|1}
-Verify the peer's certificate.
-
-@end table
-
-Example command lines:
-
-To create a TLS/SSL server that serves an input stream:
-
-@example
-ffmpeg -i @var{input} -f @var{format} tls://@var{hostname}:@var{port}?listen&cert=@var{server.crt}&key=@var{server.key}
-@end example
-
-To play back a stream from the TLS/SSL server using @command{ffplay}:
-
-@example
-ffplay tls://@var{hostname}:@var{port}
-@end example
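-
-To play back the same stream while verifying the server certificate
-against a certificate authority file (a sketch using the URL options
-described above; the file name is a placeholder):
-
-@example
-ffplay "tls://@var{hostname}:@var{port}?cafile=ca.crt&verify=1"
-@end example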
-
-@section udp
-
-User Datagram Protocol.
-
-The required syntax for a UDP URL is:
-@example
-udp://@var{hostname}:@var{port}[?@var{options}]
-@end example
-
-@var{options} contains a list of &-separated options of the form @var{key}=@var{val}.
-
-In case threading is enabled on the system, a circular buffer is used
-to store the incoming data, which helps reduce data loss due to
-UDP socket buffer overruns. The @var{fifo_size} and
-@var{overrun_nonfatal} options are related to this buffer.
-
-The list of supported options follows.
-
-@table @option
-
-@item buffer_size=@var{size}
-Set the UDP socket buffer size in bytes. This is used both for the
-receiving and the sending buffer size.
-
-@item localport=@var{port}
-Override the local UDP port to bind with.
-
-@item localaddr=@var{addr}
-Choose the local IP address. This is useful e.g. if sending multicast
-and the host has multiple interfaces, where the user can choose
-which interface to send on by specifying the IP address of that interface.
-
-@item pkt_size=@var{size}
-Set the size in bytes of UDP packets.
-
-@item reuse=@var{1|0}
-Explicitly allow or disallow reusing UDP sockets.
-
-@item ttl=@var{ttl}
-Set the time to live value (for multicast only).
-
-@item connect=@var{1|0}
-Initialize the UDP socket with @code{connect()}. In this case, the
-destination address can't be changed with ff_udp_set_remote_url later.
-If the destination address isn't known at the start, this option can
-be specified in ff_udp_set_remote_url, too.
-This allows finding out the source address for the packets with getsockname,
-and makes writes return with AVERROR(ECONNREFUSED) if "destination
-unreachable" is received.
-For receiving, this gives the benefit of only receiving packets from
-the specified peer address/port.
-
-@item sources=@var{address}[,@var{address}]
-Only receive packets sent to the multicast group from one of the
-specified sender IP addresses.
-
-@item block=@var{address}[,@var{address}]
-Ignore packets sent to the multicast group from the specified
-sender IP addresses.
-
-@item fifo_size=@var{units}
-Set the UDP receiving circular buffer size, expressed as a number of
-packets with size of 188 bytes. If not specified defaults to 7*4096.
-
-@item overrun_nonfatal=@var{1|0}
-Survive in case of UDP receiving circular buffer overrun. Default
-value is 0.
-
-@item timeout=@var{microseconds}
-In read mode: if no data arrives within this time interval, raise an error.
-@end table
-
-Some usage examples of the UDP protocol with @command{ffmpeg} follow.
-
-To stream over UDP to a remote endpoint:
-@example
-ffmpeg -i @var{input} -f @var{format} udp://@var{hostname}:@var{port}
-@end example
-
-To stream in mpegts format over UDP using 188-byte UDP packets, using a large input buffer:
-@example
-ffmpeg -i @var{input} -f mpegts udp://@var{hostname}:@var{port}?pkt_size=188&buffer_size=65535
-@end example
-
-To receive over UDP from a remote endpoint:
-@example
-ffmpeg -i udp://[@var{multicast-address}]:@var{port}
-@end example
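-
-To receive over UDP with a larger receiving circular buffer, surviving
-occasional overruns (a sketch using the @option{fifo_size} and
-@option{overrun_nonfatal} options described above; the buffer size is
-illustrative):
-@example
-ffmpeg -i "udp://@var{hostname}:@var{port}?fifo_size=100000&overrun_nonfatal=1" @var{output}
-@end example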
-
-@c man end PROTOCOLS