
1. Introduction

FFmpeg is a very fast video and audio converter. It can also grab from a live audio/video source.

The command line interface is designed to be intuitive, in the sense that FFmpeg tries to figure out all parameters that can possibly be derived automatically. You usually only have to specify the target bitrate you want.

FFmpeg can also convert from any sample rate to any other, and resize video on the fly with a high quality polyphase filter.



2. Quick Start



2.1 Video and Audio grabbing

FFmpeg can grab video and audio from devices given that you specify the input format and device.

 
ffmpeg -f audio_device -i /dev/dsp -f video4linux2 -i /dev/video0 /tmp/out.mpg

Note that you must activate the right video source and channel before launching FFmpeg, using any TV viewer such as xawtv (http://bytesex.org/xawtv/) by Gerd Knorr. You also have to set the audio recording levels correctly with a standard mixer.



2.2 X11 grabbing

FFmpeg can grab the X11 display.

 
ffmpeg -f x11grab -i :0.0 /tmp/out.mpg

0.0 is the display.screen number of your X11 server, the same as the DISPLAY environment variable.

 
ffmpeg -f x11grab -i :0.0+10,20 /tmp/out.mpg

0.0 is the display.screen number of your X11 server, the same as the DISPLAY environment variable. 10 is the x-offset and 20 the y-offset for the grabbing.



2.3 Video and Audio file format conversion

* FFmpeg can use any supported file format and protocol as input:

Examples:

* You can use YUV files as input:

 
ffmpeg -i /tmp/test%d.Y /tmp/out.mpg

It will use the files:

 
/tmp/test0.Y, /tmp/test0.U, /tmp/test0.V,
/tmp/test1.Y, /tmp/test1.U, /tmp/test1.V, etc...

The Y files use twice the resolution of the U and V files. They are raw files, without header. They can be generated by all decent video decoders. You must specify the size of the image with the ‘-s’ option if FFmpeg cannot guess it.
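
If the size cannot be guessed, you can pass it explicitly with the ‘-s’ option; for example, assuming CIF-sized (352x288) images:

ffmpeg -s 352x288 -i /tmp/test%d.Y /tmp/out.mpg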

* You can input from a raw YUV420P file:

 
ffmpeg -i /tmp/test.yuv /tmp/out.avi

test.yuv is a file containing raw YUV planar data. Each frame is composed of the Y plane followed by the U and V planes at half vertical and horizontal resolution.

* You can output to a raw YUV420P file:

 
ffmpeg -i mydivx.avi hugefile.yuv

* You can set several input files and output files:

 
ffmpeg -i /tmp/a.wav -s 640x480 -i /tmp/a.yuv /tmp/a.mpg

Converts the audio file a.wav and the raw YUV video file a.yuv to MPEG file a.mpg.

* You can also do audio and video conversions at the same time:

 
ffmpeg -i /tmp/a.wav -ar 22050 /tmp/a.mp2

Converts a.wav to MPEG audio at 22050Hz sample rate.

* You can encode to several formats at the same time and define a mapping from input stream to output streams:

 
ffmpeg -i /tmp/a.wav -ab 64 /tmp/a.mp2 -ab 128 /tmp/b.mp2 -map 0:0 -map 0:0

Converts a.wav to a.mp2 at 64 kbit/s and to b.mp2 at 128 kbit/s. '-map file:index' specifies which input stream is used for each output stream, in the order of the definition of output streams.

* You can transcode decrypted VOBs

 
ffmpeg -i snatch_1.vob -f avi -vcodec mpeg4 -b 800k -g 300 -bf 2 -acodec mp3 -ab 128 snatch.avi

This is a typical DVD ripping example; the input is a VOB file, the output an AVI file with MPEG-4 video and MP3 audio. Note that in this command we use B-frames so the MPEG-4 stream is DivX5 compatible, and GOP size is 300 which means one intra frame every 10 seconds for 29.97fps input video. Furthermore, the audio stream is MP3-encoded so you need to enable LAME support by passing --enable-mp3lame to configure. The mapping is particularly useful for DVD transcoding to get the desired audio language.
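
If the VOB contains several audio languages, you might select the one you want with explicit ‘-map’ options; this is only a sketch, assuming the video is stream 0:0 and the desired audio stream is 0:2 (check the stream listing that FFmpeg prints for the real indices):

ffmpeg -i snatch_1.vob -f avi -vcodec mpeg4 -b 800k -g 300 -bf 2 -acodec mp3 -ab 128 -map 0:0 -map 0:2 snatch.avi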

NOTE: To see the supported input formats, use ffmpeg -formats.



3. Invocation



3.1 Syntax

The generic syntax is:

 
ffmpeg [[infile options][‘-i’ infile]]... {[outfile options] outfile}...

As a general rule, options are applied to the next specified file. Therefore, order is important, and you can have the same option on the command line multiple times. Each occurrence is then applied to the next input or output file.

* To set the video bitrate of the output file to 64kbit/s:

 
ffmpeg -i input.avi -b 64k output.avi

* To force the frame rate of the input and output file to 24 fps:

 
ffmpeg -r 24 -i input.avi output.avi

* To force the frame rate of the output file to 24 fps:

 
ffmpeg -i input.avi -r 24 output.avi

* To force the frame rate of input file to 1 fps and the output file to 24 fps:

 
ffmpeg -r 1 -i input.avi -r 24 output.avi

The format option may be needed for raw input files.
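
For example, a headerless YUV420P file carries no format information at all, so you may have to force everything by hand; the following sketch assumes a raw video demuxer named 'rawvideo' is present in your build (check ffmpeg -formats) and CIF-sized input:

ffmpeg -f rawvideo -pix_fmt yuv420p -s 352x288 -r 25 -i /tmp/test.yuv /tmp/out.avi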

By default, FFmpeg tries to convert as losslessly as possible: it uses the same audio and video parameters for the outputs as the ones specified for the inputs.



3.2 Main options

-L

Show license.

-h

Show help.

-version

Show version.

-formats

Show available formats, codecs, protocols, ...

-f fmt

Force format.

-i filename

Input file name.

-y

Overwrite output files.

-t duration

Set the recording time in seconds. hh:mm:ss[.xxx] syntax is also supported.

-fs limit_size

Set the file size limit.

-ss position

Seek to given time position in seconds. hh:mm:ss[.xxx] syntax is also supported.
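
Combining ‘-ss’ with ‘-t’ cuts out a section of the input; for example, to keep 30 seconds starting one minute into the file (a sketch, any output options may be added):

ffmpeg -i input.avi -ss 00:01:00 -t 30 output.avi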

-itsoffset offset

Set the input time offset in seconds. [-]hh:mm:ss[.xxx] syntax is also supported. This option affects all the input files that follow it. The offset is added to the timestamps of the input files. Specifying a positive offset means that the corresponding streams are delayed by 'offset' seconds.
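
For example, to delay a separately recorded audio track by half a second relative to the video (a hypothetical invocation; the file names and the ‘-map’ choices are only illustrative):

ffmpeg -i video.avi -itsoffset 0.5 -i audio.wav -map 0:0 -map 1:0 output.avi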

-title string

Set the title.

-timestamp time

Set the timestamp.

-author string

Set the author.

-copyright string

Set the copyright.

-comment string

Set the comment.

-album string

Set the album.

-track number

Set the track.

-year number

Set the year.

-v verbose

Control amount of logging.

-target type

Specify target file type ("vcd", "svcd", "dvd", "dv", "dv50", "pal-vcd", "ntsc-svcd", ... ). All the format options (bitrate, codecs, buffer sizes) are then set automatically. You can just type:

 
ffmpeg -i myfile.avi -target vcd /tmp/vcd.mpg

Nevertheless you can specify additional options as long as you know they do not conflict with the standard, as in:

 
ffmpeg -i myfile.avi -target vcd -bf 2 /tmp/vcd.mpg
-dframes number

Set the number of data frames to record.

-scodec codec

Force subtitle codec ('copy' to copy stream).

-newsubtitle

Add a new subtitle stream to the current output stream.

-slang code

Set the ISO 639 language code (3 letters) of the current subtitle stream.



3.3 Video Options

-b bitrate

Set the video bitrate in bit/s (default = 200 kb/s).

-vframes number

Set the number of video frames to record.

-r fps

Set frame rate (Hz value, fraction or abbreviation), (default = 25).

-s size

Set frame size. The format is ‘wxh’ (ffserver default = 160x128, ffmpeg default = same as source). The following abbreviations are recognized:

sqcif

128x96

qcif

176x144

cif

352x288

4cif

704x576
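
For example, the following two commands request the same 352x288 output size (a sketch):

ffmpeg -i input.avi -s cif output.avi
ffmpeg -i input.avi -s 352x288 output.avi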

-aspect aspect

Set aspect ratio (4:3, 16:9 or 1.3333, 1.7777).

-croptop size

Set top crop band size (in pixels).

-cropbottom size

Set bottom crop band size (in pixels).

-cropleft size

Set left crop band size (in pixels).

-cropright size

Set right crop band size (in pixels).

-padtop size

Set top pad band size (in pixels).

-padbottom size

Set bottom pad band size (in pixels).

-padleft size

Set left pad band size (in pixels).

-padright size

Set right pad band size (in pixels).

-padcolor (hex color)

Set color of padded bands. The value for padcolor is expressed as a six digit hexadecimal number where the first two digits represent red, the middle two digits green and last two digits blue (default = 000000 (black)).
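
As an illustration, cropping 8 pixels from the top and bottom and padding the picture back to its original height with black bands might look like this (a sketch; the values are arbitrary):

ffmpeg -i input.avi -croptop 8 -cropbottom 8 -padtop 8 -padbottom 8 -padcolor 000000 output.avi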

-vn

Disable video recording.

-bt tolerance

Set video bitrate tolerance (in bit/s).

-maxrate bitrate

Set the maximum video bitrate (in bit/s).

-minrate bitrate

Set the minimum video bitrate (in bit/s).

-bufsize size

Set rate control buffer size (in bits).

-vcodec codec

Force video codec to codec. Use the copy special value to specify that the raw codec data must be copied as is.

-sameq

Use same video quality as source (implies VBR).

-pass n

Select the pass number (1 or 2). It is useful to do two pass encoding. The statistics of the video are recorded in the first pass and the video is generated at the exact requested bitrate in the second pass.
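
A typical two-pass run therefore encodes once to gather statistics while throwing the output away, then encodes again for real; a sketch (the bitrate and the use of /dev/null as a dummy output are only illustrative):

ffmpeg -i input.avi -b 1000k -an -pass 1 -f avi -y /dev/null
ffmpeg -i input.avi -b 1000k -pass 2 output.avi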

-passlogfile file

Set two pass logfile name to file.

-newvideo

Add a new video stream to the current output stream.



3.4 Advanced Video Options

-pix_fmt format

Set pixel format.

-g gop_size

Set the group of pictures size.

-intra

Use only intra frames.

-vdt n

Discard threshold.

-qscale q

Use fixed video quantizer scale (VBR).

-qmin q

minimum video quantizer scale (VBR)

-qmax q

maximum video quantizer scale (VBR)

-qdiff q

maximum difference between the quantizer scales (VBR)

-qblur blur

video quantizer scale blur (VBR)

-qcomp compression

video quantizer scale compression (VBR)

-lmin lambda

minimum video lagrange factor (VBR)

-lmax lambda

max video lagrange factor (VBR)

-mblmin lambda

minimum macroblock quantizer scale (VBR)

-mblmax lambda

maximum macroblock quantizer scale (VBR)

These four options (lmin, lmax, mblmin, mblmax) use 'lambda' units, but you may use the QP2LAMBDA constant to easily convert from 'q' units:

 
ffmpeg -i src.ext -lmax 21*QP2LAMBDA dst.ext
-rc_init_cplx complexity

initial complexity for single pass encoding

-b_qfactor factor

qp factor between P- and B-frames

-i_qfactor factor

qp factor between P- and I-frames

-b_qoffset offset

qp offset between P- and B-frames

-i_qoffset offset

qp offset between P- and I-frames

-rc_eq equation

Set rate control equation (see section FFmpeg formula evaluator) (default = tex^qComp).

-rc_override override

rate control override for specific intervals

-me method

Set motion estimation method to method. Available methods are (from lowest to highest quality):

zero

Try just the (0, 0) vector.

phods
log
x1
epzs

(default method)

full

exhaustive search (slow and marginally better than epzs)

-dct_algo algo

Set DCT algorithm to algo. Available values are:

0

FF_DCT_AUTO (default)

1

FF_DCT_FASTINT

2

FF_DCT_INT

3

FF_DCT_MMX

4

FF_DCT_MLIB

5

FF_DCT_ALTIVEC

-idct_algo algo

Set IDCT algorithm to algo. Available values are:

0

FF_IDCT_AUTO (default)

1

FF_IDCT_INT

2

FF_IDCT_SIMPLE

3

FF_IDCT_SIMPLEMMX

4

FF_IDCT_LIBMPEG2MMX

5

FF_IDCT_PS2

6

FF_IDCT_MLIB

7

FF_IDCT_ARM

8

FF_IDCT_ALTIVEC

9

FF_IDCT_SH4

10

FF_IDCT_SIMPLEARM

-er n

Set error resilience to n.

1

FF_ER_CAREFUL (default)

2

FF_ER_COMPLIANT

3

FF_ER_AGGRESSIVE

4

FF_ER_VERY_AGGRESSIVE

-ec bit_mask

Set error concealment to bit_mask. bit_mask is a bit mask of the following values:

1

FF_EC_GUESS_MVS (default = enabled)

2

FF_EC_DEBLOCK (default = enabled)

-bf frames

Use 'frames' B-frames (supported for MPEG-1, MPEG-2 and MPEG-4).

-mbd mode

macroblock decision

0

FF_MB_DECISION_SIMPLE: Use mb_cmp (cannot change it yet in FFmpeg).

1

FF_MB_DECISION_BITS: Choose the one which needs the fewest bits.

2

FF_MB_DECISION_RD: rate distortion

-4mv

Use four motion vectors per macroblock (MPEG-4 only).

-part

Use data partitioning (MPEG-4 only).

-bug param

Work around encoder bugs that are not auto-detected.

-strict strictness

How strictly to follow the standards.

-aic

Enable advanced intra coding (H.263+).

-umv

Enable unlimited motion vectors (H.263+).

-deinterlace

Deinterlace pictures.

-ilme

Force interlacing support in encoder (MPEG-2 and MPEG-4 only). Use this option if your input file is interlaced and you want to keep the interlaced format for minimum losses. The alternative is to deinterlace the input stream with ‘-deinterlace’, but deinterlacing introduces losses.

-psnr

Calculate PSNR of compressed frames.

-vstats

Dump video coding statistics to ‘vstats_HHMMSS.log’.

-vhook module

Insert video processing module. module contains the module name and its parameters separated by spaces.

-top n

top=1/bottom=0/auto=-1 field first

-dc precision

Set the intra DC precision.

-vtag fourcc/tag

Force video tag/fourcc.

-qphist

Show QP histogram.

-vbsf bitstream filter

Bitstream filters available are "dump_extra", "remove_extra", "noise".



3.5 Audio Options

-aframes number

Set the number of audio frames to record.

-ar freq

Set the audio sampling frequency (default = 44100 Hz).

-ab bitrate

Set the audio bitrate in kbit/s (default = 64).

-ac channels

Set the number of audio channels (default = 1).

-an

Disable audio recording.

-acodec codec

Force audio codec to codec. Use the copy special value to specify that the raw codec data must be copied as is.

-newaudio

Add a new audio track to the output file. If you want to specify parameters, do so before -newaudio (-acodec, -ab, etc.).

Mapping is done automatically if the number of output streams equals the number of input streams; otherwise the first one that matches is picked. You can override the mapping using -map as usual.

Example:

 
ffmpeg -i file.mpg -vcodec copy -acodec ac3 -ab 384 test.mpg -acodec mp2 -ab 192 -newaudio
-alang code

Set the ISO 639 language code (3 letters) of the current audio stream.



3.6 Advanced Audio options:

-atag fourcc/tag

Force audio tag/fourcc.

-absf bitstream filter

Bitstream filters available are "dump_extra", "remove_extra", "noise", "mp3comp", "mp3decomp".



3.7 Subtitle options:

-scodec codec

Force subtitle codec ('copy' to copy stream).

-newsubtitle

Add a new subtitle stream to the current output stream.

-slang code

Set the ISO 639 language code (3 letters) of the current subtitle stream.



3.8 Audio/Video grab options

-vc channel

Set video grab channel (DV1394 only).

-tvstd standard

Set television standard (NTSC, PAL (SECAM)).

-isync

Synchronize read on input.



3.9 Advanced options

-map input stream id[:input stream id]

Set stream mapping from input streams to output streams. Just enumerate the input streams in the order you want them in the output. [input stream id] sets the (input) stream to sync against.

-map_meta_data outfile:infile

Set meta data information of outfile from infile.

-debug

Print specific debug info.

-benchmark

Add timings for benchmarking.

-dump

Dump each input packet.

-hex

When dumping packets, also dump the payload.

-bitexact

Only use bit exact algorithms (for codec testing).

-ps size

Set packet size in bits.

-re

Read input at native frame rate. Mainly used to simulate a grab device.

-loop_input

Loop over the input stream. Currently it works only for image streams. This option is used for automatic FFserver testing.

-loop_output number_of_times

Repeatedly loop output for formats that support looping such as animated GIF (0 will loop the output infinitely).

-threads count

Thread count.

-vsync parameter

Video sync method. Video will be stretched/squeezed to match the timestamps; this is done by duplicating and dropping frames. With -map you can select from which stream the timestamps should be taken. You can leave either video or audio unchanged and sync the remaining stream(s) to the unchanged one.

-async samples_per_second

Audio sync method. "Stretches/squeezes" the audio stream to match the timestamps; the parameter is the maximum samples per second by which the audio is changed. -async 1 is a special case where only the start of the audio stream is corrected, without any later correction.
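
For example, to copy the video unchanged and let FFmpeg adjust the audio by up to 1000 samples per second to keep it in sync (a sketch):

ffmpeg -i input.avi -vcodec copy -async 1000 output.avi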



3.10 FFmpeg formula evaluator

When evaluating a rate control string, FFmpeg uses an internal formula evaluator.

The following binary operators are available: +, -, *, /, ^.

The following unary operators are available: +, -, (...).

The following functions are available:

sinh(x)
cosh(x)
tanh(x)
sin(x)
cos(x)
tan(x)
exp(x)
log(x)
squish(x)
gauss(x)
abs(x)
max(x, y)
min(x, y)
gt(x, y)
lt(x, y)
eq(x, y)
bits2qp(bits)
qp2bits(qp)

The following constants are available:

PI
E
iTex
pTex
tex
mv
fCode
iCount
mcVar
var
isI
isP
isB
avgQP
qComp
avgIITex
avgPITex
avgPPTex
avgBPTex
avgTex
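
As an example, the default rate control equation shown under ‘-rc_eq’ could be passed explicitly like this (a sketch; the quotes merely protect the expression from the shell):

ffmpeg -i input.avi -rc_eq 'tex^qComp' -b 500k output.avi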


3.11 Protocols

The filename can be ‘-’ to read from standard input or to write to standard output.
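
For instance, you might chain two FFmpeg processes through a pipe by writing to and reading from ‘-’; this is only a sketch, with ‘-f’ forcing the container format since there is no file name extension to guess it from:

ffmpeg -i input.avi -f mpeg - | ffmpeg -f mpeg -i - output.avi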

FFmpeg also handles many protocols specified with a URL syntax.

Use 'ffmpeg -formats' to see a list of the supported protocols.

The protocol http: is currently used only to communicate with FFserver (see the FFserver documentation). Once FFmpeg is also a video player, it will be used for streaming as well :-)



4. Tips



5. External libraries

FFmpeg can be hooked up with a number of external libraries to add support for more formats.



5.1 AMR

AMR comes in two different flavors, WB and NB. FFmpeg can make use of the AMR WB (floating-point mode) and the AMR NB (both floating-point and fixed-point mode) reference decoders and encoders.



6. Supported File Formats and Codecs

You can use the -formats option to have an exhaustive list.



6.1 File Formats

FFmpeg supports the following file formats through the libavformat library:

Supported File Format         Encoding  Decoding  Comments
MPEG audio                    X         X
MPEG-1 systems                X         X         muxed audio and video
MPEG-2 PS                     X         X         also known as VOB file
MPEG-2 TS                               X         also known as DVB Transport Stream
ASF                           X         X
AVI                           X         X
WAV                           X         X
Macromedia Flash              X         X         Only embedded audio is decoded.
FLV                           X         X         Macromedia Flash video files
Real Audio and Video          X         X
Raw AC3                       X         X
Raw MJPEG                     X         X
Raw MPEG video                X         X
Raw PCM8/16 bits, mulaw/Alaw  X         X
Raw CRI ADX audio             X         X
Raw Shorten audio                       X
SUN AU format                 X         X
NUT                           X         X         NUT Open Container Format
QuickTime                     X         X
MPEG-4                        X         X         MPEG-4 is a variant of QuickTime.
Raw MPEG4 video               X         X
DV                            X         X
4xm                                     X         4X Technologies format, used in some games.
Playstation STR                         X
Id RoQ                                  X         Used in Quake III, Jedi Knight 2, other computer games.
Interplay MVE                           X         Format used in various Interplay computer games.
WC3 Movie                               X         Multimedia format used in Origin's Wing Commander III computer game.
Sega FILM/CPK                           X         Used in many Sega Saturn console games.
Westwood Studios VQA/AUD                X         Multimedia formats used in Westwood Studios games.
Id Cinematic (.cin)                     X         Used in Quake II.
FLIC format                             X         .fli/.flc files
Sierra VMD                              X         Used in Sierra CD-ROM games.
Sierra Online                           X         .sol files used in Sierra Online games.
Matroska                                X
Electronic Arts Multimedia              X         Used in various EA games; files have extensions like WVE and UV2.
Nullsoft Video (NSV) format             X
ADTS AAC audio                X         X
Creative VOC                  X         X         Created for the Sound Blaster Pro.
American Laser Games MM                 X         Multimedia format used in games like Mad Dog McCree
AVS                                     X         Multimedia format used by the Creature Shock game.
Smacker                                 X         Multimedia format used by many games.
GXF                           X         X         General eXchange Format SMPTE 360M, used by Thomson Grass Valley playout servers.
CIN                                     X         Multimedia format used by Delphine Software games.
MXF                                     X         Material eXchange Format SMPTE 377M, used by D-Cinema, broadcast industry.
SEQ                                     X         Tiertex .seq files used in the DOS CDROM version of the game Flashback.

X means that encoding (resp. decoding) is supported.



6.2 Image Formats

FFmpeg can read and write images for each frame of a video sequence. The following image formats are supported:

Supported Image Format  Encoding  Decoding  Comments
PGM, PPM                X         X
PAM                     X         X         PAM is a PNM extension with alpha support.
PGMYUV                  X         X         PGM with U and V components in YUV 4:2:0
JPEG                    X         X         Progressive JPEG is not supported.
.Y.U.V                  X         X         one raw file per component
animated GIF            X         X         Only uncompressed GIFs are generated.
PNG                     X         X         2 bit and 4 bit/pixel not supported yet.
Targa                             X         Targa (.TGA) image format.
TIFF                              X         Only 24 bit/pixel images are supported.
SGI                     X         X         SGI RGB image format

X means that encoding (resp. decoding) is supported.



6.3 Video Codecs

Supported Codec                 Encoding  Decoding  Comments
MPEG-1 video                    X         X
MPEG-2 video                    X         X
MPEG-4                          X         X
MSMPEG4 V1                      X         X
MSMPEG4 V2                      X         X
MSMPEG4 V3                      X         X
WMV7                            X         X
WMV8                            X         X         not completely working
WMV9                                      X         not completely working
VC1                                       X
H.261                           X         X
H.263(+)                        X         X         also known as RealVideo 1.0
H.264                                     X
RealVideo 1.0                   X         X
RealVideo 2.0                   X         X
MJPEG                           X         X
lossless MJPEG                  X         X
JPEG-LS                         X         X         fourcc: MJLS, lossless and near-lossless is supported
Apple MJPEG-B                             X
Sunplus MJPEG                             X         fourcc: SP5X
DV                              X         X
HuffYUV                         X         X
FFmpeg Video 1                  X         X         experimental lossless codec (fourcc: FFV1)
FFmpeg Snow                     X         X         experimental wavelet codec (fourcc: SNOW)
Asus v1                         X         X         fourcc: ASV1
Asus v2                         X         X         fourcc: ASV2
Creative YUV                              X         fourcc: CYUV
Sorenson Video 1                X         X         fourcc: SVQ1
Sorenson Video 3                          X         fourcc: SVQ3
On2 VP3                                   X         still experimental
On2 VP5                                   X         fourcc: VP50
On2 VP6                                   X         fourcc: VP60,VP61,VP62
Theora                          X         X         still experimental
Intel Indeo 3                             X
FLV                             X         X         Sorenson H.263 used in Flash
Flash Screen Video              X         X         fourcc: FSV1
ATI VCR1                                  X         fourcc: VCR1
ATI VCR2                                  X         fourcc: VCR2
Cirrus Logic AccuPak                      X         fourcc: CLJR
4X Video                                  X         Used in certain computer games.
Sony Playstation MDEC                     X
Id RoQ                                    X         Used in Quake III, Jedi Knight 2, other computer games.
Xan/WC3                                   X         Used in Wing Commander III .MVE files.
Interplay Video                           X         Used in Interplay .MVE files.
Apple Animation                           X         fourcc: 'rle '
Apple Graphics                            X         fourcc: 'smc '
Apple Video                               X         fourcc: rpza
Apple QuickDraw                           X         fourcc: qdrw
Cinepak                                   X
Microsoft RLE                             X
Microsoft Video-1                         X
Westwood VQA                              X
Id Cinematic Video                        X         Used in Quake II.
Planar RGB                                X         fourcc: 8BPS
FLIC video                                X
Duck TrueMotion v1                        X         fourcc: DUCK
Duck TrueMotion v2                        X         fourcc: TM20
VMD Video                                 X         Used in Sierra VMD files.
MSZH                                      X         Part of LCL
ZLIB                            X         X         Part of LCL, encoder experimental
TechSmith Camtasia                        X         fourcc: TSCC
IBM Ultimotion                            X         fourcc: ULTI
Miro VideoXL                              X         fourcc: VIXL
QPEG                                      X         fourccs: QPEG, Q1.0, Q1.1
LOCO                                      X
Winnov WNV1                               X
Autodesk Animator Studio Codec            X         fourcc: AASC
Fraps FPS1                                X
CamStudio                                 X         fourcc: CSCD
American Laser Games Video                X         Used in games like Mad Dog McCree
ZMBV                            X         X         Encoder works only on PAL8
AVS Video                                 X         Video encoding used by the Creature Shock game.
Smacker Video                             X         Video encoding used in Smacker.
RTjpeg                                    X         Video encoding used in NuppelVideo files.
KMVC                                      X         Codec used in Worms games.
VMware Video                              X         Codec used in videos captured by VMware.
Cin Video                                 X         Codec used in Delphine Software games.
Tiertex Seq Video                         X         Codec used in DOS CDROM FlashBack game.

X means that encoding (resp. decoding) is supported.



6.4 Audio Codecs

Supported Codec             Encoding  Decoding  Comments
MPEG audio layer 2          IX        IX
MPEG audio layer 1/3        IX        IX        MP3 encoding is supported through the external library LAME.
AC3                         IX        IX        liba52 is used internally for decoding.
Vorbis                      X         X
WMA V1/V2                   X         X
AAC                         X         X         Supported through the external library libfaac/libfaad.
Microsoft ADPCM             X         X
MS IMA ADPCM                X         X
QT IMA ADPCM                          X
4X IMA ADPCM                          X
G.726 ADPCM                 X         X
Duck DK3 IMA ADPCM                    X         Used in some Sega Saturn console games.
Duck DK4 IMA ADPCM                    X         Used in some Sega Saturn console games.
Westwood Studios IMA ADPCM            X         Used in Westwood Studios games like Command and Conquer.
SMJPEG IMA ADPCM                      X         Used in certain Loki game ports.
CD-ROM XA ADPCM                       X
CRI ADX ADPCM               X         X         Used in Sega Dreamcast games.
Electronic Arts ADPCM                 X         Used in various EA titles.
Creative ADPCM                        X         16 -> 4, 8 -> 4, 8 -> 3, 8 -> 2
RA144                                 X         Real 14400 bit/s codec
RA288                                 X         Real 28800 bit/s codec
RADnet                      X         IX        Real low bitrate AC3 codec, liba52 is used for decoding.
AMR-NB                      X         X         Supported through an external library.
AMR-WB                      X         X         Supported through an external library.
DV audio                              X
Id RoQ DPCM                           X         Used in Quake III, Jedi Knight 2, other computer games.
Interplay MVE DPCM                    X         Used in various Interplay computer games.
Xan DPCM                              X         Used in Origin's Wing Commander IV AVI files.
Sierra Online DPCM                    X         Used in Sierra Online game audio files.
Apple MACE 3                          X
Apple MACE 6                          X
FLAC lossless audio                   X
Shorten lossless audio                X
Apple lossless audio                  X         QuickTime fourcc 'alac'
FFmpeg Sonic                X         X         experimental lossy/lossless codec
Qdesign QDM2                          X         there are still some distortions
Real COOK                             X         All versions except 5.1 are supported
DSP Group TrueSpeech                  X
True Audio (TTA)                      X
Smacker Audio                         X
WavPack Audio                         X
Cin Audio                             X         Codec used in Delphine Software games.
Intel Music Coder                     X
Musepack                              X         Only SV7 is supported
DTS Coherent Audio                    X

X means that encoding (resp. decoding) is supported.

I means that an integer-only version is available, too (ensures high performance on systems without hardware floating point support).



7. Platform Specific information



7.1 BSD

BSD make will not build FFmpeg, you need to install and use GNU Make (‘gmake’).



7.2 Windows

To get help and instructions for using FFmpeg under Windows, check out the FFmpeg Windows Help Forum at http://arrozcru.no-ip.org/ffmpeg/.



7.2.1 Native Windows compilation

Notes:



7.2.2 Visual C++ compatibility

FFmpeg will not compile under Visual C++ – and it has too many dependencies on the GCC compiler to make a port viable. However, if you want to use the FFmpeg libraries in your own applications, you can still compile those applications using Visual C++. An important restriction to this is that you have to use the dynamically linked versions of the FFmpeg libraries (i.e. the DLLs), and you have to make sure that Visual-C++-compatible import libraries are created during the FFmpeg build process.

This description of how to use the FFmpeg libraries with Visual C++ is based on Visual C++ 2005 Express Edition Beta 2. If you have a different version, you might have to modify the procedures slightly.

Here are the step-by-step instructions for building the FFmpeg libraries so they can be used with Visual C++:

  1. Install Visual C++ (if you haven't done so already).
  2. Install MinGW and MSYS as described above.
  3. Add a call to ‘vcvars32.bat’ (which sets up the environment variables for the Visual C++ tools) as the first line of ‘msys.bat’. The standard location for ‘vcvars32.bat’ is ‘C:\Program Files\Microsoft Visual Studio 8\VC\bin\vcvars32.bat’, and the standard location for ‘msys.bat’ is ‘C:\msys\1.0\msys.bat’. If this corresponds to your setup, add the following line as the first line of ‘msys.bat’:

    call "C:\Program Files\Microsoft Visual Studio 8\VC\bin\vcvars32.bat"

  4. Start the MSYS shell (file ‘msys.bat’) and type link.exe. If you get a help message with the command line options of link.exe, this means your environment variables are set up correctly, the Microsoft linker is on the path and will be used by FFmpeg to create Visual-C++-compatible import libraries.
  5. Extract the current version of FFmpeg and change to the FFmpeg directory.
  6. Type the command ./configure --enable-shared --disable-static --enable-memalign-hack to configure and, if that didn't produce any errors, type make to build FFmpeg.
  7. The subdirectories ‘libavformat’, ‘libavcodec’, and ‘libavutil’ should now contain the files ‘avformat.dll’, ‘avformat.lib’, ‘avcodec.dll’, ‘avcodec.lib’, ‘avutil.dll’, and ‘avutil.lib’, respectively. Copy the three DLLs to your System32 directory (typically ‘C:\Windows\System32’).

And here is how to use these libraries with Visual C++:

  1. Create a new console application ("File / New / Project") and then select "Win32 Console Application". On the appropriate page of the Application Wizard, uncheck the "Precompiled headers" option.
  2. Write the source code for your application, or, for testing, just copy the code from an existing sample application into the source file that Visual C++ has already created for you. (Note that your source file has to have a .cpp extension; otherwise, Visual C++ won't compile the FFmpeg headers correctly because in C mode, it doesn't recognize the inline keyword.) For example, you can copy ‘output_example.c’ from the FFmpeg distribution (but you will have to make minor modifications so the code will compile under C++, see below).
  3. Open the "Project / Properties" dialog box. In the "Configuration" combo box, select "All Configurations" so that the changes you make will affect both debug and release builds. In the tree view on the left hand side, select "C/C++ / General", then edit the "Additional Include Directories" setting to contain the complete paths to the ‘libavformat’, ‘libavcodec’, and ‘libavutil’ subdirectories of your FFmpeg directory. Note that the directories have to be separated using semicolons. Now select "Linker / General" from the tree view and edit the "Additional Library Directories" setting to contain the same three directories.
  4. Still in the "Project / Properties" dialog box, select "Linker / Input" from the tree view, then add the files ‘avformat.lib’, ‘avcodec.lib’, and ‘avutil.lib’ to the end of the "Additional Dependencies". Note that the names of the libraries have to be separated using spaces.
  5. Now, select "C/C++ / Code Generation" from the tree view. Select "Debug" in the "Configuration" combo box. Make sure that "Runtime Library" is set to "Multi-threaded Debug DLL". Then, select "Release" in the "Configuration" combo box and make sure that "Runtime Library" is set to "Multi-threaded DLL".
  6. Click "OK" to close the "Project / Properties" dialog box and build the application. Hopefully, it should compile and run cleanly. If you used ‘output_example.c’ as your sample application, you will get a few compiler errors, but they are easy to fix. The first type of error occurs because Visual C++ doesn't allow an int to be converted to an enum without a cast. To solve the problem, insert the required casts (this error occurs once for a CodecID and once for a CodecType). The second type of error occurs because C++ requires the return value of malloc to be cast to the exact type of the pointer it is being assigned to. Visual C++ will complain that, for example, (void *) is being assigned to (uint8_t *) without an explicit cast. So insert an explicit cast in these places to silence the compiler. The third type of error occurs because the snprintf library function is called _snprintf under Visual C++. So just add an underscore to fix the problem. With these changes, ‘output_example.c’ should compile under Visual C++, and the resulting executable should produce valid video files.


7.2.3 Cross compilation for Windows with Linux

You must use the MinGW cross compilation tools available at http://www.mingw.org/.

Then configure FFmpeg with the following options:

 
./configure --enable-mingw32 --cross-prefix=i386-mingw32msvc-

(you can change the cross-prefix according to the prefix chosen for the MinGW tools).

Then you can easily test FFmpeg with Wine (http://www.winehq.com/).



7.2.4 Compilation under Cygwin

Cygwin works very much like Unix.

Just install your Cygwin with all the "Base" packages, plus the following "Devel" ones:

 
binutils, gcc-core, make, subversion

Do not install binutils-20060709-1 (they are buggy on shared builds); use binutils-20050610-1 instead.

Then run

 
./configure --enable-static --disable-shared

to make a static build or

 
./configure --enable-shared --disable-static

to build shared libraries.

If you want to build FFmpeg with additional libraries, download Cygwin "Devel" packages for Ogg and Vorbis from any Cygwin packages repository and/or SDL, xvid, faac, faad2 packages from Cygwin Ports (http://cygwinports.dotsrc.org/).



7.2.5 Crosscompilation for Windows under Cygwin

With Cygwin you can create Windows binaries that don't need the cygwin1.dll.

Just install your Cygwin as explained before, plus these additional "Devel" packages:

 
gcc-mingw-core, mingw-runtime, mingw-zlib

and add some special flags to your configure invocation.

For a static build run

 
./configure --enable-mingw32 --enable-memalign-hack --enable-static --disable-shared --extra-cflags=-mno-cygwin --extra-libs=-mno-cygwin

and for a build with shared libraries

 
./configure --enable-mingw32 --enable-memalign-hack --enable-shared --disable-static --extra-cflags=-mno-cygwin --extra-libs=-mno-cygwin


7.3 BeOS

The configure script should guess the configuration itself. Networking support is currently not finished. errno issues fixed by Andrew Bachmann.

Old stuff:

François Revol - revol at free dot fr - April 2002

The configure script should guess the configuration itself, however I still didn't test building on the net_server version of BeOS.

FFserver is broken (needs poll() implementation).

There are still issues with errno codes, which are negative in BeOS and which FFmpeg negates when returning. This ends up turning errors into valid results and then crashes. (To be fixed)



8. Developers Guide



8.1 API



8.2 Integrating libavcodec or libavformat in your program

You can integrate all the source code of the libraries to link them statically to avoid any version problem. All you need is to provide a 'config.mak' and a 'config.h' in the parent directory. See the defines generated by ./configure to understand what is needed.

You can use libavcodec or libavformat in your commercial program, but any patch you make must be published. The best way to proceed is to send your patches to the FFmpeg mailing list.



8.3 Coding Rules

FFmpeg is programmed in the ISO C90 language with a few additional features from ISO C99, namely:

These features are supported by all compilers we care about, so we won't accept patches to remove their use unless they absolutely don't impair clarity and performance.

All code must compile with GCC 2.95 and GCC 3.3. Currently, FFmpeg also compiles with several other compilers, such as the Compaq ccc compiler or Sun Studio 9, and we would like to keep it that way unless it would be exceedingly involved. To ensure compatibility, please don't use any additional C99 features or GCC extensions. Especially watch out for:

Indent size is 4. The presentation is the one specified by 'indent -i4 -kr -nut'. The TAB character is forbidden outside of Makefiles as is any form of trailing whitespace. Commits containing either will be rejected by the Subversion repository.

The main priority in FFmpeg is simplicity and small code size (= fewer bugs).

Comments: Use the JavaDoc/Doxygen format (see examples below) so that code documentation can be generated automatically. All nontrivial functions should have a comment above them explaining what the function does, even if it's just one sentence. All structures and their member variables should be documented, too.

 
/**
 * @file mpeg.c
 * MPEG codec.
 * @author ...
 */

/**
 * Summary sentence.
 * more text ...
 * ...
 */
typedef struct Foobar{
    int var1; /**< var1 description */
    int var2; ///< var2 description
    /** var3 description */
    int var3;
} Foobar;

/**
 * Summary sentence.
 * more text ...
 * ...
 * @param my_parameter description of my_parameter
 * @return return value description
 */
int myfunc(int my_parameter)
...

fprintf and printf are forbidden in libavformat and libavcodec; please use av_log() instead.



8.4 Development Policy

  1. You must not commit code which breaks FFmpeg! (Meaning unfinished but enabled code which breaks compilation or compiles but does not work or breaks the regression tests) You can commit unfinished stuff (for testing etc), but it must be disabled (#ifdef etc) by default so it does not interfere with other developers' work.
  2. You don't have to over-test things. If it works for you, and you think it should work for others, then commit. If your code has problems (portability, triggers compiler bugs, unusual environment etc) they will be reported and eventually fixed.
  3. Do not commit unrelated changes together, split them into self-contained pieces.
  4. Do not change behavior of the program (renaming options etc) without first discussing it on the ffmpeg-devel mailing list. Do not remove functionality from the code. Just improve!

    Note: Redundant code can be removed.

  5. Do not commit changes to the build system (Makefiles, configure script) which change behavior, defaults etc, without asking first. The same applies to compiler warning fixes, trivial looking fixes and to code maintained by other developers. We usually have a reason for doing things the way we do. Send your changes as patches to the ffmpeg-devel mailing list, and if the code maintainers say OK, you may commit. This does not apply to files you wrote and/or maintain.
  6. We refuse source indentation and other cosmetic changes if they are mixed with functional changes, such commits will be rejected and removed. Every developer has his own indentation style, you should not change it. Of course if you (re)write something, you can use your own style, even though we would prefer if the indentation throughout FFmpeg was consistent (Many projects force a given indentation style - we don't.). If you really need to make indentation changes (try to avoid this), separate them strictly from real changes.

    NOTE: If you have to put an if(){ .. } around a large (> 5 lines) chunk of code, then either do NOT change the indentation of the inner part (do not move it to the right), or do so in a separate commit.

  7. Always fill out the commit log message. Describe in a few lines what you changed and why. You can refer to mailing list postings if you fix a particular bug. Comments such as "fixed!" or "Changed it." are unacceptable.
  8. If you apply a patch by someone else, include the name and email address in the log message. Since the ffmpeg-cvslog mailing list is publicly archived you should add some SPAM protection to the email address. Send an answer to ffmpeg-devel (or wherever you got the patch from) saying that you applied the patch.
  9. Do NOT commit to code actively maintained by others without permission. Send a patch to ffmpeg-devel instead. If no one answers within a reasonable timeframe (12h for build failures and security fixes, 3 days for small changes, 1 week for big patches), then commit your patch if you think it is OK. Also note, the maintainer can simply ask for more time to review!
  10. Subscribe to the ffmpeg-cvslog mailing list. The diffs of all commits are sent there and reviewed by all the other developers. Bugs and possible improvements or general questions regarding commits are discussed there. We expect you to react if problems with your code are uncovered.
  11. Update the documentation if you change behavior or add features. If you are unsure how best to do this, send a patch to ffmpeg-devel, the documentation maintainer(s) will review and commit your stuff.
  12. Never write to unallocated memory, never write over the end of arrays, always check values read from some untrusted source before using them as array index or other risky things.
  13. Remember to check if you need to bump versions for the specific libav parts (libavutil, libavcodec, libavformat) you are changing. You need to change the version integer and the version string. Incrementing the first component means no backward compatibility to previous versions (e.g. removal of a function from the public API). Incrementing the second component means backward compatible change (e.g. addition of a function to the public API). Incrementing the third component means a noteworthy binary compatible change (e.g. encoder bug fix that matters for the decoder).
  14. If you add a new codec, remember to update the changelog, add it to the supported codecs table in the documentation and bump the second component of the ‘libavcodec’ version number appropriately. If it has a fourcc, add it to ‘libavformat/avienc.c’, even if it is only a decoder.

We think our rules are not too hard. If you have comments, contact us.

Note, these rules are mostly borrowed from the MPlayer project.



8.5 Submitting patches

First, read the Coding Rules section above if you have not done so yet.

When you submit your patch, try to send a unified diff (diff '-up' option). I cannot read other diffs :-)
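
For example, from a Subversion checkout something like the following produces a suitable unified diff (the file name is only illustrative):

svn diff > my-change.patch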

Also please do not submit patches which contain several unrelated changes. Split them into individual self-contained patches; this makes reviewing them much easier.

Run the regression tests before submitting a patch so that you can verify that there are no big problems.

Patches should be posted as base64 encoded attachments (or any other encoding which ensures that the patch won't be trashed during transmission) to the ffmpeg-devel mailing list, see http://lists.mplayerhq.hu/mailman/listinfo/ffmpeg-devel

It also helps quite a bit if you tell us what the patch does (for example 'replaces lrint by lrintf'), and why (for example '*BSD isn't C99 compliant and has no lrint()')

We reply to all submitted patches and either apply or reject with some explanation why, but sometimes we are quite busy so it can take a week or two.



8.6 Regression tests

Before submitting a patch (or committing to the repository), you should at least test that you did not break anything.

The regression tests build a synthetic video stream and a synthetic audio stream. These are then encoded and decoded with all codecs or formats. The CRC (or MD5) of each generated file is recorded in a result file. A 'diff' is launched to compare the reference results and the result file.

The regression tests then go on to test the FFserver code with a limited set of streams. It is important that this step runs correctly as well.

Run 'make test' to test all the codecs and formats.

Run 'make fulltest' to test all the codecs, formats and FFserver.

[Of course, some patches may change the results of the regression tests. In this case, the reference results of the regression tests shall be modified accordingly].


[Top] [Contents] [Index] [ ? ]

About This Document

This document was generated by Build Daemon user on March, 12 2008 using texi2html 1.78.

The buttons in the navigation panels have the following meaning:

Button Name Go to From 1.2.3 go to
[ < ] Back Previous section in reading order 1.2.2
[ > ] Forward Next section in reading order 1.2.4
[ << ] FastBack Beginning of this chapter or previous chapter 1
[ Up ] Up Up section 1.2
[ >> ] FastForward Next chapter 2
[Top] Top Cover (top) of document  
[Contents] Contents Table of contents  
[Index] Index Index  
[ ? ] About About (help)  

where the Example assumes that the current position is at Subsubsection One-Two-Three of a document of the following structure:


This document was generated by Build Daemon user on March, 12 2008 using texi2html 1.78.