Thursday, January 31, 2008

How to parse MPEG4 data from an RTP stream and identify the header using the SDP file?

I analysed RTP packets received from the middle of a transmission and tested the SDP file with QuickTime and the VLC player.
Both QuickTime and VLC play the RTP stream through the SDP file, so two things follow from this:

1. Without the SDP file, players can't receive the RTP stream data.
Reason:
If I open the RTP stream directly in the VLC player (without the SDP file), it does not receive the RTP stream data and gives an "unable to open" error.

2. Based on the SDP file, we have to search the stream for the VOL start code. A VOP is present for each frame, and successive VOPs can be I-frames or P-frames. The VOL header must carry information about the width and height of the video.

So by combining the VOP information with the SDP file information, we can create the header for the MPEG4 encoded data. But the VOP start code (00 00 01 B6) is not enough on its own: it appears in more than one consecutive RTP packet, because a frame may be split across several packets, each carrying only part of the frame.

The VOL header carries more of the video info, so it holds the header information we need. Therefore, if we start receiving from the middle of the RTP stream, we have to search for the next VOL header.
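
As a minimal sketch of that search (the helper name FindStartCode is mine, and it assumes the reassembled elementary-stream bytes are already in one buffer):

#include <cstddef>

// Returns the byte offset of the start code 00 00 01 <code>, or -1 if absent.
long FindStartCode(const unsigned char* pBuf, size_t cbBuf, unsigned char code)
{
    for (size_t i = 0; i + 3 < cbBuf; ++i)
    {
        if (pBuf[i] == 0x00 && pBuf[i + 1] == 0x00 &&
            pBuf[i + 2] == 0x01 && pBuf[i + 3] == code)
            return (long)i;
    }
    return -1;
}

// Usage: the VOP start code is 00 00 01 B6, so FindStartCode(p, cb, 0xB6)
// finds the next frame; VOL start codes are 00 00 01 20 through 00 00 01 2F,
// so for a single-VOL stream FindStartCode(p, cb, 0x20) usually finds the
// VOL header.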

How to create memory for an IMediaSample queue?

We can create an array of IMediaSample pointers as follows:
typedef IMediaSample* PIMediaSample;
PIMediaSample* pSamples = new PIMediaSample[100]; // memory allocation
delete[] pSamples; // memory release

For queueing samples on the output pin, we can use the COutputQueue class, or we can use our own queue (std::queue here):

// An IMediaSample cannot be created with new; it comes from the pin's
// allocator (e.g. CBaseOutputPin::GetDeliveryBuffer()).
std::queue<IMediaSample*> samplesQ;
BYTE* pbData;
BYTE* pbFrameBytes;       // frame bytes to be queued
unsigned long FrameSize;
IMediaSample* pSample;    // obtained from the allocator

pSample->GetPointer(&pbData);
memcpy(pbData, pbFrameBytes, FrameSize);
samplesQ.push(pSample);   // the sample is added to the queue

IMediaSample* pOut = samplesQ.front();
samplesQ.pop();           // the sample is removed from the queue
Deliver(pOut);
pOut->Release();          // here the sample is released

Otherwise we may use COutputQueue, which supports queueing of media samples:
COutputQueue* pOutputQueue;
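
A rough usage sketch (the constructor parameters are those of COutputQueue in the base classes' outputq.h; the values passed here are illustrative, and m_pOutput/pSample come from the surrounding filter code):

HRESULT hr = S_OK;
COutputQueue* pOutputQueue = new COutputQueue(m_pOutput, &hr,
                                              FALSE,  // bAuto
                                              TRUE,   // bQueue: deliver on a worker thread
                                              1,      // lBatchSize
                                              FALSE,  // bBatchExact
                                              10,     // lListSize
                                              THREAD_PRIORITY_NORMAL);

pOutputQueue->Receive(pSample); // queue the sample for delivery downstream
pOutputQueue->EOS();            // signal end-of-stream when finished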

How to Receive an RTP Stream in the Middle of the Transmission?

Currently we must receive the RTP stream from the start onwards...
Only then does it work correctly; otherwise it may not work out. So do this exercise:
1. Receive the RTP stream from the middle of the transmission and dump it into a file.
2. Pass this dump file to ffmpeg and decode it by specifying the profile-level-id, specify the format as m4v, and see if we can specify the ES.
ffmpeg also decodes the .m4v file well.


Previously I thought an SDP file could be played only with QuickTime, since QuickTime has a kernel-based capture driver to receive data from the socket, and QuickTime processes keep running in the background. But it is not like that. Later I played the SDP file with the VLC player, and VLC does not have an exe running in the background. So we can play the MPEG4 video elementary stream from the middle of the stream; further stream info is not needed.

SDP protocol, SDP file Format

SDP protocol :
----------------

An .SDP file can be played in the VLC player and the QuickTime player.

So without any headers, how is it possible to decode the data?
What info do we get from a sample SDP file?
1. MP4V-ES (the video payload format)
2. IP address and port on which to receive data
3. payload type (96)
4. profile-level-id (an argument passed for the MP4V-ES stream)

Another way of Implementing the Distance Learning System

Instead of the filter-based approach, we can also use the application approach.

The application receives data from the socket and sends the received packets to the MPEG4 decoder. From the MPEG4 decoder, we get the raw YUV in 420P format. From here we can convert it to RGB and render it on screen.
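
A skeletal receive/decode/render loop for that approach might look like the following; recvfrom() is the real Winsock call, while DecodeMpeg4ToYuv420, Yuv420ToRgb and RenderRgb are hypothetical names standing in for whatever decoder and renderer are used:

// Hypothetical application-level pipeline: socket -> decoder -> renderer.
void ReceiveDecodeRenderLoop(SOCKET sock, int width, int height,
                             BYTE* yuvFrame, BYTE* rgbFrame)
{
    char packet[2048];
    for (;;)
    {
        int cb = recvfrom(sock, packet, sizeof(packet), 0, NULL, NULL);
        if (cb <= 0)
            break;

        // Strip the 12-byte fixed RTP header and feed the payload to the decoder.
        if (DecodeMpeg4ToYuv420((BYTE*)packet + 12, cb - 12, yuvFrame))
        {
            Yuv420ToRgb(yuvFrame, rgbFrame, width, height);
            RenderRgb(rgbFrame, width, height);
        }
    }
}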

Filter needed for Distance Learning System

Filters Needed For Distance Learning System:
-------------------------------------------------
Based on RTP:
-----------------
1.RTP Video Sender Filter
2.RTP Video Receiver Filter
3.RTP Audio Sender Filter
4.RTP Audio Receiver Filter
5.MPEG4 Video Encoder Filter
6.MPEG4 Video Decoder Filter
7.MPEG4 Audio Encoder Filter
8.MPEG4 Audio Decoder Filter
9.Audio and Video Sync Filter


Based On RTSP:
------------------
1.RTSP Sender Filter
2.RTSP Receiver Filter
3.MPEG4 Multiplexer Filter
4.MPEG4 Demultiplexer Filter
5.MPEG4 Video Encoder Filter
6.MPEG4 Video Decoder Filter
7.MPEG4 Audio Encoder Filter
8.MPEG4 Audio Decoder Filter

Additional Features :
---------------------
1.MPEG4 Writer Filter for writing it into an MP4 File

Wednesday, January 30, 2008

How can we identify the keyframe in MPEG4 encoded data?

How can we identify the keyframe in MPEG4 encoded data?...
Within the MPEG4 encoded data, the configuration information is present. Every vop_start_code (00 00 01 B6) indicates a new frame.
Set the keyframe flag for the first frame, then count the vop_start_codes to identify the following keyframes:
if the keyframe interval is 30, then whenever FrameCount % 30 == 0, that frame is a keyframe.
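
A small sketch of that counting approach (the interval of 30 is an assumed GOP length; the names are mine):

const int kKeyFrameInterval = 30; // assumed GOP length
int frameCount = 0;

// Call once per detected vop_start_code (00 00 01 B6).
bool IsKeyFrameByCount()
{
    return (frameCount++ % kKeyFrameInterval) == 0;
}

// More reliable alternative: the two bits immediately after the start code
// are vop_coding_type (00 = I-VOP, i.e. a keyframe; 01 = P-VOP).
bool IsIVop(const unsigned char* pAfterStartCode)
{
    return ((pAfterStartCode[0] >> 6) & 0x3) == 0;
}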


How can we develop the source filter for the RTP receiver and set its media type?

How can we develop the source filter for the RTP receiver?...
We can do the following.

1. Develop the source filter with a dynamic output pin; that is, the output pin's buffer size may vary.

2. The RTP receiver filter must send the output data to its output pin frame by frame. If a frame is wrapped over more than one RTP packet, it has to be reassembled first. For this, develop a separate class which strips the RTP headers and assembles the payloads into a frame.
3. Look at the MWS source filter for a dynamic output pin.
for (int i = 0; i < GetStreamCount(); i++)
{
    StreamInfo* stInfo = GetStreamInfo(i);
    if (stInfo->id == VIDEO)
    {
        // Video is available in this stream
        stInfo->GetVideoFormat();
    }
    else if (stInfo->id == AUDIO)
    {
        // Audio is available in this stream
        stInfo->GetAudioFormat();
    }
    stInfo->GetExtraInfo(); // extra header...
}
1.GetStreamInfo()
2.TrackId

The VideoInfoHeader must also be set at the time of creating the output pin.

How can we identify the MPEG2VIDEOINFO header's dwSequenceHeader from an MPEG4 encoded buffer (received from RTP)?

How can we identify the MPEG2VIDEOINFO header's dwSequenceHeader from an MPEG4 encoded buffer (received from RTP)?

- Search for the group_of_vop_start_code (00 00 01 B3, checked without spaces). The bytes before the group_of_vop_start_code are taken as the MPEG2VIDEOINFO's dwSequenceHeader.
From the VOS headers at the start, we can identify the width and height of the video; otherwise we can hardcode the data with the specified configuration at the receiver side.

group_of_vop_start_code - 00 00 01 B3 - search for 00 00 01 B3; the bytes before this code are added as the header format.


// dwSequenceHeader is an inline DWORD array inside MPEG2VIDEOINFO; copy the
// header bytes into it rather than assigning a pointer to it.
MPEG2VIDEOINFO* pVidInfo; // points into the media type's format block
pVidInfo->cbSequenceHeader = cbHeader;
memcpy(pVidInfo->dwSequenceHeader, pbBytesBefore000001B3, cbHeader);
BYTE* pbHeader = (BYTE*)pVidInfo->dwSequenceHeader; // reading it back
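
Putting it together, here is a hedged sketch that builds the media type from a reassembled buffer (it reuses the FindStartCode helper sketched earlier; pbBuffer, cbBuffer and the surrounding plumbing are assumptions):

long posB3 = FindStartCode(pbBuffer, cbBuffer, 0xB3);
if (posB3 > 0)
{
    ULONG cbHeader = (ULONG)posB3; // everything before 00 00 01 B3
    CMediaType mt;
    mt.SetType(&MEDIATYPE_Video);
    mt.SetFormatType(&FORMAT_MPEG2Video);
    ULONG cbFormat = FIELD_OFFSET(MPEG2VIDEOINFO, dwSequenceHeader) + cbHeader;
    MPEG2VIDEOINFO* pVidInfo = (MPEG2VIDEOINFO*)mt.AllocFormatBuffer(cbFormat);
    ZeroMemory(pVidInfo, cbFormat);
    pVidInfo->cbSequenceHeader = cbHeader;
    memcpy(pVidInfo->dwSequenceHeader, pbBuffer, cbHeader);
}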


Tuesday, January 29, 2008

CMediaType Linker Error

I got the following CMediaType linker error:

Creating library Debug/HDSceneDetectorAPI.lib and object Debug/HDSceneDetectorAPI.exp
CDxFilterGraph.obj : error LNK2001: unresolved external symbol "public: __thiscall CMediaType::CMediaType(void)" (??0CMediaType@@QAE@XZ)
CDxFilterGraph.obj : error LNK2001: unresolved external symbol "public: __thiscall CMediaType::~CMediaType(void)" (??1CMediaType@@QAE@XZ)
Debug/HDSceneDetectorAPI.dll : fatal error LNK1120: 2 unresolved externals


Solution :
-------------
For this problem, I added "C:\DXSDK\samples\C++\DirectShow\BaseClasses\Debug_Unicode\strmbasd.lib"
under Linker --> Input settings.
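
Equivalently (a known alternative, not something from the original note), the library can be pulled in from source with a pragma:

// Debug build of the DirectShow base classes; use strmbase.lib for release builds.
#pragma comment(lib, "strmbasd.lib")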

Dynamic Output Pin Buffer Size for a Filter - Look at the DSNetwork Receiver Filter

Regarding dynamic output buffers and their size, see the DSNetwork receiver filter. The DSNetwork receiver filter outputs a dynamic buffer size on its output pin; the DSNetwork filter implements its own allocator for the output pin.


How to decode the RTP MPEG4 dump (received bits)?

How to decode the RTP MPEG4 dump :
-----------------------------------------
I added the network receiver filter and dumped the received data to a file. I sent the RTP stream using ffmpeg as follows: ffmpeg -i "D:\highway.avi" -s 176x144 -vcodec mpeg4 -f rtp rtp://127.0.0.1:5000/
I got an error, so I disabled audio with the -an option: ffmpeg -i "D:\highway.avi" -s 176x144 -an -vcodec mpeg4 -f rtp rtp://127.0.0.1:5000/ Now it sends data over RTP successfully.
Using the RTP receiver filter, I received the RTP data and dumped it to a file using the Dump filter.
I passed the RTP dump as input to ffmpeg as follows:
ffmpeg -vcodec mpeg4 -f rtp -i "D:\RTP.dump" -vcodec rawvideo "D:\RTP.YUV"
I played it with a YUV file player; with some flickering, it plays the RTP MPEG data well.

How to Show the File Dialog before inserting a Filter

How to show the File dialog before inserting a filter:
--------------------------------------------------------
If we implement IFileSourceFilter or IFileSinkFilter, we get a file dialog
while inserting the filter in GraphEdit. If we implement IFileSourceFilter, the File Open dialog box is shown; if we implement IFileSinkFilter, the File Save dialog box is shown.
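
A minimal declaration sketch (the class name is mine, and the pin methods of CBaseFilter are omitted): the filter exposes IFileSourceFilter through NonDelegatingQueryInterface and implements its two methods, and GraphEdit then shows the Open dialog and calls Load() with the chosen file:

class CMySourceFilter : public CBaseFilter, public IFileSourceFilter
{
public:
    DECLARE_IUNKNOWN

    STDMETHODIMP NonDelegatingQueryInterface(REFIID riid, void** ppv)
    {
        if (riid == IID_IFileSourceFilter)
            return GetInterface((IFileSourceFilter*)this, ppv);
        return CBaseFilter::NonDelegatingQueryInterface(riid, ppv);
    }

    // Called by GraphEdit with the file name chosen in the Open dialog.
    STDMETHODIMP Load(LPCOLESTR pszFileName, const AM_MEDIA_TYPE* pmt);
    STDMETHODIMP GetCurFile(LPOLESTR* ppszFileName, AM_MEDIA_TYPE* pmt);
};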

Monday, January 28, 2008

MPEG4 .m4v file (it is in raw format)

Transport Stream :
----------------------
We opened the .mp4 file in GraphEdit, then:
1. Dumped the MPEG2VIDEOINFO's dwSequenceHeader into a file:
BYTE* pbData = (BYTE*)mpg2VideoInfo->dwSequenceHeader;
cbSequenceHeader holds the size of the header.
2. Dumped the encoded data into a file.
Now we create a new file by merging the header and the encoded content and name it .m4v; we can play this file in a media player. Otherwise name it .bits, and we can decode this file using ffmpeg as follows:
ffmpeg -vcodec mpeg4 -i "EncodedWithHdr.bits" -vcodec rawvideo EncodedWithHdr.yuv
But the Program Stream and the Transport Stream are not always the same.

We can also create the .m4v file with the following command:
ffmpeg -i D:\highway.avi -f m4v D:\highway.m4v

Program Stream and TransportStream Difference

There is no difference between a Program Stream and a Transport Stream, except the headers.

A TS has VOS, VO and VOL headers, but a PS (Program Stream) directly has the VOL headers. Sometimes we refer to it as an Elementary Stream.

If a packet has only video or only audio, we call it an Elementary Stream.
If a packet contains both audio and video, we call it a Program Stream.
If a packet contains one or more program streams, we call it a Transport Stream.
We can carry more channels over a TS.
we can carry more channels over the TS.

Thursday, January 24, 2008

ffmpeg RTP streaming


ffmpeg -vcodec mpeg4 -i D:\Media\vel1.mp4 -an -s 176x144 -vcodec rawvideo -f rtp rtp://127.0.0.1:5000/

While running this on the command line, ffmpeg printed the following information.


I copied the following content into a rawvideo.sdp file and opened it in the
QuickTime player. Now the video plays.


SDP: (this content is generated by ffmpeg)
--------
v=0
o=- 0 0 IN IPV4 127.0.0.1
t=0 0
s=No Name
a=tool:libavformat
c=IN IP4 127.0.0.1
m=video 5000 RTP/AVP 96
a=rtpmap:96 MP4V-ES/90000
a=fmtp:96 profile-level-id=1




Tuesday, January 22, 2008

How to get the MPEG4 encoded bitstream headers within a MPEG4 Decoder Filter

1. I opened the file (highway176x144_30fps.mp4) in GraphEdit.

I dumped the MPEG encoded contents into a file using the following graph:

MP4 Source Filter -> Sample Grabber Filter ->Dump Filter ( highway176x144_30fps.bits)

If I pass the dump file to the MPEG4 decoder application, it gives an error
(because the encoded content also needs some header information, which is missing).

2. I used my Temporal Decoder filter to open the same MP4 file (highway176x144_30fps.mp4):

MP4 Source Filter -> Temporal Decoder Filter -> Video Renderer

Within my Temporal Decoder, I checked the MPEG2VIDEOINFO header of the Temporal Decoder filter's input pin.

The MPEG2VIDEOINFO header has fields such as the following:

DWORD cbSequenceHeader; // Size of the Encoded contents Header
DWORD dwSequenceHeader[1]; // Contains header info.

We can cast it as follows to get the buffer:

BYTE* pbData =(BYTE*) mpg2VideoInfo->dwSequenceHeader;
I dumped this header data to a file (highway176x144_30fps.mpeg4hdr).


3. I created a sample application to merge these two files; a sketch of it follows the steps below.

The new file (highway_mpeg4.bits) must be built as follows:

0. Create a new file (highway_mpeg4.bits)
i. Append the contents of the highway176x144_30fps.mpeg4hdr file to highway_mpeg4.bits
ii. Append the contents of the highway176x144_30fps.bits file to highway_mpeg4.bits
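
A small sketch of that merge step (file names are those from this post; AppendFileContents is my helper name, and error handling is omitted):

#include <cstdio>

// Appends the contents of pszSrc to the already-open destination file.
static void AppendFileContents(FILE* pDst, const char* pszSrc)
{
    FILE* pSrc = fopen(pszSrc, "rb");
    char buf[4096];
    size_t cb;
    while ((cb = fread(buf, 1, sizeof(buf), pSrc)) > 0)
        fwrite(buf, 1, cb, pDst);
    fclose(pSrc);
}

int main()
{
    FILE* pDst = fopen("highway_mpeg4.bits", "wb");
    AppendFileContents(pDst, "highway176x144_30fps.mpeg4hdr"); // header first
    AppendFileContents(pDst, "highway176x144_30fps.bits");     // then the encoded data
    fclose(pDst);
    return 0;
}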

4. Next I did the following.

I used ffmpeg to decode the merged MPEG4 contents as follows:

ffmpeg -s 176x144 -vcodec mpeg4 -i d:\media\MpegInput\highway_mpeg4.bits -vcodec rawvideo D:\media\MpegInput\highway_mpeg4.yuv

ffmpeg successfully generated the output file "D:\media\MpegInput\highway_mpeg4.yuv".


I opened this YUV file in a YUV player; it works fine.

Thursday, January 17, 2008

How can we test the Temporal Decoder Filter Framework

How can we test the Temporal Decoder Filter Framework :
---------------------------------------------------------
1. For the Temporal Decoder, the input buffer size may vary...

So implement a custom allocator for the input pin.

2. Open some image file using OpenCV.
Based on the image size, allocate the output pin's buffer size in the DecideBufferSize() fn (a sketch follows this list).
Copy the image buffer into the output buffer.
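
A hedged sketch of that DecideBufferSize() override (the member names m_nWidth/m_nHeight for the opened image are assumptions):

HRESULT CTemporalDecoder::DecideBufferSize(IMemAllocator* pAlloc,
                                           ALLOCATOR_PROPERTIES* pProp)
{
    pProp->cBuffers = 1;
    pProp->cbBuffer = m_nWidth * m_nHeight * 3; // one RGB24 frame of the image

    ALLOCATOR_PROPERTIES actual;
    HRESULT hr = pAlloc->SetProperties(pProp, &actual);
    if (FAILED(hr))
        return hr;

    // The allocator may grant less than requested; treat that as failure.
    return (actual.cbBuffer < pProp->cbBuffer) ? E_FAIL : S_OK;
}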

In this way we can test the Temporal Decoder filter...

If we print the input (source) sample size and the output (destination) sample size,
we can see that the input buffer size varies.

But the Output Buffer size is always constant.

Construct the Filter graph as follows in GraphEdit :

MP4 Video --> Temporal Decoder --> Video Renderer

Run the graph; if it runs and displays the image properly, then it is fine...

Error While Overriding virtual methods of Directshow Baseclasses

Error While Overriding virtual methods of Directshow Baseclasses:
---------------------------------------------------------------


While overriding some functions,
I got an error...


class CMyTransformInputPin : public CTransformInputPin
{
public :
HRESULT Receive(IMediaSample* pSample);
};

HRESULT CMyTransformInputPin :: Receive(IMediaSample* pSample)
{
HRESULT hr = S_OK;
hr = CTransformInputPin::Receive(pSample);
return hr;
}


I got an error as follows :

error C2555: 'CTemporalInputPin::Receive' : overriding virtual function differs from 'CTransformInputPin::Receive' only by return type or calling convention
baseclasses\transfrm.h(33) : see declaration of 'CTransformInputPin'


Solution :
-------------

I modified it as follows :


class CMyTransformInputPin : public CTransformInputPin
{
public :
STDMETHODIMP Receive(IMediaSample* pSample); // Modification is Here...
};

HRESULT CMyTransformInputPin :: Receive(IMediaSample* pSample)
{
HRESULT hr = S_OK;
hr = CTransformInputPin::Receive(pSample);
return hr;
}

Now it is working. The STDMETHODIMP macro expands as follows:



#define STDMETHODIMP HRESULT STDMETHODCALLTYPE


The STDMETHODCALLTYPE macro definition is as follows:


#ifdef _68K_
#define STDMETHODCALLTYPE __cdecl
#else
#define STDMETHODCALLTYPE __stdcall
#endif


So the override differed in calling convention, and that is why I got the error...

GetMediaType() fn 's Execution in a DShow Filter

About GetMediaType() fn :
----------------------------
1. I open an image file in the GetMediaType() fn of the Temporal Decoder filter.
2. I added this filter to GraphEdit as follows:

Mp4 Video -> Temporal Decoder Filter ->Video Renderer
It works fine...
I saved this filter graph to a .grf (graph) file.

If I open the graph (.grf) file directly in GraphEdit, it gives an error,

because GetMediaType() is not called in the filter, so invalid memory is copied into my filter.

This causes the error in GraphEdit while running the graph.

So instead of opening the graph file,

open the MP4 file in GraphEdit and build the graph manually every time, as follows:


MP4 Video --> Temporal Decoder --> Video Renderer


How to develop only one custom Pin for a CTransformFilter

How to override only one Pin of a CTransformFilter?
-------------------------------------------------------

1. I want to implement the allocator for the transform input pin.

How can we do this?

I developed my own CTransformInputPin-derived class as follows:


class CMyAllocator : public CMemAllocator
{

};

class CNewTransformInputPin : public CTransformInputPin
{
friend class CMyAllocator;
};


I initialized my "CNewTransformInputPin" as follows :


class CMyTransformFilter : public CTransformFilter
{
public:
CMyTransformFilter()
{
m_pInput = new CNewTransformInputPin();
}
};


While running the graph, GraphEdit hung, and I had to close the application.

Solution :
-------------------

I modified it as follows :



class CMyTransformFilter : public CTransformFilter
{
public:
CBasePin* GetPin(int i)
{
if( m_pInput == NULL)
{
m_pInput = new CNewTransformInputPin();
}

if( m_pOutput == NULL)
{
m_pOutput = new CTransformOutputPin();
}

if( i== 0) { return m_pInput;}
else if (i == 1) { return m_pOutput;}
else return NULL;
}
};





Now we can override a single pin in a transform filter by overriding the CBasePin* GetPin() fn...

Wednesday, January 16, 2008

Dynamic Buffer Size for input or Output Pin of a Filter

Things learnt today :
--------------------------
I want to identify the MPEG4 decoder's maximum input buffer size.

So what we need to do is:

1.Open the MP4 file in a GraphEdit
2.GraphEdit is as follows :

MP4 Video Source Filter --> ffdshow video decoder --> Video Renderer

3. I developed a Null Transform filter, which accepts any input media type and outputs the data on its output pin,
and made the graph as follows:

MP4Video Source Filter --> Null Transform filter -> ffdshow video decoder --> Video Renderer

While running the graph, I got no video on the video renderer, and I got an error.

So it is not working well...

4. Next I removed the Null Transform filter and checked with the DirectShow Sample Grabber filter from the DXSDK samples.
It works. So what I did was simply print the media sample size within the Sample Grabber.

The Sample Grabber filter's media sample size varies; the media sample size is not fixed.

How did they manage it? The Null Transform filter accepts a constant input size, and that is what causes the error.
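
The size check itself is one call on IMediaSample; a sketch of the kind of logging used here, placed for example in a transform filter's Transform() (the class name is mine):

HRESULT CNullTransform::Transform(IMediaSample* pIn, IMediaSample* pOut)
{
    // GetSize() is the allocator's buffer capacity; GetActualDataLength()
    // is the number of bytes this particular sample really carries.
    long cbActual = pIn->GetActualDataLength();
    long cbBuffer = pIn->GetSize();
    DbgLog((LOG_TRACE, 1, TEXT("sample: %ld of %ld bytes"), cbActual, cbBuffer));
    // ... actual transform omitted ...
    return S_OK;
}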


Conclusion :
--------------

To accept a dynamic input size, we have to implement a custom allocator for the input pin.

You can check this within the Sample Grabber filter:
the Sample Grabber filter's input pin uses the CMemAllocator class.

So if we want to develop an encoder, the encoder's output pin buffer size will vary dynamically,
so we have to implement a custom allocator for the output pin of the encoder.
If we want to develop a decoder, the decoder's input pin buffer size will vary dynamically,
so we have to implement a custom allocator for the input pin of the decoder.


Friday, January 11, 2008

How to call callback functions repeatedly ?

All the callbacks are of the following form:

WSARecv( AsyncCallback);

When data is received on the socket, the AsyncCallback() fn is called.
How can we repeatedly read data from the socket? By calling the WSARecv() fn in a thread that runs repeatedly?

No. We can achieve this as follows:


WSARecv(AsyncCallback);


AsyncCallback()
{
    // call the WSARecv() fn again based on some condition...
}

For Example,

AsyncCallback()
{
if( bStop == false)
{
WSARecv(AsyncCallback);
}
}



Another way is:


bool bContinueLoop = true;
RecvSocket()
{
WSARecv(AsyncCallback);
}

AsyncCallback()
{
if(bContinueLoop)
{
RecvSocket();
}

}

StopRecvSocket()
{
bContinueLoop = false;
}
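
The real WSARecv() call takes more parameters than the shorthand above. A fuller hedged sketch using an overlapped completion routine (note that the calling thread must enter an alertable wait, e.g. SleepEx(), for the completion routine to run):

#include <winsock2.h>

static volatile bool g_bContinueLoop = true;
static SOCKET g_sock;
static char g_buf[2048];

static void PostRecv();

// Completion routine: called when the overlapped WSARecv() finishes.
static void CALLBACK AsyncCallback(DWORD dwError, DWORD cbTransferred,
                                   LPWSAOVERLAPPED lpOverlapped, DWORD dwFlags)
{
    if (dwError == 0 && g_bContinueLoop)
    {
        // ... process cbTransferred bytes from g_buf ...
        PostRecv(); // re-arm the receive
    }
}

static void PostRecv()
{
    static WSAOVERLAPPED ov = {0};
    WSABUF wsaBuf = { (u_long)sizeof(g_buf), g_buf };
    DWORD dwFlags = 0;
    WSARecv(g_sock, &wsaBuf, 1, NULL, &dwFlags, &ov, AsyncCallback);
}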

Monday, January 07, 2008

How to Implement DMO filters

Implement DMO filters :
-----------------------
1. If we want to develop an in-place filter,

we have to implement the IMediaObjectInPlace interface.

2. If we want to develop a filter like a transform filter,

we have to implement the IMediaObject interface.


Regards
Sundara rajan.A

calculate Time Between Two Frames using FrameRate

Calculate the time between two frames using the frame rate:
---------------------------------------------------
1. fDistanceBetweenTwoFrames = 1000 / FrameRate (in milliseconds)
2. fDistanceBetweenTwoFrames = 33.33 ms for 30 FPS
3. fDistanceBetweenTwoFrames = 40 ms for 25 FPS
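
In DirectShow timestamps, the same duration is expressed in 100-nanosecond units; a small sketch (UNITS is the 10,000,000 constant from the base-class headers, and FrameNumber is an assumed counter):

REFERENCE_TIME rtFrameDuration = UNITS / 30;            // 333,333 ticks for 30 FPS
REFERENCE_TIME rtStart = FrameNumber * rtFrameDuration;
REFERENCE_TIME rtStop  = rtStart + rtFrameDuration;
pSample->SetTime(&rtStart, &rtStop);                    // stamp the media sample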

Thursday, January 03, 2008

Capture video and audio from devices and send it to RTP Stream

Capture video and audio from devices and send it over RTP:

1. I installed Cygwin on Windows XP.
2. I copied in the ffmpeg .exe.
3. I connected the webcam and microphone to my PC.
4. I ran this command in the Cygwin shell:

$ ./ffmpeg -v 100 -f vfwcap -s "640x480" -i /dev/video0 -vcodec mpeg4 -f rtp rtp://127.0.0.1:8090 -f audio_device -i /dev/dsp -acodec mp2 -f rtp rtp://127.0.0.1:9090 2>&1 | tee ffmpeg_log_win32.txt


./ is the standard way to run .exe files in the Cygwin shell.


We can switch to another drive using the following commands:

cd D:
cd C:

We have to run ffmpeg from the directory in which it is installed.


I opened the .sdp file in the QuickTime player.
After buffering, the QuickTime player shows the data.
SDP stands for Session Description Protocol.
The .SDP file content is as follows:



v=0
o=- 0 0 IN IPV4 127.0.0.1
t=0 0
s=No Name
a=tool:libavformat
c=IN IP4 127.0.0.1
m=video 8090 RTP/AVP 96
a=rtpmap:96 MP4V-ES/90000
a=fmtp:96 profile-level-id=1
m=audio 9090 RTP/AVP 14
a=rtpmap:14 MPA/90000

I used the DS Network receiver to receive data from the specified port (8090),
and using the Dump filter I dumped the data into a file.
ffmpeg transmits the data in Transport Stream format;
our MPEG4 encoder and decoders work with Program Stream.
