Channel: Intel® Software - Media

Capture FPS drops when Windows Media Player is running

Hi,

I developed an application for capturing the desktop screen by referring to the sample_decode example. What I am observing is the following:

1. With the HW decoding + system memory option, my capture rate is around 250 FPS. But if I start playing a video file (checked with an .mp4 file containing 1280x720 AVC video) in Windows Media Player while capturing is ON, the capture rate drops to only 15-20 FPS.

2. With the HW decoding + D3D11 memory option, my capture rate is around 90-100 FPS. And if I start playing a video file in Windows Media Player while capturing is ON, the capture rate drops to only 10-12 FPS.

Note: this capture rate drop does not occur if I play the same file in VLC.

So my question is:

1. Why does the capture rate drop while Windows Media Player is running? Is the capture decoder using a resource that Windows Media Player also uses? How can I avoid this frame drop?

2. Why is capture slower with video memory but faster with system memory? I assumed the HW decoding + D3D11 memory combination would give the best performance.
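
For illustration, here is a minimal sketch of what the two measured cases boil down to inside sample_decode: the same screen-capture plug-in, differing only in the decoder's output IOPattern. The plug-in UID is the one passed with -p in the command below; loading it via MFXVideoUSER_Load, and the MFX_CODEC_CAPTURE codec ID from the plug-in's mfxsc.h header, are assumptions about how the capture pipeline is set up.

    #include "mfxvideo.h"
    #include "mfxplugin.h"
    #include "mfxsc.h"   // screen-capture plug-in header (defines MFX_CODEC_CAPTURE)

    mfxSession OpenCaptureSession(bool useVideoMemory)
    {
        mfxVersion ver = { {0, 1} };
        mfxSession session = nullptr;
        MFXInit(MFX_IMPL_HARDWARE_ANY, &ver, &session);

        // -p "22d62c07e672408fbb4cc20ed7a053e4" from the sample_decode command line
        const mfxPluginUID capUid = {{ 0x22, 0xd6, 0x2c, 0x07, 0xe6, 0x72, 0x40, 0x8f,
                                       0xbb, 0x4c, 0xc2, 0x0e, 0xd7, 0xa0, 0x53, 0xe4 }};
        MFXVideoUSER_Load(session, &capUid, 1);

        mfxVideoParam par = {};
        par.mfx.CodecId = MFX_CODEC_CAPTURE;                                  // screen-capture "decoder"
        par.IOPattern   = useVideoMemory ? MFX_IOPATTERN_OUT_VIDEO_MEMORY    // the D3D11 case
                                         : MFX_IOPATTERN_OUT_SYSTEM_MEMORY;  // the system-memory case
        // ... DECODE init, surface allocation and the capture loop follow as in sample_decode
        (void)par;
        return session;
    }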

 

To reproduce this issue, please run sample_decode.exe with the following options in both cases (with and without Windows Media Player running):

sample_decode.exe capture -w 1920 -h 1080 -p "22d62c07e672408fbb4cc20ed7a053e4" -r

You can then observe the frame drop.

 

My environment details:

IMSDK version: 6.0.0.349

Sample decode version: 6.0.0.49

OS: Windows 8.1

Desktop resolution: 1920x1080

System analyzer log is attached.

Attachment: sys_analysis.txt (1.42 KB)

Hardware accelerated encoding

When I use Quick Sync to encode to an H.264 file, I need DTS and PTS values. With hardware encoding, the PTS is correct but the DTS is always 0 after encoding.

My configuration for encoding is shown below: 

m_encVideoParams.mfx.CodecId                  = MFX_CODEC_AVC;
m_encVideoParams.mfx.CodecProfile             = MFX_PROFILE_AVC_HIGH;
m_encVideoParams.mfx.TargetUsage              = MFX_TARGETUSAGE_BEST_QUALITY;
m_encVideoParams.mfx.RateControlMethod        = MFX_RATECONTROL_VBR;
m_encVideoParams.mfx.TargetKbps               = 2000;
m_encVideoParams.mfx.MaxKbps                  = 4000;
m_encVideoParams.mfx.BufferSizeInKB           = 0;

m_encVideoParams.mfx.GopPicSize               = 52;
m_encVideoParams.mfx.GopRefDist               = 3;
m_encVideoParams.mfx.GopOptFlag               = MFX_GOP_STRICT;
m_encVideoParams.mfx.IdrInterval              = 2;
m_encVideoParams.mfx.EncodedOrder             = 0;

m_encVideoParams.mfx.FrameInfo.FourCC         = MFX_FOURCC_NV12;
m_encVideoParams.mfx.FrameInfo.ChromaFormat   = MFX_CHROMAFORMAT_YUV420;
m_encVideoParams.mfx.FrameInfo.BitDepthLuma   = 8;
m_encVideoParams.mfx.FrameInfo.BitDepthChroma = 8;
m_encVideoParams.mfx.FrameInfo.PicStruct      = MFX_PICSTRUCT_PROGRESSIVE;
m_encVideoParams.mfx.FrameInfo.Width          = MSDK_ALIGN16(h264OptParams->width);
m_encVideoParams.mfx.FrameInfo.Height         = MSDK_ALIGN16(h264OptParams->height);
m_encVideoParams.mfx.FrameInfo.CropX          = 0;
m_encVideoParams.mfx.FrameInfo.CropY          = 0;
m_encVideoParams.mfx.FrameInfo.CropW          = h264OptParams->width;
m_encVideoParams.mfx.FrameInfo.CropH          = h264OptParams->height;
m_encVideoParams.mfx.FrameInfo.FrameRateExtN  = 25;
m_encVideoParams.mfx.FrameInfo.FrameRateExtD  = 1;

m_encVideoParams.AsyncDepth                   = 4;
m_encVideoParams.IOPattern                    = MFX_IOPATTERN_IN_SYSTEM_MEMORY;
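
For illustration, a minimal sketch (surrounding pipeline, bitstream sizing and error handling assumed) of how I understand the timestamps flow: the encoder derives mfxBitstream::DecodeTimeStamp from the TimeStamp set on each input surface, and the field only exists from API 1.6 onward, so if the input surfaces carry no timestamps the DTS stays at 0.

    #include "mfxvideo++.h"

    void EncodeOne(MFXVideoENCODE& enc, MFXVideoSession& session,
                   mfxFrameSurface1* surf, mfxBitstream& bs, mfxU64 pts90k)
    {
        surf->Data.TimeStamp = pts90k;                 // feed a PTS in 90 kHz units
        mfxSyncPoint sync = nullptr;
        mfxStatus sts = enc.EncodeFrameAsync(nullptr, surf, &bs, &sync);
        if (sts >= MFX_ERR_NONE && sync) {
            session.SyncOperation(sync, 60000);        // wait for the encoded frame
            mfxU64 pts = bs.TimeStamp;                 // presentation time stamp
            mfxI64 dts = bs.DecodeTimeStamp;           // decode time stamp (API 1.6+)
            (void)pts; (void)dts;
        }
    }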

CoCreateInstance for Quick Sync Video H.264 Encoder returns E_FAIL

Dear reader,

I'm trying to encode raw NV12 buffers using Windows Media Foundation Transforms. I iterate over the available hardware encoders using MFTEnumEx and extract the CLSID from the activation object that supports hardware encoding. Then, when I try to create the transform object using CoCreateInstance(), the function returns E_FAIL. What am I doing wrong? I've pasted all my test code here with the steps to reproduce this issue.

I've seen a couple of other posts in this forum with a somewhat related issue dating back to 2013. As I understand it, Quick Sync is exposed through the Media Foundation Transform API, or am I wrong?
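
For comparison, a minimal sketch (NV12-in/H.264-out filtering and an already-done CoInitializeEx/MFStartup are assumptions, error handling omitted) that activates the hardware encoder through the IMFActivate returned by MFTEnumEx instead of calling CoCreateInstance on the extracted CLSID; my understanding is that hardware MFTs are meant to be created this way:

    #include <mfapi.h>
    #include <mfidl.h>
    #include <mftransform.h>
    #pragma comment(lib, "mfplat.lib")
    #pragma comment(lib, "mfuuid.lib")

    IMFTransform* CreateHwH264Encoder()
    {
        MFT_REGISTER_TYPE_INFO inType  = { MFMediaType_Video, MFVideoFormat_NV12 };
        MFT_REGISTER_TYPE_INFO outType = { MFMediaType_Video, MFVideoFormat_H264 };

        IMFActivate** acts = nullptr;
        UINT32 count = 0;
        MFTEnumEx(MFT_CATEGORY_VIDEO_ENCODER,
                  MFT_ENUM_FLAG_HARDWARE | MFT_ENUM_FLAG_SORTANDFILTER,
                  &inType, &outType, &acts, &count);

        IMFTransform* mft = nullptr;
        if (count > 0)
            acts[0]->ActivateObject(IID_PPV_ARGS(&mft));   // instead of CoCreateInstance

        for (UINT32 i = 0; i < count; ++i)
            acts[i]->Release();
        CoTaskMemFree(acts);
        return mft;
    }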

Best,
d

Correct way of managing calls to MFXVideoENCODE_EncodeFrameAsync()

Hi

When I call MFXVideoENCODE_EncodeFrameAsync(), what is the recommended way of managing the sync point and bitstream variables passed into this function? I'm asking because I've seen different ways people implement this. One way is to reuse a single bitstream and sync point for every call to MFXVideoENCODE_EncodeFrameAsync(). The other is to manage a pool of bitstreams and sync points.

I've heard that the second solution is the preferred one, though it seems like a design flaw to create multiple bitstream objects, as one bitstream should be enough to handle the output of the encoder.
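
For reference, a minimal sketch of the pool variant (sizes and names are placeholders): one task (bitstream + sync point) per in-flight frame, sized by AsyncDepth. The point of the pool is that EncodeFrameAsync can be issued for the next frame while earlier frames are still being encoded; with a single bitstream and sync point you would have to call SyncOperation after every submit and lose that pipelining.

    #include <vector>
    #include "mfxvideo.h"

    struct EncTask {
        mfxBitstream bs{};
        std::vector<mfxU8> buffer;      // backing storage for bs.Data
        mfxSyncPoint sync = nullptr;    // non-null while the frame is in flight
    };

    std::vector<EncTask> MakeTaskPool(mfxU16 asyncDepth, mfxU32 bufferSizeInKB)
    {
        std::vector<EncTask> pool(asyncDepth);
        for (auto& t : pool) {
            t.buffer.resize(bufferSizeInKB * 1000);
            t.bs.Data = t.buffer.data();
            t.bs.MaxLength = (mfxU32)t.buffer.size();
        }
        return pool;
    }

Each frame grabs a free task (sync == nullptr, or one whose previous output has already been synced and written out), passes its bitstream and sync point to EncodeFrameAsync, and only calls SyncOperation when no free task is left or at drain time.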

Can anyone share some advice?

Thanks

Need To Know Release Date Of Next Graphics Driver Version

Hi everyone.

I need to know when the next graphics driver for the Ivy Bridge processors will be released.

My project cannot be released because of a handle-leak bug in the latest version of the graphics driver, and solving this problem is very urgent for the project.

I posted a thread about this issue at the following link:

https://software.intel.com/en-us/forums/intel-media-sdk/topic/595304

So would someone care to tell me the approximate release date of the next driver version?

Thanks!

Is HW accel possible for JPEG encoder?

Hello

I am trying the Media SDK sample_encode, and during debugging I see that the pPipeline->Init(&Params) call returns MFX_WRN_PARTIAL_ACCELERATION.

In fact, I see with Intel GPA that the GPU is not used at all. So it seems JPEG encoding cannot be HW accelerated?

Or am I doing something wrong?
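
A hedged way to check this from code, assuming the session was opened with a MFX_IMPL_HARDWARE implementation: MFXVideoENCODE_Query for MFX_CODEC_JPEG returns MFX_WRN_PARTIAL_ACCELERATION when the encode would fall back to software, which matches what sample_encode is reporting. Whether a given graphics generation accelerates JPEG/MJPEG encode at all is a hardware question, so the query result is the authoritative answer for your machine.

    #include "mfxvideo.h"

    bool IsJpegEncodeHwAccelerated(mfxSession session, mfxU16 width, mfxU16 height)
    {
        mfxVideoParam in = {}, out = {};
        in.mfx.CodecId                 = MFX_CODEC_JPEG;
        in.mfx.FrameInfo.FourCC        = MFX_FOURCC_NV12;
        in.mfx.FrameInfo.ChromaFormat  = MFX_CHROMAFORMAT_YUV420;
        in.mfx.FrameInfo.PicStruct     = MFX_PICSTRUCT_PROGRESSIVE;
        in.mfx.FrameInfo.Width         = (mfxU16)((width  + 15) & ~15);   // 16-aligned
        in.mfx.FrameInfo.Height        = (mfxU16)((height + 15) & ~15);
        in.mfx.FrameInfo.CropW         = width;
        in.mfx.FrameInfo.CropH         = height;
        in.mfx.FrameInfo.FrameRateExtN = 30;
        in.mfx.FrameInfo.FrameRateExtD = 1;
        in.IOPattern                   = MFX_IOPATTERN_IN_VIDEO_MEMORY;

        mfxStatus sts = MFXVideoENCODE_Query(session, &in, &out);
        return sts == MFX_ERR_NONE;    // MFX_WRN_PARTIAL_ACCELERATION => SW fallback
    }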

 

Is it possible to notify h264 HW encoder about frame update region?

Hello,

I am trying to use the HW encoder for streaming H.264. I know I should push raw frames to the encoder and it will do everything for me.

Beyond that, I expect the encoder compares each incoming frame with several previous frames to compute the P-frame differences, and I expect this frame comparison is a fairly heavy task.

What if I know the exact update region? I have a list of regions that changed relative to the previous frames. In theory this info could help the encoder do the frame comparison faster and save some CPU/GPU power.

But the question is: how do I give the update region to the encoder? Is it even possible?
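
For what it's worth, newer API levels (around 1.16) expose a per-frame hint buffer for exactly this, mfxExtDirtyRect, attached to the mfxEncodeCtrl passed to EncodeFrameAsync; whether your API level and driver honor it is something to verify with Query, and the encoder is free to treat it purely as a hint. A sketch, with the caller owning the buffers for the lifetime of the frame:

    #include "mfxvideo.h"

    void AttachDirtyRect(mfxEncodeCtrl& ctrl, mfxExtDirtyRect& dirty, mfxExtBuffer*& extBuf,
                         mfxU32 left, mfxU32 top, mfxU32 right, mfxU32 bottom)
    {
        dirty = {};
        dirty.Header.BufferId = MFX_EXTBUFF_DIRTY_RECTANGLES;
        dirty.Header.BufferSz = sizeof(dirty);
        dirty.NumRect         = 1;          // one changed region for this frame
        dirty.Rect[0].Left    = left;
        dirty.Rect[0].Top     = top;
        dirty.Rect[0].Right   = right;
        dirty.Rect[0].Bottom  = bottom;

        extBuf           = &dirty.Header;
        ctrl.ExtParam    = &extBuf;
        ctrl.NumExtParam = 1;
        // pass &ctrl as the first argument of EncodeFrameAsync for this frame;
        // keep 'dirty' and 'extBuf' alive until the frame's SyncOperation returns
    }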

Upscaling PAL to FullHD

Hello There!

I'm trying to upscale a PAL (720x576@50i) signal to progressive Full HD (1920x1080) at the same frame rate, while cutting equal parts from the top and the bottom of the source picture to obtain the correct picture aspect ratio.

My parameters look right to me, but I get MFX_ERR_UNDEFINED_BEHAVIOR from the MFXVideoSession::SyncOperation call.

What am I doing wrong? Thanks in advance!

"Intel API Implementation: MFX_IMPL_SOFTWARE Version: 1.13"

"-------VPP Parameter-------"
"AsyncDepth: 0"
"NumExtParam: 0"
"IOPattern: MFX_IOPATTERN_IN_SYSTEM_MEMORY,MFX_IOPATTERN_OUT_SYSTEM_MEMORY"

"-------VPP INPUT-------"
"BitDepthLuma: 8"
"BitDepthChroma: 8"
"Shift: 0"
"FourCC: YUY2"
"Width: 720"
"Height: 576"
"CropX: 0"
"CropY: 85"
"CropW: 720"
"CropH: 405"
"FrameRateExtN: 25000"
"FrameRateExtD: 1000"
"AspectRatioW: 0"
"AspectRatioH: 0"
"PicStruct: MFX_PICSTRUCT_FIELD_TFF"
"ChromaFormat: MFX_CHROMAFORMAT_YUV422"

"-------VPP OUTPUT-------"
"BitDepthLuma: 8"
"BitDepthChroma: 8"
"Shift: 0"
"FourCC: RGB4"
"Width: 1920"
"Height: 1088"
"CropX: 0"
"CropY: 0"
"CropW: 1920"
"CropH: 1080"
"FrameRateExtN: 25000"
"FrameRateExtD: 1000"
"AspectRatioW: 0"
"AspectRatioH: 0"
"PicStruct: MFX_PICSTRUCT_PROGRESSIVE"
"ChromaFormat: MFX_CHROMAFORMAT_YUV444"

Kind regards,
Max


Does media sdk currently support 32-bit Linux?

Hi everybody!

I would like to know if there is a version of the Intel Media SDK that can currently be used on 32-bit Linux platforms.

Thanks!

Latest private build still records from microphone

API 1.17 documentation

Since the newest Intel drivers on the Microsoft Update page include API 1.17, where can I find documentation about it? What's new compared to 1.16? Does it include Skylake support?

H.264 Profiles in gstreamer-vaapi

Hello,

I am doing some experimentation with gstreamer-vaapi on an Intel i3-4370 CPU. I am running libva 1.6.1 and the Intel libva driver 1.6.0 on Ubuntu 15.04.

My issue is that I cannot seem to get it to produce anything except Constrained Baseline, despite asking for other profiles.

For example:

gst-launch-1.0 filesrc location=/ramdisk/bbb_sunflower_1080p_30fps_normal.mp4 ! qtdemux ! vaapidecode ! vaapiencode_h264 ! video/x-h264,profile=high ! qtmux ! filesink location=/ramdisk/tmp.mov

appears to work fine, but something like:

avprobe -show_streams /ramdisk/tmp.mov

will indicate

profile=Constrained Baseline

Indeed, the output file sizes don't change much, if at all, between the profiles.

The only way I can get avprobe to report a different profile is to change the encoder element to:

vaapiencode_h264 dct8x8=1

My question is, is this correct behavior? Do we need dct8x8 to do anything higher than Constrained Baseline? If so, why? Is this a fundamental detail of H.264, or is this an implementation detail?

I am trying to compare some tradeoffs between HW and SW implementations, so I am trying to be very careful which knobs I turn, in order to be able to properly compare.

Thanks in advance for any feedback!
-Bill Katsak

 

 

H264 Encoding fails with D3D11 Memory

Hi,

I am running sample_encode.exe to encode a YUV file into H.264, but the application always fails when run with the -d3d11 option.

It works fine with D3D9 video memory and with system memory, but fails with D3D11 memory (the failure is in hardware device init). Is this a known issue or limitation in the IMSDK H.264 encoder implementation?

My environment details are as following:

IMSDK version: 6.0.0.349
Sample decode version: 6.0.0.49
OS: Windows 8.1
Graphics Card: Intel(R) HD Graphics 4400 with driver version 10.18.14.4170

A System Analyzer log and a screenshot of the command and error are attached.

Attachments: Encoding.jpg (125.55 KB), sys_analysis.txt (1.42 KB)
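
For comparison, this is roughly the D3D11 setup the -d3d11 path needs before the encoder is initialized, under the assumption that the device init (rather than the encoder itself) is what is failing: a D3D11 device, multithread protection enabled on its context, and the device handed to the session. On a multi-GPU system the Intel adapter has to be selected explicitly via IDXGIFactory::EnumAdapters; error handling is trimmed.

    #include <d3d10.h>      // ID3D10Multithread
    #include <d3d11.h>
    #include "mfxvideo++.h"
    #pragma comment(lib, "d3d11.lib")

    mfxStatus AttachD3D11Device(MFXVideoSession& session)
    {
        ID3D11Device*        device = nullptr;
        ID3D11DeviceContext* ctx    = nullptr;
        const D3D_FEATURE_LEVEL levels[] = { D3D_FEATURE_LEVEL_11_0 };

        HRESULT hr = D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                                       levels, 1, D3D11_SDK_VERSION, &device, nullptr, &ctx);
        if (FAILED(hr))
            return MFX_ERR_DEVICE_FAILED;

        ID3D10Multithread* mt = nullptr;
        if (SUCCEEDED(ctx->QueryInterface(&mt))) {      // Media SDK needs this protection
            mt->SetMultithreadProtected(TRUE);
            mt->Release();
        }
        return session.SetHandle(MFX_HANDLE_D3D11_DEVICE, (mfxHDL)device);
    }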

IOPattern changed by MFXVideoVPP_Query

Greetings,

I added a VPP to resize the video frames, after the decoder and before the encoder. The pipeline worked well when I set the IOPattern to SYSTEM_MEMORY or VIDEO_MEMORY. However, when I set the IOPattern to OPAQUE_MEMORY, the call to MFXVideoVPP_QueryIOSurf() always returned error -15 (MFX_ERR_INVALID_VIDEO_PARAM).

So I called MFXVideoVPP_Query() before MFXVideoVPP_QueryIOSurf() to verify the params, and realized the params had been changed by MFXVideoVPP_Query().

Below is my code snippet:

    // decide which memory to use
    switch(m_memType) {
        case SYSTEM_MEMORY:
            m_mfxVppParams.IOPattern = MFX_IOPATTERN_IN_SYSTEM_MEMORY | MFX_IOPATTERN_OUT_SYSTEM_MEMORY;
            break;
        case VIDEO_MEMORY:
            m_mfxVppParams.IOPattern = MFX_IOPATTERN_IN_VIDEO_MEMORY | MFX_IOPATTERN_OUT_VIDEO_MEMORY;
            break;
        case OPAQUE_MEMORY:
            m_mfxVppParams.IOPattern = MFX_IOPATTERN_IN_OPAQUE_MEMORY | MFX_IOPATTERN_OUT_OPAQUE_MEMORY;
            break;
        default:
            msdk_printf(MSDK_STRING("  channel %u: invalid memory type !\n"), m_nChanID);
            break;
    }
    msdk_printf(MSDK_STRING("  channel %u: vpp before query: IOPattern == 0x%0x;\n"), m_nChanID, m_mfxVppParams.IOPattern);

    sts = m_pmfxVPP->Query(&m_mfxVppParams, &m_mfxVppParams);
    MSDK_CHECK_RESULT(sts, MFX_ERR_NONE, sts);

    msdk_printf(MSDK_STRING("  channel %u: vpp after query: IOPattern == 0x%0x;\n"), m_nChanID, m_mfxVppParams.IOPattern);

 

And here is the stdout output:

  channel 0: instantiating the vpp ...
  channel 0: setting up the vpp params ...
  channel 0: vpp before query: IOPattern == 0x44;
  channel 0: vpp after query: IOPattern == 0x2;

The IOPattern change was also confirmed by the MFX tracer. Before the call to m_pmfxVPP->Query(), the IOPattern values in the VPP params looked like this:

    in.IOPattern=UNKNOWN
    out.IOPattern=UNKNOWN

I don't understand why they were reported as "Unknown", as I had set them to be 
(MFX_IOPATTERN_IN_OPAQUE_MEMORY | MFX_IOPATTERN_OUT_OPAQUE_MEMORY).

However, after the call they looked like the following, which is obviously wrong:
    in.IOPattern=MFX_IOPATTERN_IN_SYSTEM_MEMORY
    out.IOPattern=MFX_IOPATTERN_IN_SYSTEM_MEMORY

Any ideas/suggestions as to why this happened ?
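
For what it's worth, with opaque surfaces the usual flow is: QueryIOSurf (with the opaque IOPattern set) to learn the surface counts, then Init with an mfxExtOpaqueSurfaceAlloc buffer attached that shares those surfaces with the neighbouring decode/encode components. A hedged sketch of that attachment follows; whether the missing buffer, rather than the IOPattern itself, is what your runtime is complaining about is an assumption.

    #include "mfxvideo.h"

    void AttachOpaqueAlloc(mfxVideoParam& vppPar,
                           mfxExtOpaqueSurfaceAlloc& opq, mfxExtBuffer*& extBuf,
                           mfxFrameSurface1** inSurfs,  mfxU16 numIn,
                           mfxFrameSurface1** outSurfs, mfxU16 numOut)
    {
        opq = {};
        opq.Header.BufferId = MFX_EXTBUFF_OPAQUE_SURFACE_ALLOCATION;
        opq.Header.BufferSz = sizeof(opq);
        opq.In.Surfaces     = inSurfs;
        opq.In.NumSurface   = numIn;
        opq.In.Type         = (mfxU16)(MFX_MEMTYPE_OPAQUE_FRAME | MFX_MEMTYPE_FROM_VPPIN);
        opq.Out.Surfaces    = outSurfs;
        opq.Out.NumSurface  = numOut;
        opq.Out.Type        = (mfxU16)(MFX_MEMTYPE_OPAQUE_FRAME | MFX_MEMTYPE_FROM_VPPOUT);

        extBuf              = &opq.Header;
        vppPar.ExtParam     = &extBuf;
        vppPar.NumExtParam  = 1;
    }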

Thanks,
Robby

Video scaling with "aspect ratio correction" (Letterbox or Pillarbox) results in incorrect color shift

Hello there!

I ran into a problem while scaling a video frame with Intel's video processing library. This post should help anyone who experiences the same issue.

While MFXVideoVPP::Init(mfxVideoParam *par) returns MFX_ERR_NONE, there is a color shift (yellow and blue are broken) in the output video data. This runtime error occurs because of a misconfiguration of the crop parameters.

In this example an NTSC frame is scaled to a 640x480 frame in "scale to height" aspect ratio correction mode.
(To achieve this, I have to cut equal parts from the left and right sides of the source frame.)

This configuration results in the described color-shift error:

"Intel API Implementation: MFX_IMPL_SOFTWARE Version: 1.13"    
"-------VPP Parameter-------"    
"AsyncDepth: 0"    
"NumExtParam: 0"    
"IOPattern: MFX_IOPATTERN_IN_SYSTEM_MEMORY,MFX_IOPATTERN_OUT_SYSTEM_MEMORY"    
"-------VPP INPUT-------"    
"BitDepthLuma: 8"    
"BitDepthChroma: 8"    
"Shift: 0"    
"FourCC: YUY2"    
"Width: 720"    
"Height: 512"    
"CropX: 35"    
"CropY: 0"    
"CropW: 650"    
"CropH: 486"    
"FrameRateExtN: 30000"    
"FrameRateExtD: 1001"    
"AspectRatioW: 0"    
"AspectRatioH: 0"    
"PicStruct: MFX_PICSTRUCT_FIELD_BFF"    
"ChromaFormat: MFX_CHROMAFORMAT_YUV422"    
""    
"-------VPP OUTPUT-------"    
"BitDepthLuma: 8"    
"BitDepthChroma: 8"    
"Shift: 0"    
"FourCC: RGB4"    
"Width: 640"    
"Height: 480"    
"CropX: 0"    
"CropY: 0"    
"CropW: 640"    
"CropH: 480"    
"FrameRateExtN: 30000"    
"FrameRateExtD: 1001"    
"AspectRatioW: 0"    
"AspectRatioH: 0"    
"PicStruct: MFX_PICSTRUCT_PROGRESSIVE"    
"ChromaFormat: MFX_CHROMAFORMAT_YUV444"    

This is the correct configuration:

"Intel API Implementation: MFX_IMPL_SOFTWARE Version: 1.13"    
"-------VPP Parameter-------"    
"AsyncDepth: 0"    
"NumExtParam: 0"    
"IOPattern: MFX_IOPATTERN_IN_SYSTEM_MEMORY,MFX_IOPATTERN_OUT_SYSTEM_MEMORY"    
"-------VPP INPUT-------"    
"BitDepthLuma: 8"    
"BitDepthChroma: 8"    
"Shift: 0"    
"FourCC: YUY2"    
"Width: 720"    
"Height: 512"    
"CropX: 36"    
"CropY: 0"    
"CropW: 648"    
"CropH: 486"    
"FrameRateExtN: 30000"    
"FrameRateExtD: 1001"    
"AspectRatioW: 0"    
"AspectRatioH: 0"    
"PicStruct: MFX_PICSTRUCT_FIELD_BFF"    
"ChromaFormat: MFX_CHROMAFORMAT_YUV422"    
""    
"-------VPP OUTPUT-------"    
"BitDepthLuma: 8"    
"BitDepthChroma: 8"    
"Shift: 0"    
"FourCC: RGB4"    
"Width: 640"    
"Height: 480"    
"CropX: 0"    
"CropY: 0"    
"CropW: 640"    
"CropH: 480"    
"FrameRateExtN: 30000"    
"FrameRateExtD: 1001"    
"AspectRatioW: 0"    
"AspectRatioH: 0"    
"PicStruct: MFX_PICSTRUCT_PROGRESSIVE"    
"ChromaFormat: MFX_CHROMAFORMAT_YUV444"    

 

Kind regards,
Max

 


mfxEncodeCtrl payload lifetime

Hi,

I'm having some issues inserting SEI payloads while encoding.  Specifically, I'm confused regarding the lifetime of the mfxEncodeCtrl structure and associated payload.  If I allocate the mfxEncodeCtrl and mfxPayload data on the heap and never free either, everything works fine.  If I free just the mfxPayload->Data, it also seems to work.  But if I also free the mfxEncodeCtrl structure or the Payload pointers, then I get a segfault in the codec library.

I thought I saw something in the documentation that said the payload data needs to remain valid for the duration of the async job, which is why I'm waiting until after the SyncOperation completes to free it.  Is that also the case with the rest of the mfxEncodeCtrl struct?

So, I'd like to clarify the following:

- Does the encoder copy the mfxPayload data to some internal storage during EncodeFrameAsync?

- Is it ok to use a new mfxEncodeCtrl structure on each call to EncodeFrameAsync, or should I be reusing the same one and just changing the payload each time?  If this is ok, what is the expected lifetime of the mfxEncodeCtrl object?

I should also mention that I'm pipelining the encoding with an AsyncDepth of 4.
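
For reference, a per-task ownership sketch that matches the conservative reading of the documentation: every in-flight frame keeps its own mfxEncodeCtrl, mfxPayload and SEI buffer alive until its SyncOperation returns, and only then is the task recycled. Whether the runtime copies the payload earlier during EncodeFrameAsync is exactly the open question, so the sketch assumes it does not.

    #include <vector>
    #include "mfxvideo.h"

    struct SeiTask {
        mfxEncodeCtrl ctrl{};
        mfxPayload payload{};
        mfxPayload* payloads[1] = { nullptr };
        std::vector<mfxU8> seiData;            // owns the SEI bytes
        mfxSyncPoint sync = nullptr;           // non-null while the frame is in flight

        void Set(const mfxU8* sei, size_t bytes, mfxU16 type) {
            seiData.assign(sei, sei + bytes);
            payload.Type    = type;
            payload.Data    = seiData.data();
            payload.NumBit  = (mfxU32)(bytes * 8);
            payload.BufSize = (mfxU16)bytes;
            payloads[0]     = &payload;
            ctrl.Payload    = payloads;
            ctrl.NumPayload = 1;
        }
    };

A pool of AsyncDepth such tasks (4 here) rotates with the encoder: pass &task.ctrl to EncodeFrameAsync, and recycle the task only after its SyncOperation has returned.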

Thanks,

will

dpdk & media sdk

Hello,

Is there any sample which shows capturing video packets from a device (Ethernet, PCIe, another port)?

Does it use DPDK, or V4L?

What is the recommendation ?

Regards,

Ran

AdaptiveI support

Hi,

Is AdaptiveI or AdaptiveB supported on a particular set of hardware or a particular API level?  The manual makes it sound like API 1.8 is the requirement.  I'm using 1.10 and I don't see any I-frames being inserted on scene changes.

I found this: https://software.intel.com/en-us/forums/intel-media-sdk/topic/530296 which seems to indicate that it isn't supported - is this true for all hardware/API levels, or just a specific generation?

I'm using API 1.10 on Linux with Ivy Bridge hardware.  If AdaptiveI is not supported, do you have any recommendations for how to improve performance on scene boundaries?  Should LookAhead work on this hardware?
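
For reference, a sketch of how I would request it, with the caveat that on the hardware encoder AdaptiveI/AdaptiveB are, as far as I understand, generally only honored together with the LookAhead rate-control modes, and Ivy Bridge support is exactly what's in question, so run Query and see what survives:

    #include "mfxvideo.h"

    void RequestAdaptiveI(mfxVideoParam& par, mfxExtCodingOption2& co2, mfxExtBuffer*& extBuf)
    {
        co2 = {};
        co2.Header.BufferId = MFX_EXTBUFF_CODING_OPTION2;
        co2.Header.BufferSz = sizeof(co2);
        co2.AdaptiveI       = MFX_CODINGOPTION_ON;   // allow I frames on scene changes
        co2.LookAheadDepth  = 40;                    // only meaningful with LA BRC

        par.mfx.RateControlMethod = MFX_RATECONTROL_LA;   // LookAhead bitrate control
        extBuf          = &co2.Header;
        par.ExtParam    = &extBuf;
        par.NumExtParam = 1;
        // After MFXVideoENCODE_Query(session, &par, &par), check whether AdaptiveI
        // and RateControlMethod come back unchanged; if they are zeroed or replaced,
        // the platform/driver does not support them.
    }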

Thanks,

Will

Is the graphics driver installed?

Hello,

I am using Media SDK Server with an i7 on CentOS.

Is there a way to validate that the graphics driver is installed and running?

I have not seen graphics driver installation mentioned anywhere in the Media SDK getting started guide.
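
A hedged check from code: if MFXInit comes up with a hardware implementation, the kernel graphics driver and the user-space media stack are installed and reachable; if it fails or only MFX_IMPL_SOFTWARE works, the driver/VA stack from the server package is missing or not loaded. From the shell, vainfo is another quick sanity check.

    #include <cstdio>
    #include "mfxvideo.h"

    int main()
    {
        mfxVersion ver = { {0, 1} };             // request API 1.0 or later
        mfxSession session = nullptr;
        mfxStatus sts = MFXInit(MFX_IMPL_HARDWARE_ANY, &ver, &session);
        if (sts != MFX_ERR_NONE) {
            printf("no hardware session: %d\n", sts);
            return 1;
        }
        mfxIMPL impl = 0;
        MFXQueryIMPL(session, &impl);
        MFXQueryVersion(session, &ver);
        printf("impl = 0x%x, API = %d.%d\n", impl, ver.Major, ver.Minor);
        MFXClose(session);
        return 0;
    }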

Thank you,

Ran

 

 

comparison of media sdk server editions
