Channel: Intel® Software - Media

Decode 4k video Return MFX_ERR_DEVICE_FAILED


hi all,

I am using the Intel Media SDK (2016) to decode a 4K H.264 video file in a loop. After a while, however, DecodeFrameAsync returns MFX_ERR_DEVICE_FAILED. The running time before the failure is unpredictable: it may be ten minutes or ten hours.
What could be the reason for this?

 CPU:    Intel(R) Core(TM) i5-6600 CPU
 OS:     Microsoft Windows 10
 Arch:   64 bits

Thanks in advance!
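
For what it's worth, a common mitigation while the root cause is investigated is to detect MFX_ERR_DEVICE_FAILED and re-initialize the decoder rather than abort. Below is a minimal, self-contained sketch of that control flow only; the Status enum and the two callbacks are stand-ins, not the real SDK API (the real codes live in mfxdefs.h).

```cpp
#include <cassert>

// Stand-ins for the Media SDK's mfxStatus codes; the recovery pattern is the point.
enum Status { ERR_NONE = 0, ERR_DEVICE_FAILED = -17 };

// On a device failure, tear the decoder down and re-initialize it, then retry
// the frame. decodeOnce() stands in for DecodeFrameAsync + SyncOperation,
// resetDecoder() for Close + Init with the saved mfxVideoParam.
template <typename DecodeFn, typename ResetFn>
int runDecodeLoop(DecodeFn decodeOnce, ResetFn resetDecoder, int frames) {
    int recoveries = 0;
    for (int i = 0; i < frames; ++i) {
        if (decodeOnce(i) == ERR_DEVICE_FAILED) {
            resetDecoder();  // bring the pipeline back up
            ++recoveries;
            --i;             // retry the same frame after recovery
        }
    }
    return recoveries;
}
```

This keeps a long-running cycle alive across transient GPU resets, though it does not explain why the device fails in the first place.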

 


some questions about Intel VPP


Hi, when I use VPP to composite pictures, I have two input streams to mix, but the output picture contains only the first stream and is missing the second. The parameters are set like this:

stream=1.yuv
width=640
height=480
cropx=0
cropy=0
cropw=640
croph=480
dstx=0
dsty=0
dstw=640
dsth=480
framerate=25
fourcc=nv12

stream=2.yuv
width=640
height=480
cropx=0
cropy=0
cropw=640
croph=480
dstx=0
dsty=0
dstw=240
dsth=160
framerate=25
fourcc=nv12

How can this happen? My computer is a ThinkPad E431 with an Intel(R) HD Graphics 4000 card.

This works well on another computer, a ThinkPad E440 with an Intel(R) HD Graphics 4600 card.

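For reference, each composed stream maps to an entry in mfxExtVPPComposite's InputStream array, and one quick sanity check is that every destination rectangle lies inside the output surface. A self-contained sketch of that check (the Layer struct is a simplified stand-in for mfxVPPCompInputStream, not the real type):

```cpp
#include <cassert>

// Stand-in for the destination-rectangle fields of mfxVPPCompInputStream
// (part of mfxExtVPPComposite in mfxstructures.h).
struct Layer { int dstx, dsty, dstw, dsth; };

// A composed layer must land entirely inside the output surface;
// rectangles that spill outside may be clipped or dropped by the driver.
bool fitsInOutput(const Layer& l, int outW, int outH) {
    return l.dstx >= 0 && l.dsty >= 0 &&
           l.dstx + l.dstw <= outW && l.dsty + l.dsth <= outH;
}
```

Both rectangles in the parameter set above (640x480 at 0,0 and 240x160 at 0,0) pass this check, which suggests the missing second stream is a driver or platform difference rather than a geometry error.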
GPU Hang


Hi,

I'm experiencing fairly frequent GPU hangs when running ~20 processes, each doing an MPEG-2 decode, a rescale, then H.264 and JPEG encodes. This is on Windows 8.1 with the latest drivers. Is anyone else experiencing this sort of thing? If so, is there anything that can be done to reduce (and ideally stop) it from occurring? If it's unusual, does anyone have any pointers as to what we might be doing to cause it? If there are any logging/tracing tools I can run to provide further details, let me know.

Cheers,

Steve

h264_qsv encoder can't keep up with live capture?


We are using ffmpeg to capture live video from a Blackmagic DeckLink Mini Recorder and encode to multiple h264 outputs (streamed over RTMP and written to disk).  We're running CentOS 7.2 with Media Server Studio 2017 on a Core i7-6700K.

We were very impressed with h264_qsv performance when we did multiple transcodes of a 1920x1080 MP4 file: we were able to do 4 independent transcodes to 7 different destinations (3 streams, 4 files), and it all ran at 3x real time. The CPU impact was minimal (maybe 10-15%).

Based on this performance, we thought that it would be trivial to do real-time encoding of HD video.

Unfortunately, we have found that the 1 GB frame buffer will fill up, causing frames to drop.  Sometimes it takes an hour before this behavior happens, and sometimes it only takes a few minutes.  But it seems to always happen.

Here is a sample invocation:

https://gist.github.com/jpriebe/9acdc3beb50547449bd47f4fb46de214

(normally we would "tee" the low/medium/high encodings to an RTMP streaming server.  I have removed that to make the example a little simpler)

We have tried everything to streamline/speed up the QuickSync encoding:

- use veryfast preset
- fix the min/max bitrates to use CBR
- force the GPU clock to 1150MHz

No matter what settings we use, the buffer will overrun eventually.  The strange thing is that the GPU load (as measured by the metrics_monitor utility) is less than 20% and clock speed is confirmed to be at 1150MHz. So it doesn't seem like the GPU is overloaded.

We tried moving some of the encodings to the CPU using libx264, to no avail. If even one of the four encodings is done on the GPU, we will get an overrun.

By comparison, if we run all 4 encodings on the CPU, we can run indefinitely with no buffer overrun.

It really seems like there is some sort of bottleneck in the h264_qsv encoding.  I don't know if it's a problem in ffmpeg, with QuickSync itself, or just in my understanding of how it all works.

Any insight would be much appreciated.  Thanks!


JPEG Encode Hardware Acceleration


I have developed an application around Media SDK 1.19 that encodes JPEG images. On my development machine everything works well; however, it has an i7-5820K CPU (which has no integrated graphics), so the implementation falls back to software.

On a second machine (Atom E3825) the application also works and automatically falls back to the software implementation.

I went out and purchased a Gen 6 Skylake (Pentium G4400) CPU and motherboard so that I could test JPEG hardware acceleration. Here is my problem:

When I run my application, MFXVideoENCODE_Query returns MFX_ERR_UNSUPPORTED.

If I change the IMPL from AUTO to SOFTWARE, the program runs with no problems.

If I run the "sample_encode.exe" program provided by Intel like this: "sample_encode.exe jpeg -w 1920 -h 1080 -f 30 -q 90 -i test.bin -o test.jpg", it runs correctly but gives the warning "partial acceleration".

Here is my code for the video param:

mfxVideoParam m_mfxEncParams;
memset(&m_mfxEncParams, 0, sizeof(mfxVideoParam));
m_mfxEncParams.mfx.CodecId = MFX_CODEC_JPEG;

m_mfxEncParams.mfx.FrameInfo.FrameRateExtN = 120;
m_mfxEncParams.mfx.FrameInfo.FrameRateExtD = 1;

m_mfxEncParams.IOPattern = MFX_IOPATTERN_IN_SYSTEM_MEMORY;

// frame info parameters
m_mfxEncParams.mfx.FrameInfo.FourCC = MFX_FOURCC_YV12;
m_mfxEncParams.mfx.FrameInfo.ChromaFormat = MFX_CHROMAFORMAT_YUV420;
m_mfxEncParams.mfx.FrameInfo.PicStruct = MFX_PICSTRUCT_PROGRESSIVE;

m_mfxEncParams.mfx.FrameInfo.Width = MSDK_ALIGN16(width);
m_mfxEncParams.mfx.FrameInfo.Height = (MFX_PICSTRUCT_PROGRESSIVE == m_mfxEncParams.mfx.FrameInfo.PicStruct) ?
    MSDK_ALIGN16(height) : MSDK_ALIGN32(height);

m_mfxEncParams.mfx.FrameInfo.CropX = 0;
m_mfxEncParams.mfx.FrameInfo.CropY = 0;
m_mfxEncParams.mfx.FrameInfo.CropW = width;
m_mfxEncParams.mfx.FrameInfo.CropH = height;

m_mfxEncParams.mfx.Interleaved = 1;
m_mfxEncParams.mfx.Quality = 75;
m_mfxEncParams.mfx.RestartInterval = 0;

m_mfxEncParams.AsyncDepth = 1;

m_mfxEncParams.mfx.BufferSizeInKB = width * height * 32 / 1024;

mfxStatus status = MFXVideoENCODE_Query(*session, &m_mfxEncParams, &m_mfxEncParams);

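(As an aside, the MSDK_ALIGN16/MSDK_ALIGN32 helpers used above simply round a dimension up to the next multiple of 16 or 32; reimplemented here for illustration.)

```cpp
#include <cassert>

// Equivalents of the sample code's MSDK_ALIGN16/MSDK_ALIGN32 macros:
// round up to the next multiple of 16 (progressive) or 32 (interlaced height).
constexpr unsigned align16(unsigned v) { return (v + 15u) & ~15u; }
constexpr unsigned align32(unsigned v) { return (v + 31u) & ~31u; }
```

So a 1920x1080 progressive surface is allocated as 1920x1088, while CropW/CropH carry the real picture size.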
And here are the results from mediasdk_system_analyzer_64.exe:

Intel(R) Media Server Studio 2016 R2 - System Analyzer (64-bit)

The following versions of Media SDK API are supported by platform/driver
[opportunistic detection of MSDK API > 1.19]:

    Version    Target    Supported    Dec    Enc
    1.0    HW    Yes        X    X
    1.0    SW    Yes        X    X
    1.1    HW    Yes        X    X
    1.1    SW    Yes        X    X
    1.2    HW    Yes        X    X
    1.2    SW    Yes        X    X
    1.3    HW    Yes        X    X
    1.3    SW    Yes        X    X
    1.4    HW    Yes        X    X
    1.4    SW    Yes        X    X
    1.5    HW    Yes        X    X
    1.5    SW    Yes        X    X
    1.6    HW    Yes        X    X
    1.6    SW    Yes        X    X
    1.7    HW    Yes        X    X
    1.7    SW    Yes        X    X
    1.8    HW    Yes        X    X
    1.8    SW    Yes        X    X
    1.9    HW    Yes        X    X
    1.9    SW    Yes        X    X
    1.10    HW    Yes        X    X
    1.10    SW    Yes        X    X
    1.11    HW    Yes        X    X
    1.11    SW    Yes        X    X
    1.12    HW    Yes        X    X
    1.12    SW    Yes        X    X
    1.13    HW    Yes        X    X
    1.13    SW    Yes        X    X
    1.14    HW    Yes        X    X
    1.14    SW    Yes        X    X
    1.15    HW    Yes        X    X
    1.15    SW    Yes        X    X
    1.16    HW    Yes        X    X
    1.16    SW    Yes        X    X
    1.17    HW    Yes        X    X
    1.17    SW    Yes        X    X
    1.18    HW    Yes        X    X
    1.18    SW    Yes        X    X
    1.19    HW    Yes        X    X
    1.19    SW    Yes        X    X

Graphics Devices:
    Name                                         Version             State
    Intel(R) HD Graphics 510                     20.19.15.4501       Active

System info:
    CPU:    Intel(R) Pentium(R) CPU G4400 @ 3.30GHz
    OS:    Microsoft Windows 7 Professional 
    Arch:    64-bit

Installed Media SDK packages (be patient...processing takes some time):

Installed Media SDK DirectShow filters:

Installed Intel Media Foundation Transforms:
    Intel(R) Hardware VC-1 Decoder MFT : {059A5BAE-5D7A-4C5E-8F7A-BFD57D1D6AAA}
    Intel(R) Hardware H.264 Decoder MFT : {45E5CE07-5AC7-4509-94E9-62DB27CF8F96}
    Intel(R) Hardware MPEG-2 Decoder MFT : {CD5BA7FF-9071-40E9-A462-8DC5152B1776}
    Intel(R) Quick Sync Video H.264 Encoder MFT : {4BE8D3C0-0515-4A37-AD55-E4BAE19AF471}
    Intel(R) Hardware Preprocessing MFT : {EE69B504-1CBF-4EA6-8137-BB10F806B014}

Thank you everyone for your help.

Media SDK JPEG Memory Leak


Hi,

I'm running a fairly simple transcode which shows a memory leak when encoding JPEGs with driver version 20.19.15.4539 on Windows 10 with an i7-6770HQ CPU / Iris Pro 580 GPU.

I have run two separate processing pipelines.  First is an mpeg2 decode to an h264 encode; this runs just fine with no noticeable leaks.  Changing just the initialisation code for the encoder to perform a JPEG encode instead of h264, I see a significant leak of the order of ~450MB over the course of around 12 minutes.  This is doing 25 frames per second, so assuming the leak is per-frame then it is around 25KB per frame.

Again, literally the only difference between the two runs was changing the parameters to the MFXVideoENCODE_Init call, so I am pretty confident that this is a leak in the driver rather than in my surrounding code.

One additional point, the encoder is reporting MFX_WRN_PARTIAL_ACCELERATION when initialising; I was expecting JPEG encodes on Skylake to be hardware.  Don't know if that's related to the leak or a subject for another post.

Any ideas on how to diagnose the leak further?

Cheers,

Steve

Intel® Quick Sync Video H.264 Encoder, program blocked and produces no data


Hi

I'm having problems running the Intel® Quick Sync Video H.264 Encoder MFT: when the hardware MFT sends the METransformHaveOutput (EventType = 602) event, the sample from ProcessOutput is NULL, and the program blocks in this state. Any idea what could be wrong? Or are there any examples of using a hardware MFT?

Here are the outputs

Hardware URL Attribute:AA243E5D-2F73-48c7-97F7-F6FA17651651.
Hardware Friendly Name:Intel® Quick Sync Video H.264 Encoder MFT.
Frame 1
EventType = 601
timeStamp = 0
duration = 49816800
EventType = 601
EventType = 602

Parts of my code follow. The program blocks at WaitForSingleObject(mEventDrainComplete, INFINITE); the MFT has already moved to METransformHaveOutput, yet the sample from ProcessOutput is NULL.

#include "hw_enc_common.h"
#include "MpegEncoder_i.h"

#pragma unmanaged

MpegEncoder::MpegEncoder()
{
    mEventHaveInput = CreateEvent(NULL, FALSE, FALSE, NULL);
    mEventNeedInput = CreateEvent(NULL, FALSE, FALSE, NULL);
    mEventDrainComplete = CreateEvent(NULL, FALSE, FALSE, NULL);
}

MpegEncoder::~MpegEncoder()
{
    CloseHandle(mEventHaveInput);
    CloseHandle(mEventNeedInput);
    CloseHandle(mEventDrainComplete);
}

bool MpegEncoder::Initialize()
{
    bool res;

    //try
    {
        HRESULT hr;
        TESTHR(hr = MFStartup(MF_VERSION));
        res = true;
    }
    //catch(com_error ex)
    //{
    //    res = false;
    //}

    return res;
}

bool MpegEncoder::Create(int width, int height, int rate)
{
    bool res;
    HRESULT hr;

    error = false;

    mRate = rate;
    mWidth = width;
    mHeight = height;
    mpWriter = NULL;

    //CoInitializeEx(NULL, COINIT_APARTMENTTHREADED | COINIT_DISABLE_OLE1DDE);
    //MFStartup(MF_VERSION);

    uint64_t frame_rate = uint64_t(rate) << 32 | 1;
    uint64_t frame_size = uint64_t(width) << 32 | height;

    //mTrace = new CObTrace(L"H264Encoder_#.log");
    //mTrace->SetLogByDate();

    //try
    {
        IMFMediaTypePtr pType1;
        MFCreateMediaType(&pType1);
        TESTHR(hr = pType1->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Video));
        TESTHR(hr = pType1->SetGUID(MF_MT_SUBTYPE, MFVideoFormat_I420));
        TESTHR(hr = pType1->SetUINT32(MF_MT_INTERLACE_MODE, 2/*MFVideoInterlaceMode::MFVideoInterlace_Progressive*/));
        TESTHR(hr = pType1->SetUINT64(MF_MT_FRAME_RATE, frame_rate));
        TESTHR(hr = pType1->SetUINT64(MF_MT_FRAME_SIZE, frame_size));

        IMFMediaTypePtr    pType2;
        MFCreateMediaType(&pType2);
        TESTHR(hr = pType2->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Video));
        TESTHR(hr = pType2->SetGUID(MF_MT_SUBTYPE, MFVideoFormat_NV12));
        TESTHR(hr = pType2->SetUINT32(MF_MT_INTERLACE_MODE, 2/*MFVideoInterlaceMode::MFVideoInterlace_Progressive*/)); 
        TESTHR(hr = pType2->SetUINT64(MF_MT_FRAME_RATE, frame_rate));
        TESTHR(hr = pType2->SetUINT64(MF_MT_FRAME_SIZE, frame_size));

        IMFMediaTypePtr    pType3;
        MFCreateMediaType(&pType3);
        TESTHR(hr = pType3->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Video));
        TESTHR(hr = pType3->SetGUID(MF_MT_SUBTYPE, MFVideoFormat_H264));
        TESTHR(hr = pType3->SetUINT32(MF_MT_AVG_BITRATE, 100000000));
        TESTHR(hr = pType3->SetUINT64(MF_MT_FRAME_RATE, frame_rate));
        TESTHR(hr = pType3->SetUINT64(MF_MT_FRAME_SIZE, frame_size));
        TESTHR(hr = pType3->SetUINT32(MF_MT_INTERLACE_MODE, 2/*MFVideoInterlaceMode::MFVideoInterlace_Progressive*/));
        TESTHR(hr = pType3->SetUINT32(MF_MT_MPEG2_PROFILE, 66/*eAVEncH264VProfile::eAVEncH264VProfile_Main*/));
        TESTHR(hr = pType3->SetUINT32(MF_MT_ALL_SAMPLES_INDEPENDENT, TRUE));

        ///////////////////////////////////color  convert///////////////////////////////////
        IUnknown *spH264EncoderUnk = NULL, *spColorConvertUnk = NULL;
        TESTHR(MFTRegisterLocalByCLSID(
            __uuidof(CColorConvertDMO),
            MFT_CATEGORY_VIDEO_PROCESSOR,
            L"",
            MFT_ENUM_FLAG_SYNCMFT,
            0,
            NULL,
            0,
            NULL
            ));

        // Create Color Convert
        TESTHR(CoCreateInstance(CLSID_CColorConvertDMO, NULL, CLSCTX_INPROC_SERVER,
            IID_IUnknown, (void**)&spColorConvertUnk));

        TESTHR(spColorConvertUnk->QueryInterface(IID_PPV_ARGS(&mpColorConverter)));
        //TESTHR(hr = mpColorConverter.CreateInstance(CLSID_CColorConvertDMO, mpColorConverter));
        TESTHR(hr = mpColorConverter->SetOutputType(0, pType2, 0));
        TESTHR(hr = mpColorConverter->SetInputType(0, pType1, 0));

        ///////////////////////////////////////////h264 encoder//////////////////////////////////////
        uint32_t count = 0;
        IMFActivate **ppActivate = NULL;
        TESTHR( MFTEnumEx(
            MFT_CATEGORY_VIDEO_ENCODER,
            MFT_ENUM_FLAG_ASYNCMFT | MFT_ENUM_FLAG_HARDWARE    | MFT_ENUM_FLAG_LOCALMFT,
            NULL,       // Input type
            NULL,       // Output type
            &ppActivate,
            &count
            ));
        if (SUCCEEDED(hr) && count == 0)
        {
            hr = MF_E_TOPO_CODEC_NOT_FOUND;
            printf("H264 Encoder MF_E_TOPO_CODEC_NOT_FOUND\n");
        }
        // Create the first encoder in the list.
        if (SUCCEEDED(hr))
        {
            LPWSTR hardwareName = NULL;            
            TESTHR(ppActivate[0]->GetAllocatedString(MFT_ENUM_HARDWARE_URL_Attribute, &hardwareName,  NULL));
            wprintf(L"Hardware URL Attribute:%s.\n", hardwareName);
            TESTHR(ppActivate[0]->GetAllocatedString(MFT_FRIENDLY_NAME_Attribute, &hardwareName, NULL));
            wprintf(L"Hardware Friendly Name:%s.\n", hardwareName);
            TESTHR(ppActivate[0]->ActivateObject(IID_PPV_ARGS(&mpH264Encoder)));        
        }
        for (UINT32 i = 0; i < count; i++)
        {
            ppActivate[i]->Release();
        }
        CoTaskMemFree(ppActivate);

        //TESTHR(hr = mpH264Encoder.CreateInstance(L"{4BE8D3C0-0515-4A37-AD55-E4BAE19AF471}", mpH264Encoder));

        IMFAttributesPtr pAttributes;
        TESTHR(hr = mpH264Encoder->GetAttributes(&pAttributes));
        TESTHR(hr = pAttributes->SetUINT32(MF_TRANSFORM_ASYNC_UNLOCK, TRUE));
        TESTHR(hr = pAttributes->SetUINT32(MF_READWRITE_ENABLE_HARDWARE_TRANSFORMS, TRUE));
        TESTHR(hr = mpH264Encoder->SetOutputType(0, pType3, 0));
        TESTHR(hr = mpH264Encoder->SetInputType(0, pType2, 0));

        TESTHR(hr = mpH264Encoder->QueryInterface(IID_IMFMediaEventGenerator, (void**)&mpH264EncoderEventGenerator));
        TESTHR(hr = mpH264EncoderEventGenerator->BeginGetEvent(this, NULL));

        mpWriter = NULL;

        res = true;
    }
    //catch (com_error ex)
    //{
    //    mTrace->Trace(1, L"Exception in %s(%d): 0x%08X", ex.GetFilename(), ex.GetLinenum(), ex.Error());
    //    res = false;
    //}

    return res;
}

bool MpegEncoder::Open(const wchar_t *filename)
{
    bool res;
    HRESULT hr;

    //mTrace->Trace(1, L"New file: %s", filename);

    //try
    {
        mFilename = filename;
        mpWriter = NULL;

        ResetEvent(mEventHaveInput);
        ResetEvent(mEventNeedInput);
        ResetEvent(mEventDrainComplete);
        
        TESTHR(hr = mpH264Encoder->ProcessMessage(MFT_MESSAGE_NOTIFY_START_OF_STREAM, 0));

        res = true;
    }
    //catch (com_error ex)
    //{
    //    mTrace->Trace(1, L"Exception in %s(%d): 0x%08X", ex.GetFilename(), ex.GetLinenum(), ex.Error());
    //    res = false;
    //}

    return res;
}

bool MpegEncoder::Encode(uint8_t * image, uint32_t size, uint64_t timestamp, uint64_t duration)
{
    bool res;
    //try
    {
        this->image = image;
        this->size = size;
        this->timestamp = timestamp;
        this->duration = duration;

        //mTrace->Trace(1, L"New image");

        SetEvent(mEventHaveInput);

        WaitForSingleObject(mEventNeedInput, INFINITE);

        this->image= nullptr;

        res = !error;        
    }
    //catch (com_error ex)
    //{
    //    mTrace->Trace(1, L"Exception in %s(%d): 0x%08X", ex.GetFilename(), ex.GetLinenum(), ex.Error());
    //    res = false;
    //}

    return res;
}

bool MpegEncoder::Close()
{
    bool res;
    //try
    {
        HRESULT hr;

        //mTrace->Trace(1, L"End file");

        // Retrieve the last samples that might be in the encoder
        TESTHR(hr = mpH264Encoder->ProcessMessage(MFT_MESSAGE_NOTIFY_END_OF_STREAM, 0));
        TESTHR(hr = mpH264Encoder->ProcessMessage(MFT_MESSAGE_COMMAND_DRAIN, 0));

        SetEvent(mEventHaveInput);

        printf("gg\n");

        WaitForSingleObject(mEventDrainComplete, INFINITE);

        printf("hh\n");

        TESTHR(hr = mpWriter->Finalize());

        mpWriter = NULL;
        newFile = true;

        res = true;
    }
    //catch (com_error ex)
    //{
    //    mTrace->Trace(1, L"Exception in %s(%d): 0x%08X", ex.GetFilename(), ex.GetLinenum(), ex.Error());
    //    res = false;
    //}

    return res;
}

STDMETHODIMP MpegEncoder::Invoke(IMFAsyncResult* pAsyncResult)
{
    HRESULT hr;
    HRESULT hStatus;
    IMFMediaEventPtr pEvent;
    MediaEventType meType;

    //try
    {
        TESTHR(hr = mpH264EncoderEventGenerator->EndGetEvent(pAsyncResult, &pEvent));
        TESTHR(hr = pEvent->GetType(&meType));
        TESTHR(hr = pEvent->GetStatus(&hStatus));
        printf("EventType = %d\n", meType);
        if (hStatus == S_OK)
        {
            if (meType == METransformNeedInput)
            {
                HRESULT hr;
                BYTE *pbBuffer;
                DWORD status;
                IMFSamplePtr pYUVSample, pNV12Sample;
                IMFMediaBufferPtr pYUVBuffer, pNV12Buffer;
                MFT_OUTPUT_STREAM_INFO streaminfo;

                //mTrace->Trace(1, L"New METransformNeedInput event");

                WaitForSingleObject(mEventHaveInput, INFINITE);

                if (image != NULL)
                {
                    TESTHR(hr = MFCreateMemoryBuffer(size, &pYUVBuffer));
                    TESTHR(hr = pYUVBuffer->Lock(&pbBuffer, NULL, NULL));
                    TESTHR(hr = MFCopyImage(pbBuffer, mWidth , image, mWidth , mWidth, mHeight*3/2));
                    TESTHR(hr = pYUVBuffer->SetCurrentLength(size));
                    TESTHR(hr = pYUVBuffer->Unlock());
                    TESTHR(hr = MFCreateSample(&pYUVSample));
                    TESTHR(hr = pYUVSample->AddBuffer(pYUVBuffer));
                    TESTHR(hr = pYUVSample->SetSampleDuration(duration));
                    TESTHR(hr = pYUVSample->SetSampleTime(timestamp));
                    TESTHR(hr = mpColorConverter->ProcessInput(0, pYUVSample, 0));

                    MFT_OUTPUT_DATA_BUFFER nv12OutputDataBuffer;
                    ZeroMemory(&nv12OutputDataBuffer, sizeof(nv12OutputDataBuffer));
                    TESTHR(hr = mpColorConverter->GetOutputStreamInfo(0, &streaminfo));
                    TESTHR(hr = MFCreateSample(&pNV12Sample));
                    TESTHR(hr = MFCreateMemoryBuffer(streaminfo.cbSize, &pNV12Buffer));
                    TESTHR(hr = pNV12Sample->AddBuffer(pNV12Buffer));
                    nv12OutputDataBuffer.pSample = pNV12Sample;
                    TESTHR(hr = mpColorConverter->ProcessOutput(0, 1, &nv12OutputDataBuffer, &status));

                    if (newFile)
                    {
                        //mTrace->Trace(1, L"Set MFSampleExtension_Discontinuity");
                        TESTHR(hr = nv12OutputDataBuffer.pSample->SetUINT32(MFSampleExtension_Discontinuity, TRUE));
                        newFile = false;
                    }
                    TESTHR(hr = mpH264Encoder->ProcessInput(0, nv12OutputDataBuffer.pSample, 0));
                }
                SetEvent(mEventNeedInput);
            }
            else if (meType == METransformHaveOutput)
            {
                DWORD status;
                MFT_OUTPUT_DATA_BUFFER h264OutputDataBuffer;
                MFT_OUTPUT_STREAM_INFO streaminfo;
                TESTHR(hr = mpH264Encoder->GetOutputStreamInfo(0, &streaminfo));

                //mTrace->Trace(1, L"New METransformHaveOutput event");

                ZeroMemory(&h264OutputDataBuffer, sizeof(h264OutputDataBuffer));
                hr = mpH264Encoder->ProcessOutput(0, 1, &h264OutputDataBuffer, &status);
                if (hr == MF_E_TRANSFORM_STREAM_CHANGE)
                {
                    //mTrace->Trace(1, L"New MF_E_TRANSFORM_STREAM_CHANGE event");
                    if (h264OutputDataBuffer.dwStatus & MFT_OUTPUT_DATA_BUFFER_FORMAT_CHANGE)
                    {
                        //mTrace->Trace(1, L"New MFT_OUTPUT_DATA_BUFFER_FORMAT_CHANGE event");
                        // The encoder reports that the format has changed and needs to be reconfigured.
                        // Read back the output type it currently advertises and set it again.
                        IMFMediaTypePtr pType;
                        TESTHR(hr = mpH264Encoder->GetOutputAvailableType(0, 0, &pType));
                        TESTHR(hr = mpH264Encoder->SetOutputType(0, pType, 0));
                    }
                }
                else if (hr == S_OK)
                {
                    if (mpWriter == NULL)
                    {
                        IMFMediaTypePtr pType;
                        TESTHR(hr = mpH264Encoder->GetOutputAvailableType(0, 0, &pType));

                        IMFByteStreamPtr pByteStream;
                        IMFMediaSinkPtr pMediaSink;
                        TESTHR(hr = MFCreateFile(MF_ACCESSMODE_READWRITE, MF_OPENMODE_DELETE_IF_EXIST, MF_FILEFLAGS_NONE, mFilename.c_str(), &pByteStream));
                        TESTHR(hr = MFCreateMPEG4MediaSink(pByteStream, pType, NULL, &pMediaSink));
                        TESTHR(hr = MFCreateSinkWriterFromMediaSink(pMediaSink, NULL, &mpWriter));
                        TESTHR(hr = mpWriter->BeginWriting());
                    }
                    TESTHR(hr = mpWriter->WriteSample(0, h264OutputDataBuffer.pSample));
                    h264OutputDataBuffer.pSample->Release();
                    if (h264OutputDataBuffer.pEvents != NULL)
                        h264OutputDataBuffer.pEvents->Release();
                }
                else
                    TESTHR(hr);
            }
            else if (meType == METransformDrainComplete)
            {
                //mTrace->Trace(1, L"New METransformDrainComplete event");
                TESTHR(hr = mpH264Encoder->ProcessMessage(MFT_MESSAGE_COMMAND_FLUSH, 0));
                SetEvent(mEventDrainComplete);
            }
            else if (meType == MEError)
            {
                PROPVARIANT pValue;
                TESTHR(hr = pEvent->GetValue(&pValue));
                //mTrace->Trace(1, L"MEError, value: %u", pValue.vt);
                error = true;
                SetEvent(mEventNeedInput);
            }
            else 
            {
                PROPVARIANT pValue;
                TESTHR(hr = pEvent->GetValue(&pValue));
                //mTrace->Trace(1, L"Unknown event type: %lu, Value: %u", meType, pValue.vt);
            }
            TESTHR(hr = mpH264EncoderEventGenerator->BeginGetEvent(this, NULL));
        }
    }
    //catch(com_error ex)
    //{
    //    printf("Exception in %s(%d): 0x%08X\n", ex.GetFilename(), ex.GetLinenum(), ex.Error());
    //}

    return S_OK;
}

Here is the information about my computer:

CPU: Intel(R) Core(TM) i7-4790

GPU: Intel HD 4600 with driver version 10.18.15.4279

GPU: AMD Radeon(TM) R5 240 with driver version 20.19.0.32832

MFXVideoSession::Init with MFX_IMPL_VIA_D3D11 returns MFX_ERR_UNSUPPORTED


Hello,

    I'm attempting to use code adapted from the Intel Media SDK tutorials, specifically simple_3_encode, to encode bitmaps to H.264 video. Using the DX11_D3D preprocessor flag, however, causes initialization to fail with error -3 (MFX_ERR_UNSUPPORTED). The issue seems to involve the mfxIMPL having the MFX_IMPL_VIA_D3D11 flag set, but I cannot find a reason why that feature is unsupported. We're attempting to use DirectX 11 so that we can create the ID3D11Device with the D3D_DRIVER_TYPE_NULL driver type, in order to have access to the encoding functions while the program runs in Session 0, which it sometimes must. If there's a preferred alternative for that, it would also be appreciated.

    My computer is running Windows 10, and dxdiag shows that DirectX 12 is installed. I'm not sure if I'm just initializing something incorrectly, but any help would be appreciated. Thank you in advance.
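
One pattern worth trying while debugging this is to probe implementations in order rather than hard-coding MFX_IMPL_VIA_D3D11: attempt D3D11 first and fall back to D3D9 or auto if Init returns MFX_ERR_UNSUPPORTED. A self-contained sketch of the selection loop only; the enum values and the probe callback are stand-ins for MFXVideoSession::Init/Close, not the real dispatcher API.

```cpp
#include <cassert>
#include <vector>

// Stand-ins for the dispatcher's implementation flags (see mfxdefs.h).
enum Impl { VIA_D3D11, VIA_D3D9, VIA_ANY, NONE };

// Try each candidate in order and keep the first one the probe accepts;
// probe() stands in for an MFXVideoSession::Init + Close round trip.
template <typename ProbeFn>
Impl pickImpl(const std::vector<Impl>& candidates, ProbeFn probe) {
    for (Impl impl : candidates) {
        if (probe(impl)) return impl;
    }
    return NONE;
}
```

This at least tells you whether the failure is specific to the D3D11 path or affects every hardware implementation on the machine.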


H264 IDR frequency


Hi,

I am using the 2016 SDK under Linux.

What is the best way to select the IDR frame interval (keyint in ffmpeg/libx264)? By default it is set to 10 s (250 frames at 25 fps); is it possible to change this parameter?

Thanks
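
In case it helps, the keyframe cadence is controlled by mfx.GopPicSize and mfx.IdrInterval in mfxVideoParam; for H.264 the SDK documents IdrInterval as the number of I-frames between IDRs (0 means every I-frame is an IDR). A small sketch of the resulting IDR distance; the struct here is a simplified stand-in, not the real mfxInfoMFX, and the semantics should be verified against your SDK version.

```cpp
#include <cassert>

// Simplified stand-in for the two mfxInfoMFX fields that set keyframe cadence.
struct GopParams { unsigned GopPicSize; unsigned IdrInterval; };

// H.264 semantics per the mfxInfoMFX description: with IdrInterval = 0 every
// I-frame is an IDR, so the IDR distance equals GopPicSize frames.
unsigned framesBetweenIdr(const GopParams& g) {
    return g.GopPicSize * (g.IdrInterval + 1);
}
```

So setting GopPicSize = 25 with IdrInterval = 0 at 25 fps would give an IDR every second instead of the default ~10 s.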

Does MFX_IMPL_SOFTWARE support colour format conversion?


 

Hi,

I've created a decoder with MFX_IMPL_SOFTWARE and have a VPP component that does the following:   

    p.vpp.In.FourCC = MFX_FOURCC_NV12;
    p.vpp.Out.FourCC = MFX_FOURCC_YUY2;

 

That is, it converts the decoder output (which is NV12) to YUY2 before I grab the data and send it off elsewhere. This conversion works fine with MFX_IMPL_HARDWARE on an Intel HD. Software does not appear to perform the conversion (my image is split into green and pink/purple and is about 1/4 size). I replaced my D3D11 allocator with a SysMem one, just in case that was the cause, but it made no difference.

 

Will MFX_IMPL_SOFTWARE perform these conversions or do we need to do them ourselves?  Btw, I'm diligently examining return codes for errors and everything appears to be working fine (no errors).

 

Note: Something odd about the use of libmfxsw32.dll: for debug builds the dispatcher looks for libmfxsw32_d.dll, but this isn't shipped with the SDK and there's no way to build it. Renaming the existing libmfxsw32.dll to libmfxsw32_d.dll does allow you to link, however.
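
If it turns out the software VPP really does not implement this FourCC conversion, a CPU-side NV12 to YUY2 repack is straightforward. Below is a minimal sketch assuming even dimensions, tightly packed planes (pitch = width), and chroma replication rather than interpolation:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Fallback: convert one NV12 frame (planar Y, interleaved UV at half
// vertical/horizontal resolution) to packed YUY2 (Y0 U Y1 V per pixel pair).
std::vector<uint8_t> nv12ToYuy2(const uint8_t* y, const uint8_t* uv,
                                int w, int h) {
    std::vector<uint8_t> out(w * h * 2);
    for (int row = 0; row < h; ++row) {
        const uint8_t* uvRow = uv + (row / 2) * w;  // 4:2:0: one chroma row per two luma rows
        uint8_t* dst = out.data() + row * w * 2;
        for (int x = 0; x < w; x += 2) {
            dst[x * 2 + 0] = y[row * w + x];      // Y0
            dst[x * 2 + 1] = uvRow[x];            // U
            dst[x * 2 + 2] = y[row * w + x + 1];  // Y1
            dst[x * 2 + 3] = uvRow[x + 1];        // V
        }
    }
    return out;
}
```

A SIMD version would be faster, but even this scalar loop is usually cheap next to the decode itself.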

Y16 Encode/Decode support


I'm trying to encode / decode 16 bit grayscale video. I assume the right FOURCC for this is MFX_FOURCC_R16. Is that correct?

Assuming it is, the library throws an UNSUPPORTED error when calling MFXVideoENCODE_Query. The trace log is attached. I notice that the FOURCC in the FrameInfo is printed as UNKNOWN. Does that mean that R16 is not supported? What am I doing wrong?

Thanks,
-J

Attachment: tracer.log (application/octet-stream, 33.49 KB)

Decoding MJPEG Hardware Acceleration


I'm trying to decode MJPEG with the Intel Media SDK. My CPU is a Celeron J1800, and it works on my thin client.
I have two questions:
1. Does MJPEG hardware decoding require a 4th generation Core or later? The Celeron J1800 is Bay Trail; does it belong to the 4th generation?

2. Using DXVA Checker (a tool that reports GPU capabilities), I found "Intel Hardware M-JPEG Decoder MFT" with GUID "91CD2D6E-897B-4FA1-B0D7-51DC88010E0A", but I could not find this GUID on MSDN. Does that mean this MJPEG hardware decoder is provided only through Intel's stack, so I must use the Media SDK to get hardware acceleration even though the Media SDK itself uses D3D9 or D3D11? Is DXVA2 + D3D9 on its own impossible?

Thank you everyone for your help.

Encode/Decode a JPEG, PNG or TIFF


Hello,

Is there a high-level way to encode/decode a JPEG, PNG, or TIFF with the Intel Media SDK? I went through sample_encode and sample_decode on GitHub, but those are far too involved as a starting point for the library.

I would like something I can simply call to get the buffer, with the performance of the Intel library.

I was previously working with IPP 7.0 (the deprecated ippj.h), but I'm trying to port my solution to Intel Compiler 2017 and IPP 9.0.

Can someone help me with simple code just to load images? There is no need to handle video.

Thanks

Intel MediaSDK MFXVideoEncode::Query Formats


Hello,

I'm attempting to write an H.264 encoder based on the simple_3_encode_vmem_async sample that can use the WARP feature of DirectX 11.1. However, there seems to be a format conflict.

For MFXVideoEncode::Query() to succeed, the following flags need to be set:

m_mfxEncParams.mfx.FrameInfo.FourCC = MFX_FOURCC_NV12;
m_mfxEncParams.mfx.FrameInfo.ChromaFormat = MFX_CHROMAFORMAT_YUV420;

As far as I can tell, any other values cause a return value of MFX_ERR_UNSUPPORTED. Later, when we try to call g_pD3D11Device->CreateTexture2D(&desc, NULL, &pTexture2D);, it fails. Using the D3D11_CREATE_DEVICE_DEBUG flag, we can see an output of "ID3D11Device::CreateTexture2D: Invalid format. The format (0x67, NV12) is not supported as a decoder output."

Is there any way to initialize the decoder with other image formats, so that a device of type D3D_DRIVER_TYPE_WARP can create textures?

 

Thank you.

HEVC plugin performance

$
0
0

Hello,

Could you please explain the difference between the hardware-accelerated and GPU-accelerated HEVC plugins? I understand that the Professional version of the SDK has two hardware-accelerated plugins, libmfx_hevce_hw64.so and libmfx_hevce_gacc64.so. Their GUIDs are 6fadc791a0c2eb479ab6dcd5ea9da347 and e5400a06c74d41f5b12d430bbaa23d0b.

They both utilize the GPU; I can see that in the output of the metrics_monitor tool. However, the two plugins have very different performance: libmfx_hevce_hw64 is almost 2.5x faster than libmfx_hevce_gacc64. I am confused because libmfx_hevce_gacc64 ships with the Professional Evaluation version, and I was under the impression that it should perform better than the plugin that comes with the Community Edition of the SDK.

$ /opt/intel/mediasdk/samples/sample_multi_transcode -i::h265 puppies.h265 -b 6000 -o::h265 puppies_out.h265 -hw -u 7 -n 1000 -pe 6fadc791a0c2eb479ab6dcd5ea9da347 

Multi Transcoding Sample Version 7.0.16053497

libva info: VA-API version 0.99.0
libva info: va_getDriverName() returns 0
libva info: User requested driver 'iHD'
libva info: Trying to open /opt/intel/mediasdk/lib64/iHD_drv_video.so
libva info: Found init function __vaDriverInit_0_32
libva info: va_openDriver() returns 0
plugin_loader.h :170 [INFO] Plugin was loaded from GUID: { 0x33, 0xa6, 0x1c, 0x0b, 0x4c, 0x27, 0x45, 0x4c, 0xa8, 0xd8, 0x5d, 0xde, 0x75, 0x7c, 0x6f, 0x8e } (Intel (R) Media SDK HW plugin for HEVC DECODE)
plugin_loader.h :170 [INFO] Plugin was loaded from GUID: { 0x6f, 0xad, 0xc7, 0x91, 0xa0, 0xc2, 0xeb, 0x47, 0x9a, 0xb6, 0xdc, 0xd5, 0xea, 0x9d, 0xa3, 0x47 } (Intel (R) Media SDK HW plugin for HEVC ENCODE)
Pipeline surfaces number (DecPool): 9
MFX HARDWARE Session 0 API ver 1.19 parameters:
Input  video: HEVC
Output video: HEVC

Session 0 was NOT joined with other sessions

Transcoding started
..........
Transcoding finished

Common transcoding time is  25.57 sec
MFX session 0 transcoding PASSED:
Processing time: 25.57 sec
Number of processed frames: 1000

The test PASSED
plugin_loader.h :196 [INFO] MFXBaseUSER_UnLoad(session=0x0x564e053eaf20), sts=0
plugin_loader.h :196 [INFO] MFXBaseUSER_UnLoad(session=0x0x564e053eaf20), sts=0

 

$ /opt/intel/mediasdk/samples/sample_multi_transcode -i::h265 puppies.h265 -b 6000 -o::h265 puppies_out.h265 -hw -u 7 -n 1000 -pe e5400a06c74d41f5b12d430bbaa23d0b
Multi Transcoding Sample Version 7.0.16053497

libva info: VA-API version 0.99.0
libva info: va_getDriverName() returns 0
libva info: User requested driver 'iHD'
libva info: Trying to open /opt/intel/mediasdk/lib64/iHD_drv_video.so
libva info: Found init function __vaDriverInit_0_32
libva info: va_openDriver() returns 0
plugin_loader.h :170 [INFO] Plugin was loaded from GUID: { 0x33, 0xa6, 0x1c, 0x0b, 0x4c, 0x27, 0x45, 0x4c, 0xa8, 0xd8, 0x5d, 0xde, 0x75, 0x7c, 0x6f, 0x8e } (Intel (R) Media SDK HW plugin for HEVC DECODE)
plugin_loader.h :170 [INFO] Plugin was loaded from GUID: { 0xe5, 0x40, 0x0a, 0x06, 0xc7, 0x4d, 0x41, 0xf5, 0xb1, 0x2d, 0x43, 0x0b, 0xba, 0xa2, 0x3d, 0x0b } (Unknown plugin)
Pipeline surfaces number (DecPool): 13
MFX HARDWARE Session 0 API ver 1.19 parameters:
Input  video: HEVC
Output video: HEVC

Session 0 was NOT joined with other sessions

Transcoding started
..........
Transcoding finished

Common transcoding time is  89.21 sec
MFX session 0 transcoding PASSED:
Processing time: 89.21 sec
Number of processed frames: 1000

The test PASSED
plugin_loader.h :196 [INFO] MFXBaseUSER_UnLoad(session=0x0x5599cb479f20), sts=0
plugin_loader.h :196 [INFO] MFXBaseUSER_UnLoad(session=0x0x5599cb479f20), sts=0
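A quick sanity check on the two transcripts above is to convert the reported processing times into effective throughput (frames divided by seconds); the helper below is purely illustrative, not part of the SDK:

```python
def effective_fps(frames: int, seconds: float) -> float:
    """Effective throughput of a transcode run, in frames per second."""
    return frames / seconds

# Numbers taken from the two sample_multi_transcode transcripts above.
hw = effective_fps(1000, 25.57)    # libmfx_hevce_hw64
gacc = effective_fps(1000, 89.21)  # libmfx_hevce_gacc64
print(f"hw:    {hw:.1f} fps")      # ~39.1 fps
print(f"gacc:  {gacc:.1f} fps")    # ~11.2 fps
print(f"ratio: {hw / gacc:.1f}x")  # ~3.5x
```

By this measure the hw plugin delivers about 39 fps against roughly 11 fps for gacc, a gap of close to 3.5x on this clip.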

 

 

Thanks,

Ingvar


HEVC transcode benchmark


Hello,

 

I'd like to ask a question about the benchmarking process. At the IBC show in Amsterdam, Intel demonstrated a live transcode of a 4K AVC source into HEVC at 90 fps. Using the sample_multi_transcode tool with the -u 7 setting, the best result I've got was ~56 fps transcoding 25 Mbps 4K AVC into an HEVC elementary stream. I ran this test on an Intel NUC NUC6I7KYK (i7-6770HQ) using the Linux version of the SDK. Could you please describe your benchmarking process: what samples and hardware did you use?

 

Thanks, 

Ingvar

VPP HW impl blurs text image during color conversion from BGRA to NV12


With reference to my previous post: https://software.intel.com/en-us/forums/intel-media-sdk/topic/657754

The VPP HW implementation blurs text images when I convert BGRA (RGB32) frames into NV12, whereas the VPP SW implementation works fine in the same scenario.

This issue can be easily reproduced with Intel Media SDK Samples 2016 6.0.0.142 and Intel(R)_Media_SDK_2016.0.1 as well. Use the following command lines:

1. sample_vpp -i VppIn.bgr -sw 1920 -sh 1080 -scc rgb4 -dw 1920 -dh 1080 -dcc nv12 -lib hw -o VppOut_HW.nv12

2. sample_vpp -i VppIn.bgr -sw 1920 -sh 1080 -scc rgb4 -dw 1920 -dh 1080 -dcc nv12 -lib sw -o VppOut_SW.nv12

Source frames, comparison results and environment details are attached here.

As per the discussion on my previous post, the root cause was diagnosed in the Intel driver, but there have been no further updates since then. Please provide a status update.
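To make the comparison quantitative rather than purely visual, the two raw outputs can be diffed byte by byte. A minimal sketch (the helper is hypothetical, and the file names are taken from the command lines above):

```python
def mean_abs_diff(a: bytes, b: bytes) -> float:
    """Mean absolute per-byte difference between two equally sized raw buffers."""
    if len(a) != len(b):
        raise ValueError("buffers differ in size")
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

# Usage (assumes the files produced by the sample_vpp runs above exist):
# with open("VppOut_HW.nv12", "rb") as f_hw, open("VppOut_SW.nv12", "rb") as f_sw:
#     print(mean_abs_diff(f_hw.read(), f_sw.read()))
```

For 1920x1080 NV12 the first 1920*1080 bytes of each frame are the luma plane, so slicing the buffers before diffing shows whether the difference is concentrated in luma, where text blur would be visible.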

Attachment: vpp_text_blur.zip (888.31 KB)

libva error after installing Media Server Studio 2017 on Centos 7


I am trying to install Intel MSS 2017 on my CentOS 7.2.1511 system.

When I type the command "lspci -nn -s 0:02.0", this is the output:

00:02.0 VGA compatible controller [0300]: Intel Corporation Sky Lake Integrated Graphics [8086:1916] (rev 07)

meaning I have the appropriate hardware to run it. But after I finished running the installation scripts, I tried to check whether the installation went correctly, so I typed the command vainfo and got this in return:

libva info: VA-API version 0.39.3
libva info: va_getDriverName() returns 0
libva info: User requested driver 'iHD'
libva info: Trying to open /opt/intel/mediasdk/lib64/iHD_drv_video.so
libva info: Found init function __vaDriverInit_0_32
Segmentation fault (core dumped)

This is the output of running gdb vainfo:

Reading symbols from /usr/local/bin/vainfo...done.
(gdb) run
Starting program: /usr/local/bin/vainfo
libva info: VA-API version 0.39.3
libva info: va_getDriverName() returns 0
libva info: User requested driver 'iHD'
libva info: Trying to open /opt/intel/mediasdk/lib64/iHD_drv_video.so
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
libva info: Found init function __vaDriverInit_0_32

Program received signal SIGSEGV, Segmentation fault.
0x00007ffff4618b5f in __vaDriverInit_0_32 ()
   from /opt/intel/mediasdk/lib64/iHD_drv_video.so
Missing separate debuginfos, use: debuginfo-install glibc-2.17-106.el7_2.6.x86_64 libX11-1.6.3-2.el7.x86_64 libXau-1.0.8-2.1.el7.x86_64 libXext-1.3.3-3.el7.x86_64 libXfixes-5.0.1-2.1.el7.x86_64 libgcc-4.8.5-4.el7.x86_64 libpciaccess-0.13.4-2.el7.x86_64 libstdc++-4.8.5-4.el7.x86_64 libxcb-1.11-4.el7.x86_64
(gdb) bt
#0  0x00007ffff4618b5f in __vaDriverInit_0_32 ()
   from /opt/intel/mediasdk/lib64/iHD_drv_video.so
#1  0x00007ffff7bbe0c8 in va_openDriver (dpy=dpy@entry=0x611310,
    driver_name=<optimized out>) at va.c:301
#2  0x00007ffff7bbef8b in vaInitialize (dpy=dpy@entry=0x611310,
    major_version=major_version@entry=0x7fffffffdd30,
    minor_version=minor_version@entry=0x7fffffffdd34) at va.c:563
#3  0x0000000000400fe0 in main (argc=1, argv=0x7fffffffde88) at vainfo.c:120
(gdb)

Any help would be much appreciated. Thanks!

 

 

 


Understanding HEVC HW encoder plugins


Hello

I am trying to use the HEVC hardware-accelerated encoder and I found 2 plugins:
1. MFX_PLUGINID_HEVCE_HW
2. MFX_PLUGINID_HEVCE_GACC

When I tried to simply run sample_encode with the h264 encoder, sample_encode failed with an error (see attached file 1.txt),

but when I tried to use the hevc_gacc plugin, sample_encode ran as expected (attached file 2.txt).

I would like to ask: what is the difference between the plugins, and when should I use each one of them?

Some details:
OS  : Centos 7.2.1511
CPU : Intel(R) Xeon(R) CPU E3-1275 v5 @ 3.60GHz
GPU : 00:02.0 Display controller [0380]: Intel Corporation Device [8086:191d] (rev 06)    (from lspci)

Thank you
Koby

Attachments: 1.txt (768 bytes), 2.txt (1.23 KB)

Quick sync performance


Hi,

I observe a performance drop over time when using the Quick Sync encoding example from the Intel SDK.

After freshly restarting the computer I can compress 4K at 60 fps, but after one hour the same executable produces only 4K at 30 fps.

Any idea why performance drops over time?
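One common cause of a gradual drop like this is the GPU settling to a lower clock under sustained thermal load. If you are on Linux with the i915 driver, the current and maximum GPU frequencies are exposed through sysfs (the card0 path is an assumption and varies per system); a minimal sketch to read them:

```python
def read_mhz(path: str) -> int:
    """Read an i915 sysfs frequency file, which contains a plain integer in MHz."""
    with open(path) as f:
        return int(f.read().strip())

# Usage (paths assume the Linux i915 driver; adjust card0 as needed):
# cur = read_mhz("/sys/class/drm/card0/gt_cur_freq_mhz")
# cap = read_mhz("/sys/class/drm/card0/gt_max_freq_mhz")
# print(f"GPU at {cur}/{cap} MHz")
```

Sampling these values while the encoder runs would show whether the frequency sags as the drop sets in; on Windows a tool such as GPU-Z exposes the same clock readings.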

Thank you



