
h264_qsv doesn't accept pix_fmt AV_PIX_FMT_BGRA


Hello,

My hardware is an i9-9900K running Windows 10 with the Windows 10 SDK.

I have installed Media SDK 2019R1.

I am using the latest FFmpeg (4.2) libraries.

My problem is that when I set pix_fmt = AV_PIX_FMT_NV12, avcodec_open2 returns 0 (success), but if I set pix_fmt = AV_PIX_FMT_BGRA it returns -22 (AVERROR(EINVAL)).

Please help me understand why it doesn't support AV_PIX_FMT_BGRA.

The Intel encode sample code does support the BGRA format.

    AVBufferRef *encode_device = NULL;
    AVCodecContext *Codec_Context_GPU;
    const AVCodec *codec;
    int ret = 0;

    avcodec_register_all();

    AVHWDeviceType Device_Type = AV_HWDEVICE_TYPE_QSV;
    ret = av_hwdevice_ctx_create(&encode_device, Device_Type, NULL, NULL, 0);

    codec = avcodec_find_encoder_by_name("h264_qsv");
    Codec_Context_GPU = avcodec_alloc_context3(codec);

    Codec_Context_GPU->bit_rate = 2000000;
    Codec_Context_GPU->width = 1920;
    Codec_Context_GPU->height = 1080;
    Codec_Context_GPU->framerate.num = 25;
    Codec_Context_GPU->framerate.den = 1;
    Codec_Context_GPU->time_base.num = Codec_Context_GPU->framerate.den;
    Codec_Context_GPU->time_base.den = Codec_Context_GPU->framerate.num;
    Codec_Context_GPU->hw_device_ctx = av_buffer_ref(encode_device);
    Codec_Context_GPU->pix_fmt = AV_PIX_FMT_NV12; // works; AV_PIX_FMT_BGRA fails with -22

    ret = avcodec_open2(Codec_Context_GPU, codec, NULL);
    if (ret < 0)
    {
        LogException(L"Could not open codec: %s\n");
    }
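
For reference, a minimal sketch (assuming the same FFmpeg 4.2 build) that prints the pixel formats the h264_qsv encoder actually advertises; on typical builds this list contains only NV12/P010/QSV-type formats, so BGRA input has to be converted first (e.g. with sws_scale or a VPP stage):

    // Sketch: list the pixel formats h264_qsv reports as supported.
    // AVCodec::pix_fmts is an array terminated by AV_PIX_FMT_NONE.
    #include <stdio.h>
    #include <libavcodec/avcodec.h>
    #include <libavutil/pixdesc.h>

    int main(void)
    {
        avcodec_register_all();
        const AVCodec *codec = avcodec_find_encoder_by_name("h264_qsv");
        if (!codec) {
            fprintf(stderr, "h264_qsv encoder not found\n");
            return 1;
        }
        for (const enum AVPixelFormat *p = codec->pix_fmts;
             p && *p != AV_PIX_FMT_NONE; p++)
            printf("%s\n", av_get_pix_fmt_name(*p));
        return 0;
    }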


[UHD 620 / Surface Pro 6] Slow VPP for YUV->RGB conversion


System: Microsoft Surface Pro 6 (128 GB)

  • CPU: i5-8250U
  • GPU: Intel UHD Graphics 620
    • Driver: 24.20.100.6294
  • RAM: 8 GB @ 1867 MHz
  • OS: Windows 10 1809 (10.0.17763)

Hello,

I am using the Media SDK to decode an H.264 stream, with VPP converting the decoder's NV12 output to RGB. On the Surface Pro 6, decoding 1920x1080 content takes about 2.5-3 ms. Converting NV12 to RGB takes ~8-9 ms, and can take upwards of 20 ms on some frames. It gets a couple of milliseconds slower running on battery (despite selecting the Best Performance power profile). I tried two different Surface Pro 6 tablets and both show this performance.

On a Microsoft Surface Go (Intel UHD 615), it's a lot more reasonable: decoding takes about 4 ms while the color format conversion takes ~2 ms, with very little variance.

Task Manager shows a Video Processing graph for the Surface Pro 6, but it always reads 0% usage; I'm not sure whether that graph is meant to capture the VPP workload. The SDK never returns MFX_WRN_PARTIAL_ACCELERATION.

 

This sounds like a firmware issue from my perspective, but if there is any workaround (or if this is fixed in a newer driver), please let me know! Any advice would be appreciated.
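
For reference, here is roughly how the conversion is timed (a minimal sketch; pVPP, session and the two surfaces are assumed to be already initialized):

    // Sketch: time one NV12 -> RGB4 VPP conversion.
    // pVPP is an initialized MFXVideoVPP, session an initialized MFXVideoSession.
    #include <chrono>

    mfxSyncPoint syncp = nullptr;
    auto t0 = std::chrono::steady_clock::now();
    mfxStatus sts = pVPP->RunFrameVPPAsync(pSurfNV12, pSurfRGB4, nullptr, &syncp);
    if (sts >= MFX_ERR_NONE && syncp)
        sts = session.SyncOperation(syncp, 60000); // wait for the converted frame
    double ms = std::chrono::duration<double, std::milli>(
                    std::chrono::steady_clock::now() - t0).count();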


sample multi_transcode has no display


Hi:

I'm running example mentioned here https://software.intel.com/en-us/articles/how-to-create-video-wall-with-...

I got the following output but no rendering on monitor.

My understanding is that -vpp_comp_only 4 ensures that the composed output is displayed on the monitor.

Aung.

 

Multi Transcoding Sample Version 8.2.25.

Par file is: parfile

WARNING: internal allocators were disabled because of composition+rendering requirement

Some inter-sessions do not use opaque memory (possibly because of -o::raw).
Opaque memory in all inter-sessions is disabled.
libva info: VA-API version 1.4.1
libva info: va_getDriverName() returns 0
libva info: User requested driver 'iHD'
libva info: Trying to open /opt/intel/mediasdk/lib64/iHD_drv_video.so
libva info: Found init function __vaDriverInit_1_4
libva info: va_openDriver() returns 0
plugin_loader.h :172 [INFO] Plugin was loaded from GUID: { 0x33, 0xa6, 0x1c, 0x0b, 0x4c, 0x27, 0x45, 0x4c, 0xa8, 0xd8, 0x5d, 0xde, 0x75, 0x7c, 0x6f, 0x8e } (Intel (R) Media SDK HW plugin for HEVC DECODE)
MFX HARDWARE Session 0 API ver 1.29 parameters:
Input  video: HEVC
Output video: To child session

plugin_loader.h :172 [INFO] Plugin was loaded from GUID: { 0x33, 0xa6, 0x1c, 0x0b, 0x4c, 0x27, 0x45, 0x4c, 0xa8, 0xd8, 0x5d, 0xde, 0x75, 0x7c, 0x6f, 0x8e } (Intel (R) Media SDK HW plugin for HEVC DECODE)
MFX HARDWARE Session 1 API ver 1.29 parameters:
Input  video: HEVC
Output video: To child session

plugin_loader.h :172 [INFO] Plugin was loaded from GUID: { 0x33, 0xa6, 0x1c, 0x0b, 0x4c, 0x27, 0x45, 0x4c, 0xa8, 0xd8, 0x5d, 0xde, 0x75, 0x7c, 0x6f, 0x8e } (Intel (R) Media SDK HW plugin for HEVC DECODE)
MFX HARDWARE Session 2 API ver 1.29 parameters:
Input  video: HEVC
Output video: To child session

plugin_loader.h :172 [INFO] Plugin was loaded from GUID: { 0x33, 0xa6, 0x1c, 0x0b, 0x4c, 0x27, 0x45, 0x4c, 0xa8, 0xd8, 0x5d, 0xde, 0x75, 0x7c, 0x6f, 0x8e } (Intel (R) Media SDK HW plugin for HEVC DECODE)
MFX HARDWARE Session 3 API ver 1.29 parameters:
Input  video: HEVC
Output video: To child session

Pipeline surfaces number (EncPool): 2
MFX HARDWARE Session 4 API ver 1.29 parameters:
Input  video: From parent session
Output video: To child session

Pipeline surfaces number (DecPool): 6
Session 0 was joined with other sessions
Pipeline surfaces number (DecPool): 6
Session 1 was joined with other sessions
Pipeline surfaces number (DecPool): 6
Session 2 was joined with other sessions
Pipeline surfaces number (DecPool): 6
Session 3 was joined with other sessions
Session 4 was joined with other sessions

Transcoding started
........................................................................................................
Transcoding finished

Common transcoding time is 46.0446 sec
-------------------------------------------------------------------------------
*** session 0 PASSED (MFX_ERR_NONE) 44.6974 sec, 2500 frames
-i::h265 /home/movidius/mdk/examples/refApps/AICamera_generic/pc/mbnet_videoStream4kHevc.h265 -vpp_comp_dst_x 0 -vpp_comp_dst_y 0 -vpp_comp_dst_w 960 -vpp_comp_dst_h 540 -join -o::sink

*** session 1 PASSED (MFX_ERR_NONE) 44.6974 sec, 2500 frames
-i::h265 /home/movidius/mdk/examples/refApps/AICamera_generic/pc/mbnet_videoStream4kHevc.h265 -vpp_comp_dst_x 960 -vpp_comp_dst_y 0 -vpp_comp_dst_w 960 -vpp_comp_dst_h 540 -join -o::sink

*** session 2 PASSED (MFX_ERR_NONE) 46.0443 sec, 2655 frames
-i::h265 /home/movidius/mdk/examples/refApps/AICamera_generic/pc/mbnet_videoStream4kHevc_2.h265 -vpp_comp_dst_x 0 -vpp_comp_dst_y 540 -vpp_comp_dst_w 960 -vpp_comp_dst_h 540 -join -o::sink

*** session 3 PASSED (MFX_ERR_NONE) 46.0403 sec, 2655 frames
-i::h265 /home/movidius/mdk/examples/refApps/AICamera_generic/pc/mbnet_videoStream4kHevc_2.h265 -vpp_comp_dst_x 960 -vpp_comp_dst_y 540 -vpp_comp_dst_w 960 -vpp_comp_dst_h 540 -join -o::sink

*** session 4 PASSED (MFX_ERR_NONE) 44.7168 sec, 2501 frames
-vpp_comp_only 4 -join -i::source
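
For comparison, the sink line above requests composition-only output but names no render target. Another thread in this digest (the -dc::rgb4 post) passes -rx11 on the sink session to get an X11 render window, so presumably something similar is needed here; a guessed sink line (the -fps value is a placeholder):

    -vpp_comp_only 4 -rx11 -fps 30 -ext_allocator -join -i::source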


Which version of Intel Media SDK has Intel Xeon CPU E-2278GE support?


Could you please tell me which version of the Intel Media SDK supports the new Xeon E-2278G CPU?

 

-dc::rgb4 in sample_multi_transcode error


I would like my decode pipeline to output RGB, so I set up the parfile as below (ignore the composition; I am currently filling only one tile).

 

-i::h265file.h265  -hw -vpp_comp_dst_x 0 -vpp_comp_dst_y 0 -vpp_comp_dst_w 1920 -vpp_comp_dst_h 1080  -dc::rgb4  -vpp_comp_tile_id 0   -join -o::sink
-vpp_comp_only 1 -vpp_comp_num_tiles 1 -rx11     -fps 10   -ext_allocator   -join -i::source

The error is:

[ERROR], sts=MFX_ERR_UNSUPPORTED(-3), CalculateNumberOfReqFrames, m_pmfxDEC.get failed at /home/movidius/mdk/examples/refApps/AICamera_generic/pc/decode_process/sample_multi_transcode/src/pipeline_transcode.cpp:3387

[ERROR], sts=MFX_ERR_UNSUPPORTED(-3), AllocFrames, CalculateNumberOfReqFrames failed at /home/movidius/mdk/examples/refApps/AICamera_generic/pc/decode_process/sample_multi_transcode/src/pipeline_transcode.cpp:3273

[ERROR], sts=MFX_ERR_UNSUPPORTED(-3), CompleteInit, AllocFrames failed at /home/movidius/mdk/examples/refApps/AICamera_generic/pc/decode_process/sample_multi_transcode/src/pipeline_transcode.cpp:3971

[ERROR], sts=MFX_ERR_UNSUPPORTED(-3), Init, m_pThreadContextArray[i]->pPipeline->CompleteInit failed at /home/movidius/mdk/examples/refApps/AICamera_generic/pc/decode_process/sample_multi_transcode/src/sample_multi_tran

vainfo error


Hi,

When I run vainfo I get this error:

libva info: VA-API version 1.4.1
libva info: va_getDriverName() returns -1
libva info: User requested driver 'iHD'
libva info: Trying to open /opt/intel/mediasdk/lib64/iHD_drv_video.so
libva info: Found init function __vaDriverInit_1_4
DRM_IOCTL_I915_GEM_APERTURE failed: Invalid argument
Assuming 131072kB available aperture size.
May lead to reduced performance or incorrect rendering.
get chip id failed: -1 [22]
param: 4, val: 0
libva error: /opt/intel/mediasdk/lib64/iHD_drv_video.so init failed
libva info: va_openDriver() returns 1
vaInitialize failed with error code 1 (operation failed),exit

 

I can't find the /dev/dri/ directory.

I think the driver for the Intel iGPU is missing or broken.
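
A missing /dev/dri usually means the i915 kernel driver never bound to the iGPU. A few standard checks (assuming a typical Linux setup):

    ls /dev/dri                # should list card0 and renderD128
    lsmod | grep i915          # is the i915 kernel module loaded?
    dmesg | grep -i i915       # did the kernel probe the iGPU at boot?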

When I run the "lscpu" command, I get this:

Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
Address sizes:       39 bits physical, 48 bits virtual
CPU(s):              12
On-line CPU(s) list: 0-11
Thread(s) per core:  2
Core(s) per socket:  6
Socket(s):           1
NUMA node(s):        1
Vendor ID:           GenuineIntel
CPU family:          6
Model:               158
Model name:          Intel(R) Core(TM) i7-8700 CPU @ 3.20GHz
Stepping:            10
CPU MHz:             800.037
CPU max MHz:         4600.0000
CPU min MHz:         800.0000
BogoMIPS:            6384.00
Virtualization:      VT-x
L1d cache:           32K
L1i cache:           32K
L2 cache:            256K
L3 cache:            12288K
NUMA node0 CPU(s):   0-11


undefined symbol: wl_proxy_marshal_constructor_versioned


Hi,

When I run vainfo I get this error:

vainfo: symbol lookup error: /opt/intel/mediasdk/lib64/libva-wayland.so.2: undefined symbol: wl_proxy_marshal_constructor_versioned
 

I am using Ubuntu 19.04.

How can I resolve this?
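
The missing symbol comes from libwayland-client, which suggests the libva-wayland under /opt/intel/mediasdk was built against a newer libwayland than the one actually being loaded. Two quick checks (paths are assumptions for a typical Ubuntu install):

    # does the system libwayland-client export the symbol?
    nm -D /usr/lib/x86_64-linux-gnu/libwayland-client.so.0 | grep wl_proxy_marshal_constructor_versioned
    # which libwayland does the Media SDK's libva actually resolve against?
    ldd /opt/intel/mediasdk/lib64/libva-wayland.so.2 | grep wayland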

IGD Aperture Size


Hi,

Does the BIOS setting "IGD Aperture Size" affect video decoding/encoding performance?

Does it affect the probability of MFX_ERR_GPU_HANG occurring?

This matters especially when running multiple video decoding/encoding applications on Windows 10.

 

Errors when using several encoding applications


Hi,

Customers often wish to fully utilize processor resources for decoding/encoding live/real-time video streams, i.e. to run multiple Media SDK applications, some using the GPU/HW implementation and some the CPU/SW one.

It is also normal for the number of applications to change from time to time. An alternative approach is a single (multithreaded) application/service that changes the number of handled streams on the fly. E.g. a computer transcodes 5 IPTV streams today and will transcode 7 streams tomorrow, and the 5 already-running streams must not be interrupted while the 2 additional streams start.

But the Intel Media SDK library has a well-known bug that has not been fixed for years: starting/stopping an MFX session can cause errors inside other running sessions. I think the reason is a lack of synchronization primitives somewhere inside the Media SDK libraries.

I have seen such errors on different processors, different Windows versions, etc. The issue can easily be reproduced using the standard Media SDK samples.

Steps to reproduce the bug with the SW library on Windows:

1. Download the latest samples (today it is https://software.intel.com/sites/default/files/managed/61/d0/MediaSample...).
2. Take \_bin\win32\sample_encode.exe from the package.
3. Use the latest Media SDK library (MediaSDK2019R1.exe, libmfxsw32.dll).
4. Take some uncompressed video file. I use this one, _input.nv12: https://drive.google.com/file/d/1z3O6iobsnPLzwQddXlTHoK1UzOIY9fJ3/view?u...
5. Download the two scripts (sample_encode_1.bat and sample_encode_N.bat) attached to this message and put them beside sample_encode.exe.
6. Start sample_encode_N.bat. It will run four sample_encode.exe instances in an infinite loop (a sketch follows below).
7. Wait several days (or less), and you will see error messages in the sample_encode consoles.
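
For readers without the attachment, a hypothetical reconstruction of the stress loop (the actual attached scripts may differ; the codec, flags, file names and resolution here are placeholders):

    :: sample_encode_N.bat -- hypothetical sketch, not the attached script
    start sample_encode_1.bat out1
    start sample_encode_1.bat out2
    start sample_encode_1.bat out3
    start sample_encode_1.bat out4

    :: sample_encode_1.bat -- re-encode the clip forever (%1 names the output)
    :loop
    sample_encode.exe h264 -i _input.nv12 -o %1.h264 -w 1920 -h 1080
    goto loop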

This bug was reported years ago but is still not fixed. Here you can find detailed discussions:
https://software.intel.com/en-us/forums/intel-media-sdk/topic/696953
https://software.intel.com/en-us/forums/intel-media-sdk/topic/475624
https://software.intel.com/en-us/forums/intel-media-sdk/topic/536840
 

Attachment: 4_encoders_loop.zip (application/zip, 691 bytes)


maximum supported video resolution for media SDK and h264_qsv


Hi there,

I am using h264_qsv with FFmpeg to encode video at 3840 x 2160 without an issue.

However, h264_qsv codec initialization fails when I set a video resolution of 7680 x 1080. Does anyone know if this is a limitation of the h264_qsv codec?

How about the Intel Media SDK: does it support encoding H.264 at 7680 x 1080?

thank you and regards

Leon
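
One way to probe the limit programmatically: fill in a candidate resolution and ask the encoder with MFXVideoENCODE::Query before Init. A minimal sketch (assuming an already initialized hardware MFXVideoSession named session; the bitrate and frame rate are placeholders). Many HW AVC implementations cap each dimension (often at 4096) independently of the total pixel count, which would explain 3840x2160 passing while 7680x1080 fails:

    // Sketch: ask the AVC encoder whether a given resolution is supported.
    #include "mfxvideo++.h"
    #include <cstdio>

    bool QueryAvcResolution(MFXVideoSession& session, mfxU16 w, mfxU16 h)
    {
        mfxVideoParam in = {}, out = {};
        in.mfx.CodecId                 = MFX_CODEC_AVC;
        in.mfx.RateControlMethod       = MFX_RATECONTROL_VBR;
        in.mfx.TargetKbps              = 20000;              // placeholder
        in.mfx.FrameInfo.FourCC        = MFX_FOURCC_NV12;
        in.mfx.FrameInfo.ChromaFormat  = MFX_CHROMAFORMAT_YUV420;
        in.mfx.FrameInfo.PicStruct     = MFX_PICSTRUCT_PROGRESSIVE;
        in.mfx.FrameInfo.FrameRateExtN = 30;                 // placeholder
        in.mfx.FrameInfo.FrameRateExtD = 1;
        in.mfx.FrameInfo.CropW         = w;
        in.mfx.FrameInfo.CropH         = h;
        in.mfx.FrameInfo.Width         = (w + 15) & ~15;     // 16-aligned
        in.mfx.FrameInfo.Height        = (h + 15) & ~15;
        in.IOPattern = MFX_IOPATTERN_IN_SYSTEM_MEMORY;

        MFXVideoENCODE enc(session);
        mfxStatus sts = enc.Query(&in, &out);
        printf("%ux%u -> Query status %d\n", (unsigned)w, (unsigned)h, (int)sts);
        return sts >= MFX_ERR_NONE; // warnings mean "supported with changes"
    }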

how to encode h265 in lookahead bitrate mode?


When encoding H.264 video, the lookahead rate control method can improve encoding quality, so I want to encode an H.265 video file in lookahead mode too.
I don't know how to use LA mode for H.265 encoding. I think la_ext is the implementation of the H.265 LA mode, so my command line is:
./sample_multi_transcode -i::h265 input.265 -o::h265 out.hevc -b 3000 -w 1920 -h 1080 -u 4 -lad 100 -la_ext

The result is:

Multi Transcoding Sample Version 8.4.27.

libva info: VA-API version 1.0.0
libva info: va_getDriverName() returns 0
libva info: User requested driver 'iHD'
libva info: Trying to open /opt/intel/mediasdk/lib64/iHD_drv_video.so
libva info: Found init function __vaDriverInit_1_0
libva info: va_openDriver() returns 0
Session 0:
plugin_loader.h :185 [INFO] Plugin was loaded from GUID: { 0x58, 0x8f, 0x11, 0x85, 0xd4, 0x7b, 0x42, 0x96, 0x8d, 0xea, 0x37, 0x7b, 0xb5, 0xd0, 0xdc, 0xb4 } (Intel (R) Media SDK plugin for LA ENC)

[ERROR], sts=MFX_ERR_UNSUPPORTED(-3), EncodePreInit, m_pmfxENC->Query failed at /home/lpb/git_hub/MediaSDK/samples/sample_multi_transcode/src/pipeline_transcode.cpp:461

[ERROR], sts=MFX_ERR_UNSUPPORTED(-3), Init, EncodePreInit failed at /home/lpb/git_hub/MediaSDK/samples/sample_multi_transcode/src/pipeline_transcode.cpp:3721

[ERROR], sts=MFX_ERR_UNSUPPORTED(-3), Init, pThreadPipeline->pPipeline->Init failed at /home/lpb/git_hub/MediaSDK/samples/sample_multi_transcode/src/sample_multi_transcode.cpp:442
plugin_loader.h :211 [INFO] MFXBaseUSER_UnLoad(session=0x0x56331c850c50), sts=0

[ERROR], sts=MFX_ERR_UNSUPPORTED(-3), main, transcode.Init failed at /home/lpb/git_hub/MediaSDK/samples/sample_multi_transcode/src/sample_multi_transcode.cpp:1169

My CPU is an i7-6700HQ, the API version is 1.28, and the libva version is 1.0.
So how do I encode H.265 in lookahead mode? And do my CPU and driver version support H.265 LA mode?
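
One way to check support directly is to Query the encoder with the lookahead rate control before Init. A minimal sketch (assumptions: an initialized session with the HEVC plugin already loaded; only the relevant fields are shown):

    // Sketch: ask the HEVC encoder whether lookahead BRC is supported.
    mfxVideoParam in = {}, out = {};
    in.mfx.CodecId           = MFX_CODEC_HEVC;
    in.mfx.RateControlMethod = MFX_RATECONTROL_LA;  // lookahead BRC
    in.mfx.TargetKbps        = 3000;
    MFXVideoENCODE enc(session);
    mfxStatus sts = enc.Query(&in, &out);
    // MFX_ERR_UNSUPPORTED here would match the EncodePreInit failure above;
    // out.mfx.RateControlMethod shows what the library changed it to.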


QuickSync AVC encoder crash access violation


Hello

I'm new to QuickSync and the Intel Media SDK, so I've checked the examples and tutorials. I need to encode YUV420P input frames with the AVC encoder using HW acceleration. As a starting point I took the simple_encode sample (the one where system memory is used for the encoding surfaces). I've initialized all the parameters, but a crash (access violation reading location) occurs shortly after encoding starts (at about the 730-760 ms frame timestamp). I tried different videos.

I've checked the FFmpeg implementation, but still couldn't fix the crash. I've also tried using aligned memory for the surface data, separate bitstream instances for each encode call, and copying the frame data and the packet data to standalone instances, but the crash still occurs. I'm stuck with this and not sure how to fix it. I've also checked the behavior on a different Intel GPU with the same result. An Intel Media SDK trace didn't clarify anything.

My hardware: Intel Core i7-6700K (Intel HD Graphics 530). One monitor is connected to an NVIDIA GPU; the Intel HD Graphics adapter has no monitor connected (a valid use case for DX11, as far as I can see). D3D9 initialization fails, but D3D11 works fine. Please check my initialization and encode calls below.

Also, I have to note that GetFreeSurface always returns the first surface (idx == 0) and the output packets have wrong PTS/DTS values (usually 0). See the logs below. The frames are passed to the Encode method in presentation order.

mfxStatus LQuickSyncEncoder::InitEncoder()
{
   mfxVersion version;
   version.Major = 1;
   version.Minor = 0;
   // Try to use D3D9 first. It has a limitation: it is unavailable without a connected monitor.
   // So, if D3D9 initialization fails, we fall back to D3D11.
   mfxStatus result = session.Init(MFX_IMPL_HARDWARE_ANY | MFX_IMPL_VIA_D3D9, &version);

   if (result != MFX_ERR_NONE) {
      LFTRACE("Failed to init MFX_IMPL_HARDWARE_ANY | MFX_IMPL_VIA_D3D9 accelerated session.");
      // MFX_IMPL_HARDWARE_ANY will utilize HW acceleration even if Intel HD Graphics device is not associated with primary adapter.
      // This is common case if system has high-performance GPU and Intel CPU with embedded Intel HD Graphics GPU.
      // MFX_IMPL_VIA_D3D11 is required, because Direct3D11 allows using hardware acceleration even without a connected monitor.
      result = session.Init(MFX_IMPL_HARDWARE_ANY | MFX_IMPL_VIA_D3D11, &version);
      if (result != MFX_ERR_NONE) {
         // Failed to init hardware accelerated session.
         // We're not interested in the software implementation, because we can use x264 or WMF encoders instead. So, don't try to init an IMPL_AUTO or IMPL_SOFTWARE session here.
         LFTRACE("Failed to init MFX_IMPL_HARDWARE_ANY | MFX_IMPL_VIA_D3D11 accelerated session.");
         return result;
      } else LFTRACE("Succeeded to init MFX_IMPL_HARDWARE_ANY | MFX_IMPL_VIA_D3D11 accelerated session.");
   } else LFTRACE("Succeeded to init MFX_IMPL_HARDWARE_ANY | MFX_IMPL_VIA_D3D9 accelerated session.");

   // SandyBridge(2nd Gen Intel Core) is required for QuickSync usage.
   // TODO: we could require a minimum CPU generation for QuickSync support. For example, we could select Ivy Bridge (3rd generation Intel Core) as the first CPU we support for Intel QuickSync.
   // This can be done with platform request from the session.
   // Why might we need this? Possibly the first generations of Intel QuickSync produce low quality at approximately the same performance as the CPU, so we wouldn't gain any advantage from using them.
   // See: session.QueryPlatform method

   // Set the packets timebase
   packet.rTimebase = MSEC_TIMEBASE;

   // Set required video parameters for encode
   mfxVideoParam mfxEncParams;
   ZeroMemory(&mfxEncParams, sizeof(mfxEncParams));

   mfxEncParams.mfx.CodecId = MFX_CODEC_AVC; // TODO: QuickSync also supports MPEG-2, VP8, VP9 and H.265/HEVC encoders.
   mfxEncParams.mfx.CodecProfile = MapH264ProfileToMFXProfile(h264Profile);
   mfxEncParams.mfx.CodecLevel = MapH264LevelToMFXLevel(h264Level);
   mfxEncParams.mfx.TargetUsage = MFX_TARGETUSAGE_BALANCED; // TODO: we can support "fast speed" here
   if (videoQuality.bUseQualityScale) {
      // Intelligent Constant Quality (ICQ) bitrate control algorithm. It is a value in the 1..51 range, where 1 corresponds to the best quality.
      const uint16_t arICQQuality[] = { 51, 38, 25, 12, 1 };
      mfxEncParams.mfx.ICQQuality = arICQQuality[videoQuality.qualityScale];
      mfxEncParams.mfx.RateControlMethod = MFX_RATECONTROL_ICQ;
   } else {
      mfxEncParams.mfx.TargetKbps = uint16_t(videoQuality.uBitrate);
      mfxEncParams.mfx.RateControlMethod = MFX_RATECONTROL_VBR;
   }
   const LRational rFramerate = LDoubleToRational(1.0 / vf.vfps, LDEFAULT_FRAME_RATE_BASE); // TODO: need to handle variable framerate ?
   mfxEncParams.mfx.FrameInfo.FrameRateExtN = rFramerate.den; // Assign den to num, because we used 1.0 / fps
   mfxEncParams.mfx.FrameInfo.FrameRateExtD = rFramerate.num;
   mfxEncParams.mfx.FrameInfo.FourCC = MFX_FOURCC_NV12;
   mfxEncParams.mfx.FrameInfo.ChromaFormat = MFX_CHROMAFORMAT_YUV420;
   mfxEncParams.mfx.FrameInfo.PicStruct = MFX_PICSTRUCT_PROGRESSIVE;
   mfxEncParams.mfx.FrameInfo.CropX = 0;
   mfxEncParams.mfx.FrameInfo.CropY = 0;
   mfxEncParams.mfx.FrameInfo.CropW = uint16_t(vf.GetWidthPixels());
   mfxEncParams.mfx.FrameInfo.CropH = uint16_t(vf.GetHeightPixels());
   // Width must be a multiple of 16.
   // Height must be a multiple of 16 in case of frame picture and a multiple of 32 in case of field picture.
   mfxEncParams.mfx.FrameInfo.Width = uint16_t(MSDK_ALIGN16((vf.GetWidthPixels())));
   mfxEncParams.mfx.FrameInfo.Height = uint16_t((MFX_PICSTRUCT_PROGRESSIVE == mfxEncParams.mfx.FrameInfo.PicStruct) ? MSDK_ALIGN16(vf.GetHeightPixels()) : MSDK_ALIGN32(vf.GetHeightPixels()));

   const LRational rPixelAspect = vf.PixelAspectRatioGet();

   mfxEncParams.mfx.FrameInfo.AspectRatioW = uint16_t(rPixelAspect.num);
   mfxEncParams.mfx.FrameInfo.AspectRatioH = uint16_t(rPixelAspect.den);
   mfxEncParams.mfx.FrameInfo.BitDepthLuma = 8;
   mfxEncParams.mfx.FrameInfo.BitDepthChroma = 8;

   mfxEncParams.IOPattern = MFX_IOPATTERN_IN_SYSTEM_MEMORY; // TODO: use GPU memory if possible
   //mfxEncParams.AsyncDepth = 1; // TODO: find a correct value

   pEncoder.Assign(new MFXVideoENCODE(session));
   // Validate video encode parameters. The validation result is written to same structure.
   // MFX_WRN_INCOMPATIBLE_VIDEO_PARAM is returned if some of the video parameters are not supported,
   // instead the encoder will select suitable parameters closest matching the requested configuration
   result = pEncoder->Query(&mfxEncParams, &mfxEncParams);
   if ((result != MFX_ERR_NONE) && (result != MFX_WRN_INCOMPATIBLE_VIDEO_PARAM)) {
      LFTRACE("Can't validate video encode parameters");
      return result;
   }

   // Query number of required surfaces for encoder.
   mfxFrameAllocRequest EncRequest;
   ZeroMemory(&EncRequest, sizeof(EncRequest));

   //#define WILL_WRITE 0x2000
   //EncRequest.Type |= /*WILL_WRITE*/MFX_MEMTYPE_SYSTEM_MEMORY | MFX_MEMTYPE_FROM_ENC; // This line is only required for Windows DirectX11 to ensure that surfaces can be written to by the application

   result = pEncoder->QueryIOSurf(&mfxEncParams, &EncRequest);
   if (result != MFX_ERR_NONE) {
      LFTRACE("pEncoder->QueryIOSurf Failed");
      return result;
   }

   const uint16_t uEncSurfaceCount = EncRequest.NumFrameSuggested;
   LFTRACEF(TEXT("Encode surfaces requested [%d]"), uEncSurfaceCount);

   // Allocate surfaces for encoder:
   // Width and height of buffer must be aligned, a multiple of 32.
   // Frame surface array keeps pointers all surface planes and general frame info.
   const uint16_t uRequestWidth = (uint16_t)MSDK_ALIGN32(EncRequest.Info.Width);
   const uint16_t uRequestHeight = (uint16_t)MSDK_ALIGN32(EncRequest.Info.Height);
   const uint8_t uBitsPerPixel = 16; // NV12 is actually 12 bits per pixel; 16 here, combined with the 1.5 factor below, over-allocates each surface.
   const uint32_t uSurfaceSize = uint32_t(uRequestWidth * uRequestHeight * uBitsPerPixel / 8 * 1.5);

   sbaSurfaceBuffers.SetArrayCapacityLarge(uSurfaceSize * uEncSurfaceCount);
   if (!sbaSurfaceBuffers.IsValid()) {
      LFTRACE("Failed to allocate memory for surface buffers");
      return MFX_ERR_MEMORY_ALLOC;
   }
   sbaSurfaceBuffers.SetSize(uSurfaceSize * uEncSurfaceCount);

   // Allocate surface headers (mfxFrameSurface1) for encoder.
   saSurfaces.SetArrayCapacity(uEncSurfaceCount);
   saSurfaces.SetSize(uEncSurfaceCount);

   //uint8_t* pMem = (uint8_t*)_aligned_malloc(uSurfaceSize * uEncSurfaceCount, 32);
   //ZeroMemory(pMem, uSurfaceSize * uEncSurfaceCount);

   for (size_t i = 0; i < uEncSurfaceCount; i++) {
      saSurfaces[i].Assign(new mfxFrameSurface1);
      ZeroMemory(saSurfaces[i].get(), sizeof(mfxFrameSurface1));
      memcpy(&(saSurfaces[i]->Info), &(mfxEncParams.mfx.FrameInfo), sizeof(mfxFrameInfo));

      saSurfaces[i]->Data.Y = &sbaSurfaceBuffers/*pMem*/[uSurfaceSize * i];
      saSurfaces[i]->Data.U = saSurfaces[i]->Data.Y + uRequestWidth * uRequestHeight;
      // We're using the NV12 format here. U and V values are interleaved, so the V address is always U + 1.
      saSurfaces[i]->Data.V = saSurfaces[i]->Data.U + 1;
      saSurfaces[i]->Data.PitchLow = uRequestWidth;
   }

   // Initialize the Media SDK encoder.
   result = pEncoder->Init(&mfxEncParams);
   // Ignore the partial acceleration warning
   if (result == MFX_WRN_PARTIAL_ACCELERATION) {
      LFTRACE("Encoder will work with partial HW acceleration");
      result = MFX_ERR_NONE;
   }
   if (result != MFX_ERR_NONE) {
      LFTRACE("Failed to init the encoder");
      return result;
   }

   // Retrieve video parameters selected by encoder.
   mfxVideoParam selectedParameters;
   ZeroMemory(&selectedParameters, sizeof(selectedParameters));

   mfxExtCodingOptionSPSPPS mfxSPSPPS;
   ZeroMemory(&mfxSPSPPS, sizeof(mfxSPSPPS));

   mfxSPSPPS.Header.BufferId = MFX_EXTBUFF_CODING_OPTION_SPSPPS;
   mfxSPSPPS.Header.BufferSz = sizeof(mfxExtCodingOptionSPSPPS);
   LSizedByteArray saSPS(128);
   LSizedByteArray saPPS(128);
   mfxSPSPPS.PPSBuffer = saPPS.get();
   mfxSPSPPS.PPSBufSize = uint16_t(saPPS.GetSize());
   mfxSPSPPS.SPSBuffer = saSPS.get();
   mfxSPSPPS.SPSBufSize = uint16_t(saSPS.GetSize());

   // We need to get SPS and PPS data only
   mfxExtBuffer* pExtBuffers[] = { (mfxExtBuffer*)&mfxSPSPPS };
   selectedParameters.ExtParam = pExtBuffers;
   selectedParameters.NumExtParam = lenof(pExtBuffers);

   result = pEncoder->GetVideoParam(&selectedParameters);
   if (result != MFX_ERR_NONE) {
      LFTRACE("Failed to GetVideoParam of the encoder");
      return result;
   }

   // Get ExtraData
   const size_t uExtradataSize = mfxSPSPPS.SPSBufSize + mfxSPSPPS.PPSBufSize;
   if (uExtradataSize == 0) {
      LFTRACE("Extradata has wrong size.");
      return MFX_ERR_INCOMPATIBLE_VIDEO_PARAM;
   }

   saExtradata.SetArrayCapacity(uExtradataSize);
   saExtradata.SetSize(uExtradataSize);
   memcpy(saExtradata.get(), mfxSPSPPS.SPSBuffer, mfxSPSPPS.SPSBufSize);
   memcpy(saExtradata.get() + mfxSPSPPS.SPSBufSize, mfxSPSPPS.PPSBuffer, mfxSPSPPS.PPSBufSize);

   // Prepare Media SDK bit stream buffer
   ZeroMemory(&mfxBS, sizeof(mfxBS));
   mfxBS.MaxLength = selectedParameters.mfx.BufferSizeInKB * 1000; // selectedParameters.mfx.BufferSizeInKB * 1000 - this is copied from the sample
   sbaBitstreamData.SetArrayCapacityLarge(mfxBS.MaxLength);
   if (!sbaBitstreamData.IsValid()) {
      LFTRACE("Failed to allocate mempoy for bitstream buffer");
      return MFX_ERR_MEMORY_ALLOC;
   }
   sbaBitstreamData.SetSize(mfxBS.MaxLength);

   mfxBS.Data = sbaBitstreamData.get();

   return result;
}

bool LQuickSyncEncoder::Encode(const LVideoFrame& frm)
{
   LFTRACE();

   if (packet.DataIsValid()) {
      LFDEBUG("Previous packet hasn't been read. You must call GetNextPacket() after every Encode()");
      return false;
   }

   const videoposition_t vpCurFrame = frm.GetPosition();
   if (vpCurFrame <= vpPreviousFrame) {
      LFTRACE("Current pts is the same or less then the previous pts. Drop frame.");
      return true;
   }

   // Main encoding method
   mfxFrameSurface1* pSurface = GetFreeSurface(saSurfaces); // Find free frame surface
   if (pSurface == nullptr) {
      LFDEBUG("No free enc surface available");
      return false;
   }

   LFTRACEF(TEXT("Current frame pos = [%d]"), vpCurFrame);

   mfxStatus status = LoadFrameData(pSurface, frm);

   if (status != MFX_ERR_NONE) {
      LFTRACE("Unable to load frame data to enc surface");
      return false;
   }

   //LPtr<mfxBitstream> pBS(new mfxBitstream);
   //ZeroMemory(pBS.get(), sizeof(mfxBitstream));
   //LSizedByteArray BSData;
   //BSData.SetArrayCapacityLarge(mfxBS.MaxLength);
   //BSData.IsValid();
   //BSData.SetSize(mfxBS.MaxLength);
   //ZeroMemory(BSData.get(), mfxBS.MaxLength);
   //pBS->Data = BSData.get();
   //pBS->MaxLength = mfxBS.MaxLength;

   while (true) {
      // Encode a frame asynchronously (returns immediately)
      status = pEncoder->EncodeFrameAsync(NULL, pSurface, &mfxBS/*pBS.get()*/, &syncPoint);

      if ((status > MFX_ERR_NONE) && (syncPoint == NULL)) { // Repeat the call if warning and no output
         if (MFX_WRN_DEVICE_BUSY == status) Sleep(5); // Wait if device is busy, then repeat the same call
      } else if ((status > MFX_ERR_NONE) && (syncPoint != NULL)) {
         status = MFX_ERR_NONE; // Ignore warnings if output is available
         break;
      } else if (MFX_ERR_NOT_ENOUGH_BUFFER == status) {
         // TODO: Allocate more bitstream buffer memory here if needed...
         LFDEBUG("Encoder requested more memory");
         break;
      } else break;
   }

   if (MFX_ERR_NONE == status) {
      status = session.SyncOperation(syncPoint, 60000); // Synchronize. Wait until encoded frame is ready
      if (status != MFX_ERR_NONE) {
         LFDEBUG("Failed to Wait until encoded frame is ready");
         return false;
      }

      status = WritePacketData(packet, mfxBS/**pBS.get()*/);
      if (status != MFX_ERR_NONE) {
         LFDEBUG("Failed to write packet data");
         return false;
      }
   }

   vpPreviousFrame = vpCurFrame;

   // MFX_ERR_MORE_DATA is a valid case: the encoder works in async mode and asked for more frames before generating a packet
   return (status == MFX_ERR_MORE_DATA) || (status == MFX_ERR_NONE);
}


mfxStatus LQuickSyncEncoder::LoadFrameData(mfxFrameSurface1* pSurface, const LVideoFrame& frm)
{
   // Get pointers to frames data planes
   LImageScanlineConstIterator siY(frm.GetImageBuffer());
   LImageScanlineIteratorU siU(frm.GetImageBuffer());
   LImageScanlineIteratorV siV(frm.GetImageBuffer());

   // Copy source frames to the mxf structures
   const size_t uHeight = size_t(frm.GetImageBuffer().GetFormat().iHeight);

   if ((uHeight / 2) > 2048) {
      LFDEBUG("Max supported U and V planes height is 2048");
      return MFX_ERR_UNSUPPORTED;
   }

   //mfxFrameInfo* pInfo = &pSurface->Info;
   mfxFrameData* pData = &pSurface->Data;
   pData->TimeStamp = LRescaleRational(frm.GetPosition(), MSEC_TIMEBASE, MFX_TIMEBASE);

   // Copy Y plane
   memcpy(pData->Y, (uint8_t*)siY.Get(), pData->PitchLow * uHeight);

   // Copy UV planes data
   size_t uOffset = 0;
   const size_t uUVWidthBytes = frm.GetImageBuffer().GetFormat().GetWidthBytesPlanarU();
   
   while(siU.IsValid() && siV.IsValid()) {
      const uint8_t* pU = (uint8_t*) siU.Get();
      const uint8_t* pV = (uint8_t*) siV.Get();

      for (size_t i = 0; i < uUVWidthBytes; i++) {
         pData->U[uOffset] = pU[i]; // U byte
         pData->V[uOffset] = pV[i]; // V byte
         uOffset += 2;
      }
      siU.Next();
      siV.Next();
   }

   return MFX_ERR_NONE;
}

mfxStatus LQuickSyncEncoder::WritePacketData(LMediaPacket& _packet, mfxBitstream& _mfxBitstream)
{
   // Reinit the bitstream
   //const uint32_t umfxBitstreamMaxData = _mfxBitstream.MaxLength;
   //ZeroMemory(&_mfxBitstream, sizeof(_mfxBitstream));
   //_mfxBitstream.MaxLength = umfxBitstreamMaxData;
   //mfxBS.Data = sbaBitstreamData.get();

   // Use copy of the bitstream data
   //LSizedByteArray saNewPacketData(size_t(_mfxBitstream.DataLength));
   //memcpy(saNewPacketData.get(), _mfxBitstream.Data + _mfxBitstream.DataOffset, _mfxBitstream.DataLength);
   //_packet.DataAssignArray(saNewPacketData);
   // Line below: use bitstream data itself
   _packet.DataSetExternalReference(_mfxBitstream.Data + _mfxBitstream.DataOffset, size_t(_mfxBitstream.DataLength)); // Set the frame data
   // Set PTS
   if (_mfxBitstream.TimeStamp == MFX_TIMESTAMP_UNKNOWN) _packet.pts = LVID_NO_TIMESTAMP_VALUE;
   else _packet.pts = LRescaleRational(_mfxBitstream.TimeStamp, MFX_TIMEBASE, MSEC_TIMEBASE); // PTS comes from the frame passed to QuickSync
   // Set DTS
   if (_mfxBitstream.DecodeTimeStamp == MFX_TIMESTAMP_UNKNOWN) _packet.dts = LVID_NO_TIMESTAMP_VALUE;
   else _packet.dts = LRescaleRational(_mfxBitstream.DecodeTimeStamp, MFX_TIMEBASE, MSEC_TIMEBASE); // DTS is set by QuickSync
   // Reset flags value and set valid value for this packet
   _packet.uFlags = 0;
   if ((_mfxBitstream.FrameType & MFX_FRAMETYPE_IDR) || (_mfxBitstream.FrameType & MFX_FRAMETYPE_xIDR)) _packet.uFlags |= LMEDIA_PACKET_I_FRAME;

   if ((_mfxBitstream.FrameType & MFX_FRAMETYPE_I) || (_mfxBitstream.FrameType & MFX_FRAMETYPE_xI)) _packet.uFlags |= LMEDIA_PACKET_I_FRAME;
   else if ((_mfxBitstream.FrameType & MFX_FRAMETYPE_P) || (_mfxBitstream.FrameType & MFX_FRAMETYPE_xP)) _packet.uFlags = LMEDIA_PACKET_P_FRAME;
   else if ((_mfxBitstream.FrameType & MFX_FRAMETYPE_B) || (_mfxBitstream.FrameType & MFX_FRAMETYPE_xB)) _packet.uFlags = LMEDIA_PACKET_B_FRAME;
   
   LFTRACEF(TEXT("Packet PTS=%lld; DTS=%lld"), _packet.pts, _packet.dts);
   _mfxBitstream.DataLength = 0;
   _mfxBitstream.DataOffset = 0;
   return MFX_ERR_NONE;
}

mfxFrameSurface1* LQuickSyncEncoder::GetFreeSurface(const LSizedArray<LPtr<mfxFrameSurface1> >& _saSurfaces)
{
   for (size_t i = 0; i < _saSurfaces.GetSize(); i++) {
      if (_saSurfaces[i]->Data.Locked == 0) {
         LFTRACEF(TEXT("Free surface idx = %d"), int(i));
         return _saSurfaces[i].get();
      }
   }
   return nullptr;
}

Input video 1:

00:00:12.438  MAIN  LQuickSyncEncoder::Encode

00:00:12.438  MAIN  LQuickSyncEncoder::GetFreeSurface Free surface idx = 0

00:00:12.438  MAIN  LQuickSyncEncoder::Encode Current frame pos = [700]

00:00:12.454  MAIN  LQuickSyncEncoder::WritePacketData Packet PTS=0; DTS=0

00:00:12.454  MAIN  LQuickSyncEncoder::GetNextPacket

00:00:12.454  MAIN  LMultiplexerMP4<class LOutputStreamFileNotify>::WritePacket

00:00:12.454  MAIN  LMultiplexerMPEG4Base<class LOutputStreamFileNotify>::WritePacketInternal

00:00:12.469  MAIN  LQuickSyncEncoder::GetNextPacket

00:00:12.469  MAIN  LImageBufferCopy

00:00:12.469  MAIN  LImageBufferCopy

00:00:12.469  MAIN  LQuickSyncEncoder::Encode

00:00:12.485  MAIN  LQuickSyncEncoder::GetFreeSurface Free surface idx = 0

00:00:12.485  MAIN  LQuickSyncEncoder::Encode Current frame pos = [734]

Exception thrown at 0x124E5296 (libmfxhw32.dll) in ***.exe: 0xC0000005: Access violation reading location 0x20BB4FA4.

Input video 2:

00:00:42.812  MAIN  LQuickSyncEncoder::Encode

00:00:42.812  MAIN  LQuickSyncEncoder::GetFreeSurface Free surface idx = 0

00:00:42.828  MAIN  LQuickSyncEncoder::Encode Current frame pos = [760]

Exception thrown at 0x115E5296 (libmfxhw32.dll) in ***.exe: 0xC0000005: Access violation reading location 0x1EA4DAA8.

Intel Media SDK calls trace:

2828 2019-10-22 15:32:2:735     bs.reserved[]={ 0, 0, 0, 0, 0, 0 }
2828 2019-10-22 15:32:2:735     bs.DecodeTimeStamp=0
2828 2019-10-22 15:32:2:735     bs.TimeStamp=0
2828 2019-10-22 15:32:2:735     bs.Data=0000000024BC6064
2828 2019-10-22 15:32:2:735     bs.DataOffset=0
2828 2019-10-22 15:32:2:735     bs.DataLength=0
2828 2019-10-22 15:32:2:735     bs.MaxLength=3264000
2828 2019-10-22 15:32:2:735     bs.PicStruct=1
2828 2019-10-22 15:32:2:735     bs.FrameType=4
2828 2019-10-22 15:32:2:735     bs.DataFlag=0
2828 2019-10-22 15:32:2:735     bs.reserved2=0
2828 2019-10-22 15:32:2:735     mfxSyncPoint* syncp=00005800
2828 2019-10-22 15:32:2:735 function: MFXVideoENCODE_EncodeFrameAsync(0.0245 msec, status=MFX_ERR_NONE) - 

2828 2019-10-22 15:32:2:735 function: MFXVideoCORE_SyncOperation(mfxSession session=00743BE0, mfxSyncPoint syncp=00005800, mfxU32 wait=60000) +
2828 2019-10-22 15:32:2:736     mfxSession session=05E45DCC
2828 2019-10-22 15:32:2:736     mfxSyncPoint* syncp=00005800
2828 2019-10-22 15:32:2:736     mfxU32 wait=60000
2828 2019-10-22 15:32:2:736 >> MFXVideoCORE_SyncOperation called
2828 2019-10-22 15:32:2:736     mfxSession session=05E45DCC
2828 2019-10-22 15:32:2:736     mfxSyncPoint* syncp=00005800
2828 2019-10-22 15:32:2:736     mfxU32 wait=60000
2828 2019-10-22 15:32:2:736 function: MFXVideoCORE_SyncOperation(0.4659 msec, status=MFX_ERR_DEVICE_FAILED) - 

2828 2019-10-22 15:32:4:3 function: MFXVideoENCODE_EncodeFrameAsync(mfxSession session=00743BE0, mfxEncodeCtrl *ctrl=00000000, mfxFrameSurface1 *surface=00000000, mfxBitstream *bs=007435BE, mfxSyncPoint *syncp=0074361A) +
2828 2019-10-22 15:32:4:3     mfxSession session=05E45DCC
2828 2019-10-22 15:32:4:3     bs.EncryptedData=00000000
2828 2019-10-22 15:32:4:3     bs.NumExtParam=0
2828 2019-10-22 15:32:4:3     bs.ExtParam=00000000

2828 2019-10-22 15:32:4:3     bs.reserved[]={ 0, 0, 0, 0, 0, 0 }
2828 2019-10-22 15:32:4:3     bs.DecodeTimeStamp=0
2828 2019-10-22 15:32:4:3     bs.TimeStamp=0
2828 2019-10-22 15:32:4:3     bs.Data=0000000024BC6064
2828 2019-10-22 15:32:4:3     bs.DataOffset=0
2828 2019-10-22 15:32:4:3     bs.DataLength=0
2828 2019-10-22 15:32:4:3     bs.MaxLength=3264000
2828 2019-10-22 15:32:4:3     bs.PicStruct=1
2828 2019-10-22 15:32:4:3     bs.FrameType=4
2828 2019-10-22 15:32:4:3     bs.DataFlag=0
2828 2019-10-22 15:32:4:3     bs.reserved2=0
2828 2019-10-22 15:32:4:3     mfxSyncPoint* syncp=00005800
2828 2019-10-22 15:32:4:3 >> MFXVideoENCODE_EncodeFrameAsync called
2828 2019-10-22 15:32:4:3     mfxSession session=05E45DCC
2828 2019-10-22 15:32:4:3     bs.EncryptedData=00000000
2828 2019-10-22 15:32:4:3     bs.NumExtParam=0
2828 2019-10-22 15:32:4:3     bs.ExtParam=00000000

2828 2019-10-22 15:32:4:3     bs.reserved[]={ 0, 0, 0, 0, 0, 0 }
2828 2019-10-22 15:32:4:3     bs.DecodeTimeStamp=0
2828 2019-10-22 15:32:4:3     bs.TimeStamp=0
2828 2019-10-22 15:32:4:3     bs.Data=0000000024BC6064
2828 2019-10-22 15:32:4:3     bs.DataOffset=0
2828 2019-10-22 15:32:4:3     bs.DataLength=0
2828 2019-10-22 15:32:4:3     bs.MaxLength=3264000
2828 2019-10-22 15:32:4:3     bs.PicStruct=1
2828 2019-10-22 15:32:4:3     bs.FrameType=4
2828 2019-10-22 15:32:4:3     bs.DataFlag=0
2828 2019-10-22 15:32:4:3     bs.reserved2=0
2828 2019-10-22 15:32:4:3     mfxSyncPoint* syncp=00000000
2828 2019-10-22 15:32:4:4 function: MFXVideoENCODE_EncodeFrameAsync(0.0053 msec, status=MFX_ERR_DEVICE_FAILED) - 

 

Update: the structure packing is set to 1 outside of this code (#pragma pack(1)). If I restore the default value (8), the crash does not occur. The behavior is similar with the sample_encode example. Does this mean the Intel Media SDK structures are sensitive to memory alignment? Can you confirm that the default packing (8) should be used?
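
For anyone hitting the same crash, a minimal sketch of the workaround implied by the update, assuming the project-wide #pragma pack(1) cannot simply be removed: restore default packing around the Media SDK headers so the mfx structures keep the layout the runtime DLL was built with.

    // Sketch: shield the Media SDK headers from a project-wide #pragma pack(1).
    // The mfx structures are shared with libmfxhw32.dll, so both sides must
    // agree on field offsets; mismatched packing corrupts the layout.
    #pragma pack(push, 8)   // 8 is the assumed toolchain default
    #include "mfxvideo++.h"
    #pragma pack(pop)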

Intel Media SDK on Xeon Scalable processors


Hello, can the Intel Media SDK run on processors from the Xeon Scalable family, specifically on Xeon Gold 6140? And if yes, would it run on Ubuntu (16.04 or 18.04)? Many thanks

HEVC SW Encoder


Hi,

Since Intel Media Server Studio no longer seems to be sold by Intel, does this mean that the HEVC software encode plugin

mfxplugin64_hevce_sw.dll

can now be included in commercial products for unlimited distribution?

On the webpage https://software.intel.com/en-us/media-sdk it states:

Note Intel® Media Server Studio is no longer available but you can access its features in other products:

However, the redist.txt in Intel Media SDK 2019 R1 still lists that DLL as a Limited Distribution Redistributable:

List of Limited Distribution Redistributables:

Intel(R) Media Software Development Kit 2018 R2 - HEVC Encoder:
mfxplugin64_hevce_sw.dll
mfxplugin64_hevce_gacc.dll

If you would like to purchase licenses for use with computer systems that contain additional or unlimited processor sockets, or to distribute additional or unlimited copies of Your Product, please contact your Intel sales representative or reseller, or visit https://software.intel.com/en-us/media-server-studio-pro

 

Is it just that the redist.txt was not updated correctly for the 2019 R1 release, and that there are no longer any limited distribution requirements for it?

 

Thanks
