Channel: Intel® Software - Media
Viewing all 2185 articles
Browse latest View live

Low Latency Decode of H.264


Dear Tech Support:

I am using the latest version of the Media SDK to decode an H.264 bitstream and need a low-latency result: when I issue the decode call, I need a decoded frame back as soon as possible, not after the SDK has queued up a number of frames before returning a surface to draw into. Currently I make the following call to decode my bitstream:

sts = m_pmfxDEC->DecodeFrameAsync(pBitstream, &(m_pCurrentFreeSurface->frame), &pOutSurface, &(m_pCurrentFreeOutputSurface->syncp));

it returns 16 consecutive MFX_ERR_MORE_SURFACE results before I get an MFX_ERR_NONE result and a valid non-NULL pOutSurface pointer to draw from. This creates a 16-frame delay, and since I'm running in real time, that is over half a second of latency for normal 30 FPS video - not cool.

I have the following during my initialization:

AsyncDepth = 1
bUseHWLib = 1
m_b_DecOutSystem = SYSTEM_MEMORY
memType = D3D9_MEMORY

and the bitstream data flag set for MFX_BITSTREAM_COMPLETE_FRAME

I am providing the decoder with an H.264 bitstream containing I and P frames only - no B frames - and the first frame I provide is always an I frame. Can you tell me what I need to do for the H.264 decoder to return MFX_ERR_NONE and a non-NULL pOutSurface (hopefully after the first, or no later than the second, decoder call) instead of having to wait 16 frames? Please advise as to what I am doing wrong.

Thanks,
Tim

 


Media SDK 2017 R1 32 bit vs. 64 bit


Hello,

The Media SDK 2017 R1 release notes state that "This release supports only 64-bit Microsoft* Windows* applications."
As the installation package still includes 32-bit libraries, I assume this statement refers to a 64-bit OS.
So applications built with Media SDK 2017 R1 can still be 32-bit, right?

Thank you,
Stefan.

invalid read/write shown by valgrind


While using Media-SDK calls with the iHD video driver, valgrind shows some invalid reads/writes and conditional jumps in iHD_drv_video.so. Some sample errors reported by valgrind are pasted below. Our application sometimes crashes after processing about 100K-200K frames of VGA size (color conversion followed by H.264 encode). We are not able to reproduce the crash with a small application, so we have to depend on tools like valgrind. If you can suggest any other tool to check for memory problems, we can use that as well.

Media SDK used from - MediaServerStudioEssentials2017R3

OS: Ubuntu 16.04

H/W: Intel compute stick core-m3

==2164==
==2164== Conditional jump or move depends on uninitialised value(s)
==2164==    at 0xF253CE5: ??? (in /opt/intel/mediasdk/lib64/iHD_drv_video.so)
==2164==    by 0x12498A98: ??? (in /opt/intel/mediasdk/lib64/libmfxhw64-p.so.1.23)
==2164==    by 0x12498C0D: ??? (in /opt/intel/mediasdk/lib64/libmfxhw64-p.so.1.23)
==2164==    by 0x12646C18: ??? (in /opt/intel/mediasdk/lib64/libmfxhw64-p.so.1.23)
==2164==    by 0x1265A45E: ??? (in /opt/intel/mediasdk/lib64/libmfxhw64-p.so.1.23)
==2164==    by 0x12648D7E: ??? (in /opt/intel/mediasdk/lib64/libmfxhw64-p.so.1.23)
==2164==    by 0x1264F252: ??? (in /opt/intel/mediasdk/lib64/libmfxhw64-p.so.1.23)
==2164==    by 0x1247DF4F: MFXVideoVPP_QueryIOSurf (in /opt/intel/mediasdk/lib64/libmfxhw64-p.so.1.23)
==2164==    by 0x46FDCF: sr::VideoPreProcessorElement::setupSession(mfxVideoParam*) (VideoPreProcessorElement.cpp:328)

==2164== Invalid write of size 8
==2164==    at 0x4C3453F: memset (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==2164==    by 0xF1FF367: ??? (in /opt/intel/mediasdk/lib64/iHD_drv_video.so)
==2164==    by 0xF1FC333: ??? (in /opt/intel/mediasdk/lib64/iHD_drv_video.so)
==2164==    by 0xF1F6693: ??? (in /opt/intel/mediasdk/lib64/iHD_drv_video.so)
==2164==    by 0xF1DC2CF: ??? (in /opt/intel/mediasdk/lib64/iHD_drv_video.so)
==2164==    by 0xF26EDF4: ??? (in /opt/intel/mediasdk/lib64/iHD_drv_video.so)
==2164==    by 0xB66ECA3: vaCreateContext (in /usr/lib64/libva.so.1.9900.0)
==2164==    by 0x12592F84: ??? (in /opt/intel/mediasdk/lib64/libmfxhw64-p.so.1.23)
==2164==    by 0x1258DFA7: ??? (in /opt/intel/mediasdk/lib64/libmfxhw64-p.so.1.23)
==2164==    by 0x1251B1A7: ??? (in /opt/intel/mediasdk/lib64/libmfxhw64-p.so.1.23)
==2164==    by 0x12516C75: ??? (in /opt/intel/mediasdk/lib64/libmfxhw64-p.so.1.23)
==2164==    by 0x12479343: MFXVideoENCODE_Init (in /opt/intel/mediasdk/lib64/libmfxhw64-p.so.1.23)
==2164==  Address 0x7f74098e4000 is not stack'd, malloc'd or (recently) free'd

==2164== Invalid write of size 2
==2164==    at 0x4C3242B: memcpy@GLIBC_2.2.5 (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==2164==    by 0xF286D0B: ??? (in /opt/intel/mediasdk/lib64/iHD_drv_video.so)
==2164==    by 0xF1FFAC8: ??? (in /opt/intel/mediasdk/lib64/iHD_drv_video.so)
==2164==    by 0xF1DD16B: ??? (in /opt/intel/mediasdk/lib64/iHD_drv_video.so)
==2164==    by 0xF06C89A: ??? (in /opt/intel/mediasdk/lib64/iHD_drv_video.so)
==2164==    by 0xF066588: ??? (in /opt/intel/mediasdk/lib64/iHD_drv_video.so)
==2164==    by 0xF1DC38C: ??? (in /opt/intel/mediasdk/lib64/iHD_drv_video.so)
==2164==    by 0xF26EDF4: ??? (in /opt/intel/mediasdk/lib64/iHD_drv_video.so)
==2164==    by 0xB66ECA3: vaCreateContext (in /usr/lib64/libva.so.1.9900.0)
==2164==    by 0x12592F84: ??? (in /opt/intel/mediasdk/lib64/libmfxhw64-p.so.1.23)
==2164==    by 0x1258DFA7: ??? (in /opt/intel/mediasdk/lib64/libmfxhw64-p.so.1.23)
==2164==    by 0x1251B1A7: ??? (in /opt/intel/mediasdk/lib64/libmfxhw64-p.so.1.23)
==2164==  Address 0x7f74098d2180 is not stack'd, malloc'd or (recently) free'd
==2164==

 

Using Timestamps to Synchronize Display of Decoded Video


Dear Tech Support:

I am using the Intel Media SDK to decode an H.264 bitstream and need to use timestamps to synchronize the display of the video frames. I don't see the sample code addressing how to use a timestamp to render a recently decoded frame. In the provided sample code there is a m_d3dRender.RenderFrame( frame, m_pGeneralAllocator ); call that I assume displays the newly decoded frame shortly thereafter. Is this the place where I determine exactly when to render the frame based on a timestamp, or is there a better way of doing this using the Media SDK example code? Please advise.

Thanks,
Tim 

MFX_IMPL_AUTO_ANY bug when upgrading to Intel(R)_Media_SDK_2016.0.2 ?


Using a version of the Intel Media SDK prior to 2016.0.2, one of our products performed perfectly fine. After upgrading, we now see strange behaviour.

We are using primarily the hardware decoding features of the SDK on supported Intel hardware to provide excellent H264 decoding in the product.

The product is deployed on 64-bit versions of Windows 7 & Windows 10.  The integration to the Intel SDK is via C++ native code.

We use the MFX_IMPL_AUTO_ANY flag in the setup of the SDK to choose hardware accelerated decoding where available (virtually always) and provide the libmfxsw64.dll file to allow fall-back to software decoding where not available.

We have tested the upgraded product on two Intel systems:

- Intel(R) HD Graphics 4600                    10.18.14.4578
- Intel(R) HD Graphics 520                     22.20.16.4749

The product performs as expected on the 4600 system, but on the 520 system hardware accelerated decoding is not enabled *while* the libmfxsw64.dll file is available. If the libmfxsw64.dll file is removed, the 520 system performs hardware accelerated decoding.

The bug happens on the 520 for both Windows 7 & Windows 10.

This very much feels like a bug introduced in the 2016.0.2 SDK, as two other posts [both with no solution] have similar experiences:

https://software.intel.com/en-us/node/738387
https://software.intel.com/en-us/forums/intel-media-sdk/topic/604995

The outputs from the SDK analyzer are attached.

Is there any work-around that can be suggested to correctly detect HW support on the 520?

Regards,

Mark

 

Attachments:
520.txt (text/plain, 8.48 KB)
4600.txt (text/plain, 2.13 KB)

Problem in encoding MVC with 2 RGB4 Input streams


I used the 2017 SDK R1 with the following command:

     sample_encode.exe mvc -rgb4 -i L.rgb -i R.rgb -o BRGB.mvc -w 1920 -h 1080

The output file shows "noise" on the upper half during playback.

But the display will be ok when I encode each file separately with the following command:

     sample_encode.exe h264 -rgb4 -i L.rgb -o BRGB.mp4 -w 1920 -h 1080​

Furthermore, the display will also be OK when I convert L.rgb and R.rgb into L.yuv and R.yuv and then use the following command:

     sample_encode.exe mvc -nv12 -i L.yuv -i R.yuv -o BRGB.mvc -w 1920 -h 1080

-----------------------------------------------------------------

Is it possible to use RGB4 streams to make an MVC file that displays correctly?

Thanks,

Cannot find LA ENC plugin


Hi,

I am trying to use LA_EXT on Windows 10 with a Kaby Lake i5-7500 platform for both H.264 and HEVC encoding, but it fails. The Media SDK version is 2017 R1.

mediasdk_release_notes.pdf says (on page 12) that HEVC encode supports MFX_RATECONTROL_LA_EXT with the lookahead plugin.

 

Par file:

la_ext_264.par:

-i::h264 test.264 -o::sink -gop_size 33 -dist 4 -num_ref 2 -u 4 -la_ext -lad 33 -async 50
-i::source -o::h264 output.264 -gop_size 33 -dist 4 -num_ref 2 -u 4 -la_ext -lad 33 -w 1920 -h 1080 -async 50

la_ext_265.par:

-i::h264 test.264 -o::sink -gop_size 33 -dist 4 -num_ref 2 -u 4 -la_ext -lad 33 -async 50
-i::source -o::h265 output.265 -gop_size 33 -dist 4 -num_ref 2 -u 4 -la_ext -lad 33 -w 1920 -h 1080 -async 50

 

Below is my command line:

C:\Users\gc\Documents\Samples for Intel(R) Media SDK 2017 for Windows 8.0.24.271\_build\x64\Release>sample_multi_transcode.exe -par la_ext_264.par
Multi Transcoding Sample Version 8.0.24.0

Par file is: la_ext_264.par

plugin_loader.h :166 [ERROR] Failed to load plugin from GUID, sts=-9: { 0x58, 0x8f, 0x11, 0x85, 0xd4, 0x7b, 0x42, 0x96, 0x8d, 0xea, 0x37, 0x7b, 0xb5, 0xd0, 0xdc, 0xb4 } (Intel (R) Media SDK plugin for LA ENC)
MFX HARDWARE Session 0 API ver 1.25 parameters:
Input  video: AVC
Output video: To child session
MFX dll: C:\Program Files\Intel\Media SDK\libmfxhw64.dll

MFX HARDWARE Session 1 API ver 1.25 parameters:
Input  video: From parent session
Output video: AVC
MFX dll: C:\Program Files\Intel\Media SDK\libmfxhw64.dll

[ERROR], sts=MFX_ERR_UNSUPPORTED(-3), TranscodingSample::CTranscodingPipeline::CalculateNumberOfReqFrames, m_pmfxPreENC.get()->QueryIOSurf failed at src\pipeline_transcode.cpp:2895

[ERROR], sts=MFX_ERR_UNSUPPORTED(-3), TranscodingSample::CTranscodingPipeline::AllocFrames, CalculateNumberOfReqFrames failed at src\pipeline_transcode.cpp:2727

[ERROR], sts=MFX_ERR_UNSUPPORTED(-3), TranscodingSample::CTranscodingPipeline::CompleteInit, AllocFrames failed at src\pipeline_transcode.cpp:3378

[ERROR], sts=MFX_ERR_UNSUPPORTED(-3), TranscodingSample::Launcher::Init, m_pSessionArray[i]->pPipeline->CompleteInit failed at src\sample_multi_transcode.cpp:369

[ERROR], sts=MFX_ERR_UNSUPPORTED(-3), wmain, transcode.Init failed at src\sample_multi_transcode.cpp:750

C:\Users\gc\Documents\Samples for Intel(R) Media SDK 2017 for Windows 8.0.24.271\_build\x64\Release>sample_multi_transcode.exe -par la_ext_265.par
Multi Transcoding Sample Version 8.0.24.0

Par file is: la_ext_265.par

plugin_loader.h :166 [ERROR] Failed to load plugin from GUID, sts=-9: { 0x58, 0x8f, 0x11, 0x85, 0xd4, 0x7b, 0x42, 0x96, 0x8d, 0xea, 0x37, 0x7b, 0xb5, 0xd0, 0xdc, 0xb4 } (Intel (R) Media SDK plugin for LA ENC)
MFX HARDWARE Session 0 API ver 1.25 parameters:
Input  video: AVC
Output video: To child session
MFX dll: C:\Program Files\Intel\Media SDK\libmfxhw64.dll

plugin_loader.h :170 [INFO] Plugin was loaded from GUID: { 0x6f, 0xad, 0xc7, 0x91, 0xa0, 0xc2, 0xeb, 0x47, 0x9a, 0xb6, 0xdc, 0xd5, 0xea, 0x9d, 0xa3, 0x47 } (Intel (R) Media SDK HW plugin for HEVC ENCODE)
MFX HARDWARE Session 1 API ver 1.25 parameters:
Input  video: From parent session
Output video: HEVC
MFX dll: C:\Program Files\Intel\Media SDK\libmfxhw64.dll

[ERROR], sts=MFX_ERR_UNSUPPORTED(-3), TranscodingSample::CTranscodingPipeline::CalculateNumberOfReqFrames, m_pmfxPreENC.get()->QueryIOSurf failed at src\pipeline_transcode.cpp:2895

[ERROR], sts=MFX_ERR_UNSUPPORTED(-3), TranscodingSample::CTranscodingPipeline::AllocFrames, CalculateNumberOfReqFrames failed at src\pipeline_transcode.cpp:2727

[ERROR], sts=MFX_ERR_UNSUPPORTED(-3), TranscodingSample::CTranscodingPipeline::CompleteInit, AllocFrames failed at src\pipeline_transcode.cpp:3378

[ERROR], sts=MFX_ERR_UNSUPPORTED(-3), TranscodingSample::Launcher::Init, m_pSessionArray[i]->pPipeline->CompleteInit failed at src\sample_multi_transcode.cpp:369

[ERROR], sts=MFX_ERR_UNSUPPORTED(-3), wmain, transcode.Init failed at src\sample_multi_transcode.cpp:750
plugin_loader.h :196 [INFO] MFXBaseUSER_UnLoad(session=0x0000026810BF56F0), sts=0

 

It seems the Media SDK doesn't contain the LA ENC plugin. Please let me know how to get the plugin or how to use la_ext in Windows. Thank you.

Best regards,

Charles Gao

ffmpeg: mpegts muxer: Non-monotonous DTS in output stream when using b-frames


When encoding with the main or high profile with B-frames, and outputting via the mpegts muxer over UDP, I fairly frequently (roughly every 10-30 seconds) get "Non-monotonous DTS in output stream" warnings from the mpegts muxer. It seems (though only sometimes, and not on all B-frames, because there are plenty of B-frames in the stream) that the B-frames are somehow out of order with regard to DTS.

An example log of one such occurrence, with 3 frames affected:
2017/11/14 13:15:34 [mpegts @ 0x4609da0] Non-monotonous DTS in output stream 0:2; previous: 973677600, current: 973645200; changing to 973677601. This may result in incorrect timestamps in the output file.
2017/11/14 13:15:34 [mpegts @ 0x4609da0] Non-monotonous DTS in output stream 0:2; previous: 973677601, current: 973648800; changing to 973677602. This may result in incorrect timestamps in the output file.
2017/11/14 13:15:34 [mpegts @ 0x4609da0] Non-monotonous DTS in output stream 0:2; previous: 973677602, current: 973652400; changing to 973677603. This may result in incorrect timestamps in the output file.

One occurrence can affect one or more frames in sequence, up to the encoder's max B-frames setting.

I am pretty unsure whether this is a real problem, as I was unable to link it to any obvious picture artifacts in the player when playing such a stream. Regardless, I would like to understand whether this is a known issue and whether it is caused by ffmpeg or by the Media SDK.
Please help with any indication of where to look for the cause.

Using ffmpeg version n3.2.8

python sys_analyzer_linux.py
--------------------------
Hardware readiness checks:
--------------------------
 [ OK ] Processor name: Intel(R) Core(TM) i7-4860EQ CPU @ 1.80GHz
--------------------------
OS readiness checks:
--------------------------
 [ OK ] GPU visible to OS
 [ OK ] Linux distro suitable for Media Server Studio 2016 Gold
--------------------------
Media Server Studio Install:
--------------------------
 [ ERROR ] user not in video group.  Add with usermod -a -G video {user}
 [ OK ] libva.so.1 found
 [ OK ] vainfo reports valid codec entry points
 [ OK ] /dev/dri/renderD128 connects to Intel i915
--------------------------
Component Smoke Tests:
--------------------------
 [ OK ] Media SDK HW API level:1.17
 [ OK ] Media SDK SW API level:1.17
 [ OK ] OpenCL check:platform:Intel(R) OpenCL GPU OK CPU OK

 


How to test the samples?


Greetings from a beginner programmer. I've downloaded the samples, and the code builds and compiles. But when I run an application (e.g. sample_decode.exe), the console window closes immediately.

I wonder how I can test the samples using the content I found in the "_bin/content" folder.

CPU load and private bytes of Quick Sync software are ever-increasing


Hello.

I have 2 issues on H.264 decoding software using Intel media SDK - Quick Sync hardware decoding.

Issue 1: The CPU load of the software is ever-increasing. It reaches 100% in 5 to 10 hours and then stays there. We checked with Process Explorer and found many threads in MFXVideoVPP_GetVPPStat (about 150), and the CPU load of each thread keeps increasing.

Issue 2: The private bytes of the program are ever-increasing. It looks like a memory leak.

My environment:
Windows10 Enterprise 1607
16GB memory
SkyLake Xeon E3-1275 v5 (I have same issues on KabyLake Xeon E3-1275 v6)
Driver version in Device Manager: 22.20.16.4815 (An older driver version didn't have these two issues, but it had a more critical issue, so I cannot use it.)

I need a solution to these two issues ASAP. Any advice is appreciated.

Copying buffers from across MSDK sessions


Hi,

I have a question about copying data across Media SDK sessions. I have two sessions, A and B.

Session A does only decoding.

Session B does only VPP scaling.

I am providing Session A's output as input to Session B. As far as I understand, I cannot allocate the buffers for the decoded output in video memory; I have to allocate the decoder output buffers in system memory and provide those as input to Session B.

In general, are the input and output of any session required to be in system memory?

Please confirm.

 

Best Regards,

Rajesh

 

 

Errors encountered when executing the samples


Greetings from a beginner programmer. I've built the sample code for decode and encode using Visual Studio Express and started testing from the Command Prompt. After I supplied the parameters, the following errors appeared.

For encode: sample_encode.exe h264 -i input.yuv -o output.h264 -w 720 -h 480 -hw

[ERROR], sts=MFX_ERR_UNKNOWN(-1), CEncodingPipeline::GetFreeTask, m_TaskPool.SynchronizeFirstTask failed at src\pipeline_encode.cpp:1516

[ERROR], sts=MFX_ERR_UNKNOWN(-1), CEncodingPipeline::Run, m_pmfxENC->EncodeFrameAsync failed at src\pipeline_encode.cpp:1851

[ERROR], sts=MFX_ERR_UNKNOWN(-1), wmain, pPipeline->Run failed at src\sample_encode.cpp:1072

For decode: sample_decode.exe h264 -i text.mp4 -o output.yuv -d3d -sw

[ERROR], sts=MFX_ERR_MORE_DATA(-10), CDecodingPipeline::InitMfxParams, m_FileReader->ReadNextFrame failed at src\pipeline_decode.cpp:702

[ERROR], sts=MFX_ERR_MORE_DATA(-10), CDecodingPipeline::Init, InitMfxParams failed at src\pipeline_decode.cpp:404

[ERROR], sts=MFX_ERR_MORE_DATA(-10), wmain, Pipeline.Init failed at src\sample_decode.cpp:719


SDK2017R3 gold install


When installing SDK2017R3 on an older CentOS version like 7.2 (which I had installed because it was required by an older version of the SDK)...
The installation script MediaServerStudioEssentials2017R3/SDK2017Production16.5.2/CentOS/install_sdk_CentOS.sh seems
1. Not to install the required kernel, so the installation fails
2. If the kernel is already there, not to bring the whole CentOS up to 7.3.1611, leaving it in a weird state of 7.2 with some 7.3 packages installed
 

How do I use QSV with ffmpeg?


 

Hello.

I get an error using 'qsv'.

 

-----------------------------------------
ffmpeg -i input.mp4 -vcodec h264_qsv output.mp4
ffmpeg version 3.4 Copyright (c) 2000-2017 the FFmpeg developers
  built with gcc 4.8.5 (GCC) 20150623 (Red Hat 4.8.5-16)
  configuration: --prefix=/usr/service --extra-cflags=-I/home/junkl/test/ffmpeg_build/include --extra-ldflags='-fopenmp -L/home/junkl/test/ffmpeg_build/lib -L/usr/lib64' --extra-libs='-lgomp -lgpg-error' --pkg-config-flags=--static --optflags='-O2 -g' --enable-gpl --enable-nonfree --enable-version3 --disable-stripping --enable-frei0r --enable-gmp --enable-libass --enable-libfdk-aac --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libilbc --enable-libmodplug --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenh264 --enable-libopenjpeg --enable-libopus --enable-libsnappy --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxvid --enable-libzimg --enable-libmfx --enable-nvenc --enable-avisynth --enable-bzlib --enable-fontconfig --enable-iconv --enable-libxvid --enable-lzma --enable-zlib --enable-gnutls --enable-gcrypt --enable-gmp --enable-librtmp --enable-libsoxr --disable-doc --disable-htmlpages --disable-manpages --disable-podpages --disable-txtpages
XXXXXXXXXX(skip message)
[h264_qsv @ 0x4b93420] No device available for encoder (device type qsv for codec h264_qsv).
[h264_qsv @ 0x4b93420] Error initializing an internal MFX session: unsupported (-3)
Error initializing output stream 0:0 -- Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height
[aac @ 0x4c79f40] Qavg: 39017.027
[aac @ 0x4c79f40] 2 frames left in the queue on closing
Conversion failed!
-----------------------------------------

Please teach me step by step.

My system info is ....

CPU : Intel(R) Xeon(R) CPU E3-1275 v6 @ 3.80GHz
VGA :
    00:02.0 VGA compatible controller [0300]: Intel Corporation Device [8086:591d] (rev 04)
    01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP106GL [Quadro P2000] [10de:1c30] (rev a1)
OS: LINUX (CentOS Linux release 7.4.1708 (Core))
    Linux localhost.localdomain 3.10.0-514.el7.x86_64 #1 SMP Tue Nov 22 16:42:41 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
MOD: (not found module. by command (lsmod | grep i915) )
ETC: (omitted devel package list)
    libmfx-1.21
    libva-1.2.1
    libdrm-2.4.74
    cairo-1.14.8
    intel-gpu-tools-2.99.917
    xorg-x11-drv-intel
    xorg-x11-drv-intel-2.99.917
    gstreamer1-vaapi-0.6.1

--------------------------------------
# vainfo
error: can't connect to X server!
libva info: VA-API version 0.34.0
libva info: va_getDriverName() returns -1
libva error: va_getDriverName() failed with unknown libva error,driver_name=(null)
vaInitialize failed with error code -1 (unknown libva error),exit
---------------------------------------

 

PS.
    How do I install MediaServerStudioEssentials2017R3 on CentOS 7.4.1708 (not CentOS 7.2)?
    Does QSV require an X server?

 


Change Bitrate of Hardware Encoder at runtime?


Hello,

I wanted to know: is it possible to change the bitrate of the hardware H.264 encoder at runtime?

If yes, please guide me through the steps.

 

Configuring HEVC tiles in sample_encode R3 2017 release


I have tried using the mfxExtHEVCTiles structure as described in the API document. I made a few changes to sample_encode (R3 2017 version) to control the number of tiles in the picture, but for any value above 1 for NumTileRows or NumTileColumns I get an error from Query during allocation. The error is MFX_ERR_UNSUPPORTED.

Does this mean that this version of the SDK does not support multiple tiles, or am I doing something wrong?

A dilemma: X display & Linux remote desktop & vainfo & Code Builder


hi,

I am accessing my Ubuntu machine remotely from Windows 10 (in the way this describes: https://askubuntu.com/questions/592537/can-i-access-ubuntu-from-windows-remotely ). I have MSS installed. Over an SSH terminal, vainfo and the samples run perfectly. However, problems happen with remote desktop, and I find myself in a dilemma - unset DISPLAY or export it?

Let me illustrate.

1. The problem begins with Eclipse Code Builder failing with the error "Failed to update machine list: Could not load required libraries: please make sure to set the correct path under the Code Builder for OpenCL preference page."

2. So I went to check "python sys_analyzer_linux.py -v" and got the errors below. (Please note: everything works fine from an SSH client terminal.)

--------------------------
Hardware readiness checks:
--------------------------
 [ OK ] Processor name: Intel(R) Core(TM) i7-6700K CPU @ 4.00GHz
 [ INFO ] Intel Processor
 [ INFO ] Processor brand: Core
 [ INFO ] Processor arch: Skylake
--------------------------
OS readiness checks:
--------------------------
 [ INFO ] GPU PCI id     : 1912
 [ INFO ] GPU description: SKL DT GT2
 [ OK ] GPU visible to OS
 [ INFO ] no nomodeset in GRUB cmdline (good)
 [ INFO ] Linux distro   : Ubuntu 16.04
 [ INFO ] Linux kernel   : 4.4.0
 [ INFO ] glibc version  : 2.23
 [ INFO ] Linux distro suitable for Generic install
 [ INFO ] gcc version    : 20160609 (>=4.8.2 suggested)
--------------------------
Media Server Studio Install:
--------------------------
 [ OK ] user in video group
 [ OK ] libva.so.1 found
 [ ERROR ] libva not loading Intel iHD
 [ ERROR ] vainfo not reporting codec entry points

 [ INFO ] i915 driver in use by Intel video adapter
 [ OK ] /dev/dri/renderD128 connects to Intel i915
--------------------------
Component Smoke Tests:
--------------------------
 [ OK ] Media SDK HW API level:1.23
 [ OK ] Media SDK SW API level:1.23
 [ OK ] OpenCL check:platform:Intel(R) OpenCL GPU OK CPU OK
platform:Experimental OpenCL 2.1 CPU Only Platform GPU OK CPU OK

3. Then I ran "vainfo" (also from the remote desktop). The problem shows:

libva info: VA-API version 0.99.0
Xlib:  extension "XFree86-DRI" missing on display ":10.0".
libva info: va_getDriverName() returns -1

libva error: va_getDriverName() failed with unknown libva error,driver_name=(null)
vaInitialize failed with error code -1 (unknown libva error),exit

4. I noticed the environment variable "DISPLAY=:10.0" is set in the remote desktop session. If I unset DISPLAY, vainfo works without problem.

5. But I cannot unset DISPLAY or export "DISPLAY=:0.0", because if I unset it, Eclipse fails with an X server connection issue:

Failed to connect to Mir: Failed to connect to server socket: No such file or directory
Unable to init server: Could not connect: Connection refused
Eclipse: Cannot open display:
Failed to connect to Mir: Failed to connect to server socket: No such file or directory
Unable to init server: Could not connect: Connection refused
Eclipse: Cannot open display:
Failed to connect to Mir: Failed to connect to server socket: No such file or directory
Unable to init server: Could not connect: Connection refused
Failed to connect to Mir: Failed to connect to server socket: No such file or directory
Unable to init server: Could not connect: Connection refused

Summary: the dilemma is that I have to keep "DISPLAY=:10.0" to start Eclipse, while at the same time I have to unset it to make Code Builder work (it seems Code Builder depends on vainfo-like VA driver access).

 

 

P.S. My system info:

Media SDK Client or Media Server Studio version installed: MediaServerStudioEssentials2017R3.tar.gz

Processor Type (required both for Linux & Windows OS): Intel(R) Core(TM) i7-6700K CPU @ 4.00GHz

Operating System (required both for Linux & Windows OS): Ubuntu 16.04.3 LTS

In addition, for Linux OS, the output of uname -r on my system: Linux iot-demo 4.4.0 #2 SMP Tue Oct 17 11:42:45 CST 2017 x86_64 x86_64 x86_64 GNU/Linux (The original Ubuntu 16.04 kernel version is 4.10.0-28; I downgraded it as the MSS release required.)

Add more options to sample_encode/sample_multi_transcode


 

Hello Intel Developers

I'm currently using the latest Intel Media Samples and I need some help with the current code.

To sample_multi_transcode I need to add a cropping function and a noGPB function, and to sample_encode I need to add all target usages, MFX_TARGET_USAGE_1 through 7, because it currently supports only the 3 main modes: quality, balanced, and speed. Can someone help me out?

I always encode with TU3 (quality mode).

Thanks

How much data should I pass to the bitstream for better performance?


I have a video editing application and use Intel MSDK 2017.0.1 for decoding frames with hardware acceleration. I have my own demuxer that feeds encoded bits to the decoder. My demuxer/splitter can provide the input bitstream as one complete frame of data, less than one frame (a partial frame), or multiple frames.

I want to weigh two options here:

1) Provide only a single frame of encoded data to the decoder and ask it to return a single decoded frame. I achieved this by setting AsyncDepth to one and setting the MFX_STREAM_COMPLETE_FRAME data flag in mfxBS. Note that in this mode DecodeFrameAsync returned MFX_ERR_MORE_DATA the first time I called it with a single frame of encoded data; I then had to explicitly call DecodeFrameAsync with NULL as the bitstream input, after which the decoder returned MFX_ERR_NONE and I fetched the decoded frame.

2) Provide the decoder with as much encoded input data as it asks for, let it buffer internally, and finally call SyncOperation once DecodeFrameAsync returns MFX_ERR_NONE.

I want to know when to use which option. In what scenarios should I use the first approach, and when the second?


