Hi
Would Intel's Media Server software support the development of a Direct-to-Home (DTH) Satellite Distribution Network?
I use Media SDK 2017 for real-time HEVC encoding. After a period of time, the encoder returns -21 (MFX_ERR_GPU_HANG). What causes this problem?
OS : Centos 7.2.1511
CPU : Intel(R) Core(TM) i7-6820EQ CPU @ 2.80GHz
GPU : 00:02.0 VGA compatible controller: Intel Corporation Device 191b (rev 06)
Hi,
I failed to set up an H.264 encoder accelerated by Intel hardware on the following platform:
CPU: Intel Core i3-2120 (with Intel HD Graphics 2000)
OS: Microsoft Windows Server 2008 R2 64-bit
After searching and downloading from the Intel Download Center website, none of the following drivers works:
(1) Intel® HD Graphics Driver for Windows Vista* 64 (exe) https://downloadcenter.intel.com/download/21414/Intel-HD-Graphics-Driver... Version: 15.22.54.64.2622 (Latest) Date: 1/21/2012
(2) Intel® HD Graphics Driver for Windows* 7/8-64-bit https://downloadcenter.intel.com/download/24971/Intel-HD-Graphics-Driver... Version: 15.28.24.64.4229 (Latest) Date: 6/5/2015
The results are the same; both mediasdk_system_analyzer_64.exe and mediasdk_system_analyzer_32.exe just report:
Version Target Supported Dec Enc
1.0 HW No
1.0 SW No
1.1 HW No
1.1 SW No
1.3 HW No
1.3 SW No
1.4 HW No
1.4 SW No
1.5 HW No
1.5 SW No
1.6 HW No
1.6 SW No
1.7 HW No
1.7 SW No
1.8 HW No
1.8 SW No
" - HW target does not work: If you expect it should, then make sure to install latest Intel gfx driver, and that Intel gfx is selected as primary driver"
Hence my conclusion is that, since the Intel Download Center website doesn't provide a graphics driver for Windows Server 2008 R2, the similar drivers for Windows 7/Vista unfortunately just don't work on Windows Server 2008 R2 as expected. So I don't know where to get a proper driver for this platform, and consequently how to get the H.264 hardware encoder working on it.
Our requirement for Windows Server 2008 R2 is mandatory; could you please help with this ASAP?
thanks
Charles
Hello,
I am trying to install Media Server Studio (MediaServerStudioProfessionalEvaluation2016.tar) on Ostro OS. I have an Intel Joule IoT board. I am stuck at the 4th step of applying the kernel patches, which is "for i in ../intel-kernel-patches/*.patch; do patch -p1 <$i; done"
For the results, refer to the screenshot below.
However, I tried the further steps, which are:
make olddefconfig
make -j 8 # got an error again at this step; refer to the screenshot below
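One thing worth noting about step 4: the loop from the guide keeps going even when a patch fails, which makes it hard to tell which patch broke. A sketch of a variant that stops at the first failing patch (assuming the same directory layout as in the instructions, run from the kernel source tree):

```shell
# Apply each patch, but stop at the first failure so the offending
# patch is easy to identify (assumes the patches live in
# ../intel-kernel-patches, as in the installation guide)
for i in ../intel-kernel-patches/*.patch; do
    echo "Applying $i"
    if ! patch -p1 < "$i"; then
        echo "FAILED on $i" >&2
        break
    fi
done
```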
Can the Intel Media SDK be used in a virtual machine (VMware Workstation 12.5.2) running Windows 7 x64?
I have a CCTV system where 4 analog CCTV cameras are connected to a Hikvision DVR box. The Hikvision box has a C API which allows us to retrieve H.264 streams from each of the 4 analog cameras. The problem we have is that the aggregate bandwidth is too much (~4 Mbps costs too much), since we plan to send this over the network. So what we are thinking is to transcode the streams with a more efficient codec like HEVC / H.265.
This leads us to Intel Media SDK.
Is my understanding correct that I could just study the code in `sample_multi_transcode`? Will I be able to simply take the raw H264 stream and put it through the appropriate Intel Media SDK functions and it would be able to transcode the stream in real-time?
As for the hardware itself, I plan to use Skylake or Kaby Lake (when it's out) for maximum performance because, frankly, I'm afraid that CPU performance would not be enough otherwise. The hardware running the program will be installed at each CCTV pole, unsupervised and "low power" (as in, it cannot consume power like rack servers do).
Media SDK Client or Media Server Studio version installed: unknown
Processor Type (required both for Linux & Windows OS): Intel(R) Core(TM) i7-5557U CPU
Driver Version(required only for windows OS): 20.19.15.4531
Operating System (required both for Linux & Windows OS): Windows 10 Pro 10.0.14393
Media SDK System Analyzer(only for windows OS): N/A
Concise Description of the Issue:
I want to encode video using the "Intel® Quick Sync Video H.264 Encoder MFT". I'm using the MFT manually, without using a paired decoder MFT, or any other MediaFoundation components. Feeding normal buffers (IMFSamples with buffers created by MFCreateAlignedMemoryBuffer) works well.
Now I'm investigating whether I can feed it ID3D11Texture2D surfaces as input (DXGI_FORMAT_NV12, 1280x720) in order to improve performance. I tried to pass IMFSample instances created with MFCreateVideoSampleFromSurface or MFCreateDXGISurfaceBuffer to IMFTransform::ProcessInput and made multiple experiments (trying different texture creation flags), but the best result was that all input samples were accepted, but no output samples produced. In case it matters, I never actually tried uploading data to the textures, assuming this would not make a difference from textures filled with garbage pixel data.
Additionally, I tried setting a IMFDXGIDeviceManager instance with IMFTransform::ProcessMessage and MFT_MESSAGE_SET_D3D_MANAGER, but this call returned E_FAIL. (The same works when doing hardware decoding with the Microsoft H264 decoder MFT.) In my understanding, this is a prerequisite for the hardware encoder to work.
Am I doing something wrong? Is this a bug in the Intel MFT? Is it just not supported?
I wish I could post a mftrace log, but the Intel MFT will fail with "Merit validation failed" under mftrace - there are others who seem to have this problem, but I've seen no solution.
There's another problem, unrelated to the one described above. The Intel encoder MFT does not initialize the output IMFMediaType with MF_MT_MPEG_SEQUENCE_HEADER up front; the attribute appears only after you send it a few input frames, at which point the MFT signals MF_E_TRANSFORM_STREAM_CHANGE and adds MF_MT_MPEG_SEQUENCE_HEADER.
This is very inconvenient. Usually you need MF_MT_MPEG_SEQUENCE_HEADER at the start of the transcoding process to e.g. write the CodecPrivate element for Matroska. The Microsoft encoder MFT does not show this unexpected behavior.
I'm working around this with a bad hack: encoding a dummy frame with CODECAPI_AVEncVideoForceKeyFrame and then discarding the first output sample (trying to reset/flush the encoder MFT to forget previous input doesn't work either). I know that the underlying Quick Sync API can retrieve this information via MFX_EXTBUFF_CODING_OPTION_SPSPPS before any input frames are sent, so this seems like an unnecessary complication.
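For reference, the native-API path mentioned above looks roughly like this; a sketch (untested, compiles only against the Media SDK headers) of retrieving the SPS/PPS headers right after encoder initialization, before any frame is submitted. The helper name and buffer sizes are illustrative:

```cpp
#include "mfxvideo.h"   // Intel Media SDK C API
#include <vector>

// Fills sps/pps with the raw headers the encoder will emit.
// Assumes `session` has an encoder initialized via MFXVideoENCODE_Init().
bool GetEncoderHeaders(mfxSession session,
                       std::vector<mfxU8>& sps, std::vector<mfxU8>& pps)
{
    mfxU8 spsBuf[256] = {}, ppsBuf[256] = {};

    mfxExtCodingOptionSPSPPS spspps = {};
    spspps.Header.BufferId = MFX_EXTBUFF_CODING_OPTION_SPSPPS;
    spspps.Header.BufferSz = sizeof(spspps);
    spspps.SPSBuffer  = spsBuf;  spspps.SPSBufSize = sizeof(spsBuf);
    spspps.PPSBuffer  = ppsBuf;  spspps.PPSBufSize = sizeof(ppsBuf);

    mfxExtBuffer* ext[] = { &spspps.Header };
    mfxVideoParam par = {};
    par.ExtParam    = ext;
    par.NumExtParam = 1;

    // GetVideoParam copies the headers into the attached buffers and
    // updates SPSBufSize/PPSBufSize to the actual sizes on return.
    if (MFXVideoENCODE_GetVideoParam(session, &par) != MFX_ERR_NONE)
        return false;

    sps.assign(spsBuf, spsBuf + spspps.SPSBufSize);
    pps.assign(ppsBuf, ppsBuf + spspps.PPSBufSize);
    return true;
}
```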
Which Media SDK is recommended for encoding 8 SDI HD video sources simultaneously,
with good performance and quality, CBR, and low latency,
using the following HW:
5th Generation Intel® Core™ i7 Quad Core @ 2.6GHz (Broadwell)
HD Graphics 5600
and with OS windows 7
Media SDK for Client 2016, Media Server Studio, or maybe the INDE framework?
Do you think it might be necessary to upgrade the CPU to Skylake, and/or the HD Graphics, or the OS?
thanks
I get 4 live CCTV camera feeds, aggregating at about 4 Mbps, encoded in H.264.
My objective is to cut down on the bandwidth, as I need to send this feed across the network. So I'm thinking this is where the Intel Media SDK can help me: my hope is that the SDK can transcode the H.264 video stream to H.265 in real time.
If I purchase a Kaby Lake processor and wait a bit until the Intel Media SDK is updated to fully support the new processor, will the SDK allow me to take full advantage of hardware transcoding (thus reducing CPU consumption, and in turn heat and power requirements)?
Anyone could help give some details to get this geek excited? ;-)
Thanks!
When I use the multi-transcode sample to transcode an AVC file to another AVC file, it warns:
First-chance exception at 0x7553969b in sample_multi_transcode.exe: Microsoft C++ exception: UMC::h264_exception at memory location 0x0012d338
Can anyone tell me how to deal with it?
My SDK version is 2016 R2.
When I try to install Media SDK 2016, there is an error. The system is CentOS 7.1.
./build_kernel_rpm_CentOS.sh
Building target platforms: x86_64
Building for target x86_64
error: Failed build dependencies:
net-tools is needed by kernel-3.10.0-229.1.2.47109.MSSr1.el7.centos.x86_64
Error... ERROR with "rpmbuild -bb kernel_intel_mod.spec --target=x86_64 --with firmware --without debug --without debuginfo --without perf --without tools --define _topdir /MSS/rpmbuild/SOURCES/.. --define _specdir /MSS/rpmbuild/SOURCES", Return status 1.
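The "Failed build dependencies" message indicates the net-tools package is missing on the build host; on CentOS 7 it can typically be installed with yum before re-running the script (assumes yum and sudo are available):

```shell
# Install the missing build dependency reported by rpmbuild,
# then re-run the kernel build script
sudo yum install -y net-tools
./build_kernel_rpm_CentOS.sh
```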
https://software.intel.com/en-us/articles/intel-inde-media-pack-for-andr...
I followed the instructions, and everything (that I can see) compiles, runs, and initializes. But when I call StartCapturing the app crashes and prints this error log:
Any clues on what I can do to fix this issue?
Hi!
We are using the Intel Media SDK for HW decoding of an H.264 video stream, and zero latency is very important in this case.
We noticed that we always get a latency of two frames:
after we send bitstream frame #N to the decoder, we get decoded frame #(N-2).
We are using the official samples (sample_decode) with d3d11 output and have already set AsyncDepth = 1.
Another observation is that this 2-frame latency does not depend on whether HW decoding is used in the Intel Media SDK.
Note that this behaviour does not depend on system performance: if we provide 1 fps at the DecodeFrameAsync input, we receive the first decoded frame with two seconds of latency.
Can you please help us understand how to obtain decoded frames with no latency?
Thank you very Much.
Svyatoslav Krasnitskiy
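For context on the question above: H.264 decode latency usually comes from display-order reordering in the decoder. A sketch (untested, assumes the Media SDK headers) of the low-latency knobs the SDK exposes, applicable when reordering can be disabled (e.g. the stream has no B-frames) and the application feeds one complete frame per call:

```cpp
#include "mfxvideo.h"   // Intel Media SDK C API

// Configure decoder parameters and the input bitstream for low latency.
void ConfigureLowLatencyDecode(mfxVideoParam& par, mfxBitstream& bs)
{
    par.AsyncDepth = 1;           // no pipelining across async operations

    // Output frames in decoding order instead of display order,
    // removing the reordering delay (only valid if the application
    // can handle decoding order, e.g. streams without B-frames).
    par.mfx.DecodedOrder = 1;

    // Tell DecodeFrameAsync that each mfxBitstream submission holds
    // exactly one complete frame, so the decoder need not buffer data
    // while searching for the next frame boundary.
    bs.DataFlag = MFX_BITSTREAM_COMPLETE_FRAME;
}
```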
I use Intel(R) Media SDK 2016.0.2 and DirectX 9 to develop software
that is installed on clients' computers to receive webcam streams for hardware decoding.
I currently use only the following code to determine whether hardware decoding is supported:
bool IsSupportHWDecode()
{
    mfxIMPL impl = MFX_IMPL_AUTO;
    mfxVersion ver = { {1, 1} };
    mfxSession session = nullptr;

    // If no hardware implementation is available, MFXInit fails and
    // the session must not be queried or closed.
    mfxStatus sts = MFXInit(MFX_IMPL_HARDWARE_ANY | MFX_IMPL_VIA_ANY, &ver, &session);
    if (sts != MFX_ERR_NONE)
        return false;

    // Note: on multi-adapter systems the base type may also be
    // MFX_IMPL_HARDWARE2..4; this check covers the first adapter only.
    MFXQueryIMPL(session, &impl);
    MFXClose(session);
    return MFX_IMPL_BASETYPE(impl) == MFX_IMPL_HARDWARE;
}
Because of the DirectX 9 architecture, whether hardware decoding can be used depends on whether the Intel graphics adapter is active.
(In Intel_Media_Developers_Guide.pdf: "the Intel Graphics adapter needed to have a monitor associated with the device to be active".)
My question is: how can I detect in advance whether hardware decoding can be used? Is there any other API that can be used?
This includes detecting the problem above, hardware decoding limits, and so on.
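One API worth looking at is MFXVideoDECODE_Query, which checks whether a concrete decode configuration is supported before decoding starts. A sketch (untested, assumes the Media SDK headers; the helper name and parameters are illustrative):

```cpp
#include "mfxvideo.h"   // Intel Media SDK C API

// Check whether the session's implementation can decode H.264 at the
// given resolution. Assumes `session` was created with MFXInit().
bool CanDecodeH264(mfxSession session, mfxU16 width, mfxU16 height)
{
    mfxVideoParam in = {}, out = {};
    in.mfx.CodecId                = MFX_CODEC_AVC;
    in.mfx.FrameInfo.FourCC       = MFX_FOURCC_NV12;
    in.mfx.FrameInfo.ChromaFormat = MFX_CHROMAFORMAT_YUV420;
    in.mfx.FrameInfo.Width        = (width  + 15) & ~15; // 16-aligned
    in.mfx.FrameInfo.Height       = (height + 15) & ~15;
    in.IOPattern                  = MFX_IOPATTERN_OUT_VIDEO_MEMORY;
    out.mfx.CodecId               = MFX_CODEC_AVC;

    // MFX_ERR_NONE: configuration fully supported.
    // MFX_WRN_PARTIAL_ACCELERATION: would fall back to partial/SW path.
    return MFXVideoDECODE_Query(session, &in, &out) == MFX_ERR_NONE;
}
```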
I run sample_camera from the command line with these parameters:
sample_camera_d.exe -i camera\single\b_ff.argb16 -f argb16 -w 1280 -h 720 -o camera\single\out_b_ff_16_n_
I use the ARGB16 format as input, and then I get this error:
Return on error: error code -3, d:\program_project\mediasdk_samples_git\samples-
f7b203bc2e5d601079721aca3508522ce025e1c5\samples\sample_camera\src\pipeline_came
ra.cpp 1053
Return on error: error code 1, d:\program_project\mediasdk_samples_git\samples-
f7b203bc2e5d601079721aca3508522ce025e1c5\samples\sample_camera\src\sample_camera
.cpp 625
When I use a Bayer file as input, the sample runs successfully.
The command line: sample_camera_d.exe -i camera\single\test_RAW14.bin -f rggb -w 5494 -h 4114 -b 14 -o camera\single\out_test_RAW14_
result:
Input format R16
Resolution 5504x4128
Crop X,Y,W,H 0,0,5494,4114
Frame rate 24.00
Output format RGB4
Resolution 5504x4128
Crop X,Y,W,H 0,0,5494,4114
Frame rate 24.00
Input memory type system
Output memory type d3d
MediaSDK impl hw
MediaSDK version 1.19
Camera pipe started
Total frames 1
Total time 1.59 sec
Total FPS 0.63 fps
Camera pipe finished
Sample version: 2016 6.0.0.142; OS: Win7 64-bit; VS2015
Does this sample support argb16 input?
Could someone help me with this problem?
Thanks.
Hi all,
I'd like to ask if you have any information about a rack-based P580 server, e.g. an E3-1585 v5 with the P580 GPU.
It would be better to have a PCIe slot to install a capture card.
Thanks
Paul
Is there an option to pay in order to get faster Media SDK support from Intel? Premier support only seems to be available for products that are not free.
Hi, I have a problem.
When I use the Intel GPU to decode an H.264 stream, it always fails, but decoding with FFmpeg is OK.
I have uploaded two streams: one can be decoded successfully by both FFmpeg and the Intel GPU; the other can be decoded successfully only by FFmpeg.
My computer is an Intel Core i5-4590 CPU with 8 GB RAM, running Windows 8.1.
Intel_GPU_Decode_Fail_FFmpeg_Decode_OK.264 => Intel GPU decode Fail, FFmpeg decode OK
Intel_GPU_Decode_OK_FFmpeg_Decode_OK.264 => Intel GPU decode OK, FFmpeg decode OK
thank you for your help,
Bill
Attachment: download (2.92 MB)
Hi,
I'm running simple_transcode_opaque_async_vppresize (from mediasdk-tutorials-0.0.3) with an H264 ES extracted from this file: http://download.blender.org/peach/bigbuckbunny_movies/BigBuckBunny_320x1.... I'm passing -b 500 -f 24/1 to the program.
Here is the stack trace:
Program received signal SIGSEGV, Segmentation fault.
0x00007fffefe2812d in ?? () from /opt/intel/mediasdk/lib64/iHD_drv_video.so
(gdb) bt
#0 0x00007fffefe2812d in ?? () from /opt/intel/mediasdk/lib64/iHD_drv_video.so
#1 0x00007fffefc24097 in ?? () from /opt/intel/mediasdk/lib64/iHD_drv_video.so
#2 0x00007fffef4a8b79 in CmDevice_RT::OSALExtensionExecution(unsigned int, void*, unsigned int, void**, unsigned int) ()
from /opt/intel/common/mdf/lib64/igfxcmrt64.so
#3 0x00007fffef4acda3 in CmDevice_RT::Destroy(CmDevice_RT*&) () from /opt/intel/common/mdf/lib64/igfxcmrt64.so
#4 0x00007fffef4ad567 in DestroyCmDevice () from /opt/intel/common/mdf/lib64/igfxcmrt64.so
#5 0x00007ffff59d0256 in ?? () from /opt/intel/mediasdk/lib64/libmfxhw64-p.so.1.16
#6 0x00007ffff59b8321 in ?? () from /opt/intel/mediasdk/lib64/libmfxhw64-p.so.1.16
#7 0x00007ffff59c8e8c in ?? () from /opt/intel/mediasdk/lib64/libmfxhw64-p.so.1.16
#8 0x00007ffff59c8ff9 in ?? () from /opt/intel/mediasdk/lib64/libmfxhw64-p.so.1.16
#9 0x00007ffff59b598a in ?? () from /opt/intel/mediasdk/lib64/libmfxhw64-p.so.1.16
#10 0x00007ffff59b59de in ?? () from /opt/intel/mediasdk/lib64/libmfxhw64-p.so.1.16
#11 0x00007ffff59b5d09 in ?? () from /opt/intel/mediasdk/lib64/libmfxhw64-p.so.1.16
#12 0x00007ffff59acf1a in MFXClose () from /opt/intel/mediasdk/lib64/libmfxhw64-p.so.1.16
#13 0x000000000040d77d in MFX_DISP_HANDLE::UnLoadSelectedDLL() ()
#14 0x000000000040d7e9 in MFX_DISP_HANDLE::Close() ()
#15 0x000000000040a8b2 in MFXClose ()
#16 0x00000000004048d8 in MFXVideoSession::Close (this=0x7fffffffe220) at /opt/intel/mediasdk/include/mfxvideo++.h:50
#17 0x00000000004047d9 in MFXVideoSession::~MFXVideoSession (this=0x7fffffffe220, __in_chrg=<optimized out>)
at /opt/intel/mediasdk/include/mfxvideo++.h:43
#18 0x000000000040472d in main (argc=7, argv=0x7fffffffe3b8) at src/simple_transcode_opaque_async_vppresize.cpp:695
(gdb)
System details:
1) Centos 7.1.1503
2) i7-5650U CPU @ 2.20GHz
3) Packages:
A) intel-linux-media-16.4.2.1-39163.el7.x86_64
B) intel-gpu-tools-2.99.916-5.el7.x86_64
C) intel-opencl-1.2-16.4-39163.el7.x86_64
Besides this crash, I ran the sample under valgrind, and it reports a lot of invalid reads and writes. I can attach the log file if it will help with further debugging.
Please help!
/shastri
Hi,
I am trying to use the sample_decode application, which comes as part of the Media SDK samples, to render the output on screen.
I am using the following command:
./sample_decode h264 -i in.264 -hw -vaapi -rdrm -window 0 0 704 480
I am getting the following error:
pretending that stream is 30fps one
libva info: VA-API version 0.99.0
libva info: va_getDriverName() returns 0
libva info: User requested driver 'iHD'
libva info: Trying to open /usr/lib/media-libva//iHD_drv_video.so
libva info: Found init function __vaDriverInit_0_32
libva info: va_openDriver() returns 0
drmrender: trying connection: HDMIA
drmrender: found crtc with global search
drmrender: succeeded...
drmrender: connected via HDMIA to 1280x720@60 capable displayReturn on error: error code -1, /opt/intel/mediasdk/samples/sample_common/src/general_allocator.cpp 123
Return on error: error code -1, /opt/intel/mediasdk/samples/sample_decode/src/pipeline_decode.cpp 991
Return on error: error code -1, /opt/intel/mediasdk/samples/sample_decode/src/pipeline_decode.cpp 387
Return on error: error code 1, /opt/intel/mediasdk/samples/sample_decode/src/sample_decode.cpp 630
But the same decode, dumping the output to a file, works fine:
./sample_decode h264 -i in264 -hw -o out.yuv
My platform details are as below:
GordenRidge MRB with Intel Broxton SOC.
Linux distribution : Linux version 4.1.13-abl.
BSP used: Gordon Peak BSP (Apollo Lake) Alpha EC15(GP_BSP_WW20.5_EC15_RC2)
Any help would be greatly appreciated
Best Regards,
Thushara