I'm trying to increase the density of our real-time media server implementation using the Media SDK. The code currently performs only color model conversion (YV12 to NV12 via VPP) and 720p encode at 30 fps. With 8 channels, intel_gpu_top shows between 80 and 100% render and video quality is slightly affected. With 16 channels, intel_gpu_top shows a solid 100% render and our results indicate video quality is poor.

Are these results in line with expectations? I was expecting better density. Is there any way to analyze whether I am doing something suboptimal?
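In case it helps spot anything suboptimal, here is a simplified sketch of how each channel is set up, written against the mfxvideo++.h C++ wrapper. The Channel struct, the initChannel name, the bitrate and GOP values are illustrative placeholders rather than our exact production code, and this sketch uses system-memory surfaces; cleanup is omitted.

#include <cstring>
#include "mfxvideo++.h"

// One channel = one hardware session running VPP (YV12 -> NV12) plus an
// H.264 encode of 1280x720 at 30 fps, using system-memory surfaces.
struct Channel {
    MFXVideoSession session;
    MFXVideoVPP     *vpp     = nullptr;   // created after session.Init()
    MFXVideoENCODE  *encoder = nullptr;
};

mfxStatus initChannel(Channel &ch)
{
    mfxVersion ver = {{0, 1}};            // request API 1.0 or later
    mfxStatus sts = ch.session.Init(MFX_IMPL_HARDWARE_ANY, &ver);
    if (sts != MFX_ERR_NONE)
        return sts;

    // VPP: YV12 in, NV12 out, same geometry and frame rate on both sides.
    mfxVideoParam vppPar;
    memset(&vppPar, 0, sizeof(vppPar));
    vppPar.vpp.In.FourCC        = MFX_FOURCC_YV12;
    vppPar.vpp.In.ChromaFormat  = MFX_CHROMAFORMAT_YUV420;
    vppPar.vpp.In.PicStruct     = MFX_PICSTRUCT_PROGRESSIVE;
    vppPar.vpp.In.Width         = 1280;   // 1280 and 720 are already 16-aligned
    vppPar.vpp.In.Height        = 720;
    vppPar.vpp.In.CropW         = 1280;
    vppPar.vpp.In.CropH         = 720;
    vppPar.vpp.In.FrameRateExtN = 30;
    vppPar.vpp.In.FrameRateExtD = 1;
    vppPar.vpp.Out              = vppPar.vpp.In;
    vppPar.vpp.Out.FourCC       = MFX_FOURCC_NV12;
    vppPar.IOPattern  = MFX_IOPATTERN_IN_SYSTEM_MEMORY |
                        MFX_IOPATTERN_OUT_SYSTEM_MEMORY;
    vppPar.AsyncDepth = 1;                // real-time path, minimal pipelining
    ch.vpp = new MFXVideoVPP(ch.session);
    sts = ch.vpp->Init(&vppPar);
    if (sts < MFX_ERR_NONE)               // warnings (e.g. partial acceleration) tolerated
        return sts;

    // Encoder: H.264, VBR; bitrate and GOP size below are placeholders.
    mfxVideoParam encPar;
    memset(&encPar, 0, sizeof(encPar));
    encPar.mfx.CodecId           = MFX_CODEC_AVC;
    encPar.mfx.TargetUsage       = MFX_TARGETUSAGE_BALANCED;
    encPar.mfx.RateControlMethod = MFX_RATECONTROL_VBR;
    encPar.mfx.TargetKbps        = 2000;  // placeholder
    encPar.mfx.GopPicSize        = 30;    // placeholder
    encPar.mfx.FrameInfo         = vppPar.vpp.Out;
    encPar.IOPattern  = MFX_IOPATTERN_IN_SYSTEM_MEMORY;
    encPar.AsyncDepth = 1;
    ch.encoder = new MFXVideoENCODE(ch.session);
    return ch.encoder->Init(&encPar);
}

Each of the 8 or 16 channels creates its own session roughly like this, and frames are then pushed per channel through RunFrameVPPAsync followed by EncodeFrameAsync. We keep AsyncDepth at 1 for latency reasons.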
Platform:
[root@sut-1300 SDK2015Production16.4.2.1]# lspci -nn -s 00:02.0
00:02.0 Display controller [0380]: Intel Corporation Xeon E3-1200 v3 Processor Integrated Graphics Controller [8086:041a] (rev 06)
vendor_id : GenuineIntel
cpu family : 6
model : 60
model name : Intel(R) Xeon(R) CPU E3-1285 v3 @ 3.60GHz
Thanks - Bob K., Dialogic