Hi. I observe high CPU usage (50%) when transcoding a video for an Android TV client (Jellyfin 0.16…).
The ffmpeg process looks like this:
Code:
/usr/lib/jellyfin-ffmpeg/ffmpeg -analyzeduration 200M -probesize 1G -ss 00:02:00.000 -init_hw_device vaapi=va:/dev/dri/renderD128,driver=iHD -hwaccel vaapi -hwaccel_output_format vaapi -noautorotate -canvas_size 1920x1080 -i file:/media/seagate/test.mkv -noautoscale -map_metadata -1 -map_chapters -1 -threads 0 -map 0:0 -map 0:2 -map -0:0 -codec:v:0 h264_vaapi -rc_mode VBR -b:v 9225839 -maxrate 9225839 -bufsize 18451678 -force_key_frames:0 expr:gte(t,n_forced*3) -filter_complex [0:6]scale=-1:1040:fast_bilinear,scale,crop,pad=max(1920\,iw):max(1040\,ih):(ow-iw)/2:(oh-ih)/2:black@0,crop=1920:1040,format=bgra,hwupload=derive_device=vaapi[sub];[0:0]setparams=color_primaries=bt709:color_trc=bt709:colorspace=bt709,scale_vaapi=format=nv12:extra_hw_frames=24[main];[main][sub]overlay_vaapi=eof_action=pass:repeatlast=0:w=1920:h=1040 -start_at_zero -codec:a:0 copy -copyts -avoid_negative_ts disabled -max_mu
I've read that an Intel N100 with UHD graphics should handle 9 simultaneous 1080p transcodes, so I am puzzled.
How can I debug this? For instance, would a minimal hardware-only transcode like the sketch below isolate the encoder from the subtitle burn-in?
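This is just a sketch of what I have in mind, reusing the same device and file paths as in the command above; it drops the whole subtitle filter_complex chain so only decode and encode are exercised:
Code:
/usr/lib/jellyfin-ffmpeg/ffmpeg -init_hw_device vaapi=va:/dev/dri/renderD128,driver=iHD -hwaccel vaapi -hwaccel_output_format vaapi -i file:/media/seagate/test.mkv -c:v h264_vaapi -b:v 9M -an -f null -
If that runs with low CPU and intel_gpu_top shows the Video engine busy, the load in my real command would presumably come from the software subtitle filters (scale/crop/pad/format) rather than the encoder.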
Thanks a lot!
FWIW, here is the intel_gpu_top output:
Code:
intel-gpu-top: Intel Alderlake_n (Gen12) @ /dev/dri/card0 -    0/   0 MHz;  100% RC6;  0.00/ 3.53 W;     0 irqs/s

      ENGINES     BUSY                 MI_SEMA MI_WAIT
    Render/3D    0.00% |             |      0%      0%
      Blitter    0.00% |             |      0%      0%
        Video    0.00% |             |      0%      0%
 VideoEnhance    0.00% |             |      0%      0%
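(Note that 100% RC6 means the GPU was fully idle at the moment of sampling, so I may simply have captured this while no transcode was running.) Another check I could run, assuming libva-utils is installed, to confirm the iHD driver is loaded and exposes an H.264 encode entrypoint:
Code:
vainfo --display drm --device /dev/dri/renderD128
I'd expect to see VAProfileH264Main/High listed with an encode entrypoint (VAEntrypointEncSlice or EncSliceLP) if hardware encoding is actually available.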