Developers needed to improve FFmpeg (and sView) with support for Frame Interleaved video
We have talked before about the need to return to existing high-quality standards instead of using makeshift solutions to store and play 3D videos. King tablet owners know that sView opens the frame-interleaved videos of their 3D Blu-rays, but instead of delivering each frame to its corresponding eye, the frames are shown to both eyes, producing a strange effect.
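To make the problem concrete, here is a toy model (not real player code) of frame-interleaved 3D video: decoded frames alternate left-eye, right-eye, and the player must route even frames to one eye and odd frames to the other. The frame labels below are hypothetical stand-ins for decoded pictures.

```python
# Frame-interleaved 3D: decoded frames alternate L, R, L, R, ...
# A correct player splits them into two per-eye views; showing
# every frame to both eyes (as described above) breaks the stereo effect.
frames = ["L0", "R0", "L1", "R1", "L2", "R2"]  # hypothetical labels

left_eye = frames[0::2]   # even-indexed frames -> left view
right_eye = frames[1::2]  # odd-indexed frames -> right view

print(left_eye)   # ['L0', 'L1', 'L2']
print(right_eye)  # ['R0', 'R1', 'R2']
```

The slicing stands in for what a 3D-aware player would do after decoding each picture.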
This is because sView relies on the open-source FFmpeg project, and support for frame-interleaved video was never finished there. So I contacted the sView developer to find out what an experienced developer could do to update FFmpeg with frame-interleaved support and help not only sView but any other 3D app that wants to easily support frame-interleaved video. It is also worth keeping in mind that sView is multiplatform, so the update would benefit a wide range of devices.
Here is the explanation of what a developer could do to contribute to the FFmpeg project and, by extension, to 3D apps:
The basic idea is to allow straightforward decoding of MVC streams through the FFmpeg API (i.e., something that could be tested without sView). That is:

- FFmpeg should detect multiple video streams. FFmpeg already reports multiple streams in such files, so there is probably no issue here (apart from detecting that secondary streams depend on another one).
- FFmpeg should be able to decode secondary video streams. Currently this doesn't work: attempting to decode such streams in the normal way leads to decoding errors.
- As a simple test, ffmpeg (the command-line tool) should be able to export specified streams into dedicated video files that don't use MVC.
- As another test, ffmpeg/ffplay should be able to export/display the stereoscopic pair from an MVC file using the Anaglyph or a similar filter (which FFmpeg already supports).

Of course, this covers decoding of unencrypted BD3D files only, as digital protection is a different matter (sView doesn't handle it).

The fundamental issue with MVC streams is that (as far as I know) FFmpeg has no concept of one stream depending on another stream. Ideally, MVC might allow reducing decoding effort by reusing some intermediate decoding results of the main stream in the secondary streams, but implementing this might require considerable effort in FFmpeg for unclear performance benefits. A more straightforward approach would be to simply duplicate the packets of the main stream into the dependent streams (so that they are processed twice, once for each decoded stream), but this might also require changes to some of FFmpeg's basic logic.

There have been some efforts at MVC support outside the main FFmpeg project; I'm not familiar with these works or with why they haven't been finished and contributed upstream.

The FFmpeg project provides profiles of several experienced developers who are available for development work or consulting: http://ffmpeg.org/consulting.html
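The packet-duplication approach mentioned above can be sketched as a toy model. This is not FFmpeg code: the stream indices, dependency table, and packet labels are all hypothetical, and the point is only to show the routing logic a demuxer-side layer would need when a dependent (MVC) stream cannot be decoded without the base stream's packets.

```python
# Toy model of the packet-duplication idea: base-stream packets are
# fed to the dependent stream's decoder as well, so each decoder
# receives everything it needs. All names below are assumptions.
from collections import defaultdict

BASE, DEPENDENT = 0, 1          # hypothetical stream indices
DEPENDS_ON = {DEPENDENT: BASE}  # dependent stream needs base packets


def route_packets(packets):
    """Return per-decoder packet queues, duplicating base-stream
    packets into every decoder that depends on that stream."""
    queues = defaultdict(list)
    for stream, data in packets:
        queues[stream].append(data)
        # Duplicate this packet into any decoder that depends on it.
        for dep, base in DEPENDS_ON.items():
            if base == stream:
                queues[dep].append(data)
    return queues


# Interleaved demuxer output: base and dependent packets alternate.
demux = [(BASE, "b0"), (DEPENDENT, "d0"), (BASE, "b1"), (DEPENDENT, "d1")]
q = route_packets(demux)
print(q[BASE])       # ['b0', 'b1']
print(q[DEPENDENT])  # ['b0', 'd0', 'b1', 'd1']
```

As the explanation notes, the cost is that base packets are processed twice (once per decoder); the alternative of sharing intermediate decoding results would avoid that but would cut much deeper into FFmpeg's internals.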