For creators, brand marketers, and business owners, producing content is only half the journey. To ensure that the videos serve their intended purpose, video-makers must also consider several other factors—testing, targeting, scheduling, and distribution, among other things.
One particularly important step that smaller creators and businesses usually skip is testing, largely because traditional market research is expensive, time-consuming, and limited in reach.
In this article, we’ll demonstrate how Aifilia’s technology can give anyone working with video content an efficient, proven means to test their creations. If you’re new here and unfamiliar with Aifilia, we suggest first taking a quick look at this infographic about our AI technology.
Sirui Test Footage vs. Cinematography Reel 2020
Both created by DreamDuo Films, Sirui 35mm Anamorphic Lens TEST FOOTAGE (pure cinematography) and Cinematography Reel 2020 share several similarities: long takes, aerial shots combined with slow movements, and sullen scenes blended wonderfully with vibrant colors. These are the hallmarks of a seasoned filmmaker.
As you can see, however, while 2020 is a collection of clips taken in various locations over the course of a year, Sirui Test Footage was filmed in a single city within a couple of hours.
Biometric-trained engagement predictions
Using sophisticated biometric analysis and computer vision, Aifilia’s AI is trained to predict human engagement with visual content. The platform produces an engagement prediction for each piece of content uploaded to it. Below is the engagement summary data for each video.
Peak Count by Section
There are parallels in the number of high-engagement frames in each video. In our analyses, “high engagement” means falling in the top 20% of engagement values. In all but one section, both videos show engagement peaks. The relatively uniform distribution of engagement peaks is a signature of both these videos. This is not often the case; we have seen many examples where peaks are not balanced across a video.
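To make the metric concrete, here is a minimal sketch of how a top-20% peak count per section could be computed. This is an illustration only, not Aifilia’s actual pipeline; the function name and the per-frame engagement series are hypothetical.

```python
# Illustrative sketch: count "high engagement" frames (top 20% of the
# engagement values) within each section of a video.
# Hypothetical helper; not Aifilia's actual implementation.

def peak_counts_by_section(engagement, n_sections=4):
    """Count frames at or above the 80th-percentile engagement value,
    split across n_sections equal sections of the video."""
    sorted_vals = sorted(engagement)
    # Values at or above this threshold are in the top 20%.
    threshold = sorted_vals[int(0.8 * (len(sorted_vals) - 1))]
    section_len = len(engagement) / n_sections
    counts = [0] * n_sections
    for i, value in enumerate(engagement):
        if value >= threshold:
            section = min(int(i / section_len), n_sections - 1)
            counts[section] += 1
    return counts
```

With a series whose highest values all sit in the final quarter, every counted peak lands in the last section; a balanced distribution like the one described above would instead spread the counts evenly.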
Distribution of Interpeak Intervals
The Distribution of Interpeak Intervals shows how peaks are spread throughout the video. In both videos, the vast majority of intervals fall at 5.0 seconds or below. This is because most peaks occur in clusters with little or no separation, followed by relatively long waits until the next cluster. In Sirui, there are two sections where viewers wait 20 seconds or more between highly engaging content.
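The interval metric itself is simple to illustrate: the gap between each pair of consecutive peak timestamps. The sketch below is an assumption about how such a distribution could be derived (hypothetical helper and timestamps; not Aifilia’s implementation), showing how clustered peaks yield many short intervals plus a few long waits.

```python
# Illustrative sketch: interpeak intervals from peak timestamps (seconds).
# Hypothetical data; not Aifilia's actual pipeline.

def interpeak_intervals(peak_times):
    """Return the time gaps between consecutive engagement peaks."""
    times = sorted(peak_times)
    return [later - earlier for earlier, later in zip(times, times[1:])]

# Two clusters of peaks separated by long waits:
gaps = interpeak_intervals([1.0, 1.5, 2.0, 25.0, 25.5, 50.0])
short_gaps = [g for g in gaps if g <= 5.0]   # within-cluster intervals
long_waits = [g for g in gaps if g >= 20.0]  # waits between clusters
```

Here the within-cluster gaps of 0.5 seconds dominate the distribution, while the two waits of 20+ seconds mirror the pattern we observed in Sirui.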
Average Engagement by Section
Average engagement, however, is consistently lower for 2020, whose highest section value tops out at 1.0, than for Sirui, where every section posts a value above 1.0. While we can’t say for sure, this difference may stem from the lively, urban setting featured in the Sirui video.
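The per-section averages behind this comparison can be sketched in a few lines. Again, this is an assumption about the shape of the computation, with hypothetical names and data, not Aifilia’s actual method:

```python
# Illustrative sketch: average engagement per section of a video.
# Hypothetical helper; not Aifilia's actual implementation.

def average_by_section(engagement, n_sections=4):
    """Split the engagement series into equal sections and average each."""
    section_len = len(engagement) / n_sections
    averages = []
    for s in range(n_sections):
        chunk = engagement[int(s * section_len):int((s + 1) * section_len)]
        averages.append(sum(chunk) / len(chunk))
    return averages
```

A comparison like the one above then reduces to checking whether every section’s average clears a reference value, e.g. `all(a > 1.0 for a in average_by_section(series))`.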
Wrap-up
While the similarities revealed in the analysis may not be the result of intentional design, they do show that the AI can produce insights that inform decisions about video content creation.
Comparing multiple videos is nothing new. However, with legacy methods like focus groups costing thousands of dollars and A/B tests taking weeks to bear results, they aren’t a viable option for most people.
Human-oriented AI not only beats these methods on both cost and time; it also allows for a deeper, more objective analysis. Because the AI was trained on human biometrics, its predictions are not muddied by the ‘noise’ in people’s self-reports or subjective ratings of content.
Whether it’s against a competitor’s video, a different version, or one that has worked before, comparing engagement analyses across multiple videos can help you determine how to make full use of your content.
For questions or clarifications, feel free to email us at community@aifilia.com. We love hearing from existing and potential users.