The current versions are available in the store.
It is a little-known fact that Rotobot was trained in sRGB, the colour space of the Internet, which applies a gamma of roughly 1.8–2.2 to light values. This means that if your footage is not in sRGB, it will not be detected as well as footage that is. So how do you convert it?
Using an OCIO Colorspace Transform node, you can convert from an "in" colour space of linear to an "out" colour space of sRGB.
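The OCIO node handles this conversion for you, but the heart of it is the standard per-channel sRGB transfer curve. The sketch below shows that standard encoding function in plain Python for illustration only; it is not Kognat's actual pipeline, and a real composite would do this through OCIO on full images:

```python
def linear_to_srgb(x: float) -> float:
    """Encode a linear-light value (0.0-1.0) with the sRGB transfer curve.

    Standard piecewise sRGB encoding: a linear toe for very dark values,
    and a 1/2.4 power curve (roughly "gamma 2.2") above the threshold.
    """
    if x <= 0.0031308:
        return 12.92 * x
    return 1.055 * (x ** (1.0 / 2.4)) - 0.055


# Example: 18% grey in linear light encodes to roughly 0.46 in sRGB,
# which is why linear footage looks dark until it is converted.
print(round(linear_to_srgb(0.18), 3))
```

This is why footage left in linear reads as too dark and low-contrast to a network trained on sRGB images.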
Kognat has had feedback that Rotobot's processing times are long compared to a typical compositing or colour grading process, which is far more interactive. To this end we have put up a video tutorial on processing footage with Rotobot in Natron, an open-source OpenFX host that is available for download from the internet. To run the software you will need to grant trust to the developers, as it is not certified by the operating system. The demonstration is on macOS, but if needed I can repeat it on other operating systems. It makes use of the command-line interface called the Terminal on macOS, more generally known as the command prompt or the shell.
This is a simple process. In large visual effects facilities this batch process would be divided among many machines, where each machine processes one frame of footage, reports that the frame is complete, and asks for another frame or batch of frames to process.
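The farm-style dispatch described above can be sketched in a few lines of Python. The frame range and chunk size here are made up for illustration; a real facility would use a render-farm queue manager rather than this in-process loop:

```python
def split_frames(first: int, last: int, chunk: int):
    """Split an inclusive frame range into batches of at most `chunk` frames."""
    batches = []
    start = first
    while start <= last:
        end = min(start + chunk - 1, last)
        batches.append((start, end))
        start = end + 1
    return batches


# Each worker machine would take the next batch, render it, report completion,
# and ask for another. Here we just print the batches a 1-100 shot yields.
for start, end in split_frames(1, 100, 25):
    print(f"frames {start}-{end}")
```

The same splitting logic works whether the batches are handed to separate machines or simply run one after another on a single workstation overnight.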
Using a free, open-source OpenFX host means you can avoid tying up licence costs while running long computational cycles.
Full house at Lot 14 on North Terrace in Adelaide. A great opportunity to collaborate with other AI researchers, developers, and business people. Looking forward to hearing the presentations. Happy Launch Day, Kognat!
Gamurs AI using computer vision and AI to analyse gameplay footage to improve the performance of esports teams.
Detecting subterranean vapour and liquid on Mars using ML on remote imaging and how it relates to wind patterns.
Great demo on containers and queues for non-HPC-based training by Adam from IBM.
Frontier microscopy using the "Marvin" robotic microscope and AI to detect asbestos in microscope samples of air filters to determine health risks. Well worth automating.
Great panel discussion.
We have been training with semantic segmentation rather than instance segmentation, meaning that all the "people" appear in one layer rather than each person being split into their own layer.
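The distinction can be illustrated with a toy mask. The label values and list comprehensions below are purely illustrative, not Rotobot's internal representation; they just show how per-instance labels collapse into a single semantic layer:

```python
# A tiny 1-D "image" where 0 is background and 1, 2 mark two separate people.
instance_labels = [0, 1, 1, 0, 2, 2, 2, 0]

# Instance segmentation would give one matte per person:
person_1 = [1 if v == 1 else 0 for v in instance_labels]
person_2 = [1 if v == 2 else 0 for v in instance_labels]

# Semantic segmentation merges every person into one "people" layer:
people = [1 if v > 0 else 0 for v in instance_labels]

print(people)  # both people appear in the same matte
```

Splitting that single layer back into per-person mattes afterwards is the harder instance-segmentation problem.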
The upside is that it is near pixel-accurate on a 1080p canvas.
The downside is that it takes a couple of minutes per frame to calculate. It can be GPU-accelerated given enough GPU memory, and we know the requirement is more than a 4 GB card can provide.
We hope to make this available to the public within a fortnight.
Above is a screenshot of the script in Foundry's Nuke. The deep purple node is carrying the neural network's load; the other nodes are colouring the background green.
I hope you enjoy some holiday footage from 2012.
EDIT: 16th November 2018
Here are some more samples.