OK, so I had a quick play. It's an interesting idea. Their demo image looks worse after it's gone through the process, and I think the background would need some touching up to make it look nicer. My suspicion is that you'll get the best results from a small amount of blur rather than a huge amount.
So, here are a couple of shots taken at a track day, which have a fair amount of motion blur; the corrected image is on the right. For the car images, I did try the advanced settings to see if anything could be pulled back.


Click the images to see the full-fat, 2048px-wide versions.
So, we clearly aren’t going to get a high-fidelity image from a very blurry one. But what we can do, it seems, is rescue those pictures which have minor blur damage:



These, although rather contrasty (something that can be fixed in Photoshop or similar), are improvements on the originals. In fact, I think these shots, which were processed using the default settings, look a little over-sharp and may need the deblurring settings tweaked to make them less harsh. For this kind of blur, though, the recovery is pretty good.
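I haven’t dug into what this tool is doing under the hood, but software like this generally works by deconvolution: estimate the camera’s motion path as a point spread function (PSF), then invert it. Here’s a rough sketch of that idea using scikit-image’s Richardson–Lucy deconvolution (the filename, streak length and iteration count are all made up for illustration); winding the iteration count down is roughly the knob that trades sharpness for harshness.

```python
import numpy as np
from skimage import img_as_float, io, restoration

def motion_psf(length):
    """Crude stand-in for a real motion-blur kernel: a straight horizontal streak."""
    psf = np.zeros((length, length))
    psf[length // 2, :] = 1.0
    return psf / psf.sum()

# Work in floats, one colour channel at a time.
img = img_as_float(io.imread("trackday.jpg"))   # hypothetical test shot
psf = motion_psf(9)                             # ~9px of shake: "minor blur damage"

# Fewer iterations give a softer result; more give a sharper but harsher one,
# with the halo/ringing artefacts that make the default output look over-sharp.
deblurred = np.dstack([
    restoration.richardson_lucy(img[..., c], psf, 20)
    for c in range(img.shape[-1])
])

io.imsave("trackday_deblurred.jpg",
          (np.clip(deblurred, 0, 1) * 255).astype(np.uint8))
```

The hard part, and presumably what this tool is automating, is estimating the PSF from the image itself (blind deconvolution); get the kernel wrong and you end up with exactly the mushy, ringing results in the car shots above.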
All of these images are reduced copies (1024px along the longest side) of the full-size originals, which are about 10MP. It is possible that the full-sized images would fare better, simply because the blur sample can be placed more precisely and there is just more data for the algorithm to play with.
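If that’s right, the arithmetic is simple enough: resizing a roughly 10MP frame down to 1024px along the long edge shrinks the blur streak by the same factor, so a kernel the algorithm could have measured over 9px or so in the original gets squashed into a couple of pixels in the reduced copy. A toy illustration (all the numbers here are assumed):

```python
import numpy as np
from skimage.transform import rescale

full_width = 3888                  # assumed long edge of a ~10MP, 3:2 frame
scale = 1024 / full_width          # the reduction applied to these test images

streak_px = 9                      # assumed camera-shake streak in the original
print(f"streak in the 1024px copy: {streak_px * scale:.1f}px")   # ~2.4px

# The blur kernel shrinks by the same factor, taking its detail with it.
psf = np.zeros((streak_px, streak_px))
psf[streak_px // 2, :] = 1.0
psf_small = rescale(psf / psf.sum(), scale, anti_aliasing=True)
print("kernel:", psf.shape, "->", psf_small.shape)
```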