Gray Hodgkinson

Associate Professor
ADM School of Art, Design and Media
NTU Singapore

Gray Hodgkinson is a digital media designer and researcher with a particular interest in visual research methods and computer animation. Gray has been developing and leading animation education for 17 years, 14 of those at Massey University, New Zealand, and now at the NTU School of Art, Design and Media. He has been instrumental in creating links between tertiary institutions and industry in New Zealand and internationally. Gray has presented papers on animation research and pedagogy in Melbourne, Germany, the U.K., Japan, Taiwan and South Korea. In recent work, Gray has been exploring the integration of 3D virtual reality into animation. Animation and virtual reality share a fundamental characteristic: both take place inside an artificially constructed world. This commonality provides a starting point for exploring how narrative and direction are affected when virtual reality is employed.

Lights, Virtual Camera, Action!

The recent commercialisation and affordability of virtual reality technology has enabled widespread exploration of new methods for creating visual experiences. In December 2016, Kert Gartner (TechCrunch 2016) tweeted his success with a virtual camera assembled from an HTC Vive VR headset, an iPhone, and three Vive hand controllers. This rig performs the same essential function as the virtual cameras developed by large studios such as Weta Digital and Pixar, where the camera operator or director films the artificial world and its animations through a hand-held screen, referred to as a virtual camera. Gartner’s prototype represents a democratisation of this technology, enabling smaller studios and individuals to adopt a similar approach: to film their animations in real time, using the same actions and behaviours as a real film camera.

Furthermore, if animators made use of game engines for production, the characters could effectively be acted by players while being filmed in real time by in-game camera operators. One option would be a multi-player game in which the players are choreographed to act out a pre-determined script. The in-game director plays a special character with super abilities, flying freely throughout the game world and recording the action in real time with full-screen video capture. Also known as digital puppetry, and used to create machinima, this technique relies on the animations embedded in the particular game’s assets. This is also a limitation, as those actions are designed for gameplay, not narrative-driven acting. Another variation on this approach, one that follows the production characteristics of animation more closely, is to use pre-animated character actions. While this ensures the pre-planned animations are retained, it removes the advantages of actor spontaneity.

Over recent years, game engines such as Unity and Unreal Engine have increased their animation authoring capabilities. The demonstration “A Boy and His Kite” (GDC 2015) positions Unreal Engine 4 as a viable animation authoring platform, with production quality that can appear as complex as conventional computer animation. Real-time rendering, together with the opportunities for interaction it opens up, including virtual cameras, presents new possibilities for animation production.
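To make the virtual-camera idea concrete, the following is a minimal sketch of the core loop such a rig depends on: each frame, the tracked pose of a hand-held controller is read and applied, lightly smoothed, to an in-engine camera, so the operator can film the virtual scene much as they would with a real camera. The Pose and VirtualCamera structures, the smoothing factor and the per-frame loop are all illustrative assumptions, not taken from Gartner’s prototype or any engine’s API.

# A minimal sketch, assuming hypothetical Pose/VirtualCamera structures;
# not taken from Gartner's prototype or any engine's API.
from dataclasses import dataclass, field

@dataclass
class Pose:
    position: tuple  # (x, y, z) in metres
    rotation: tuple  # Euler angles in degrees, kept simple for illustration

@dataclass
class VirtualCamera:
    pose: Pose = field(default_factory=lambda: Pose((0.0, 0.0, 0.0), (0.0, 0.0, 0.0)))

def lerp(a, b, t):
    # Linear blend between two tuples of equal length.
    return tuple(ai + (bi - ai) * t for ai, bi in zip(a, b))

def update_camera(camera: VirtualCamera, tracked: Pose, smoothing: float = 0.2) -> None:
    # Blend the camera toward the tracked controller pose each frame.
    # A small smoothing factor damps hand jitter while preserving the
    # hand-held feel; smoothing = 1.0 would follow the tracker exactly.
    camera.pose = Pose(
        position=lerp(camera.pose.position, tracked.position, smoothing),
        rotation=lerp(camera.pose.rotation, tracked.rotation, smoothing),
    )

# Example frame loop; real poses would come from the tracking runtime.
camera = VirtualCamera()
for frame_pose in [Pose((0.0, 1.6, 0.0), (0.0, 0.0, 0.0)),
                   Pose((0.1, 1.62, 0.05), (2.0, 5.0, 0.0))]:
    update_camera(camera, frame_pose)
    print(camera.pose)

In a production rig the pose would come from the headset’s tracking runtime, and rotation blending would use quaternions rather than the Euler-angle shortcut above.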
This presentation will discuss the implications of these opportunities and how they may affect authorship, acting, performance, pipeline and rendering. A test project will explore the possibilities while remaining cognisant of workload, technical demands, cost and practicality. The ultimate goal is to reveal processes that make producing animation more natural and less technical, and that shorten the distance between the animator’s vision and its realisation.
