Baku Hashimoto

Making-of: group_inou – EYE

Video
Below is a quick technical note.

How Was It Made?

The basic idea of this video is the combination of Google Street View sequences with stop-motion animation of group_inou (cp and imai). Building efficient methods for grabbing background panoramas from Street View and for shooting pictures of the members was the key to realizing the idea in a short period.

Here is a basic workflow.

  1. Location hunting: writing down good locations in a spreadsheet.
  2. Fetching proxy images from Street View and making a pre-visualization.
  3. Shooting.
  4. Replacing the proxies with high-res images and compositing the characters.

Below is the set of tools I built along the way, though a large part of the code is messy.

baku89/group_inou-EYE – GitHub

Hyperlapse.js

To get images from Street View, I forked Hyperlapse.js, an open-source library by Teehan+Lax.

I ended up barely using Hyperlapse.js itself, and mainly used a library called GSVPano.js, which Hyperlapse.js depends on. The original repository had been taken down for violating the Terms of Service, so I used the modified version bundled with Hyperlapse.js. According to the Teehan+Lax blog, they rewrote the part of the code that downloads the panorama tile images to use official API methods so it would not conflict with the ToS.

Fetching Panoramas

Hyperlapse.js automatically searches a route between two points with the Google Maps API, subdivides the route into an array of points, and finds the corresponding positions in Street View. This means Hyperlapse.js cannot reach out-of-the-way places, the insides of buildings, or elevated railways, so instead of a route searcher I needed a tool that exports an array of contiguous panorama IDs.
So I made this:

Pano ID Extractor
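
The core trick can be sketched with the official Maps API (a simplified version; the actual extractor is the tool linked above). Street View's metadata lists the panoramas adjacent to each pano, so starting from one ID we can hop through the links and collect a contiguous chain.

// Sketch, assuming the Google Maps JavaScript API is loaded on the page.
const service = new google.maps.StreetViewService();

function collectPanoIds(startId, count, ids = []) {
  service.getPanorama({ pano: startId }, (data, status) => {
    if (status !== google.maps.StreetViewStatus.OK) return;
    ids.push(data.location.pano);
    if (ids.length >= count || data.links.length === 0) {
      console.log(JSON.stringify(ids)); // the JSON array fed to GSVPano.js
      return;
    }
    // Naive choice: always follow the first link. A real tool needs a way
    // to pick the direction, e.g. by comparing link headings.
    collectPanoIds(data.links[0].pano, count, ids);
  });
}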

GSVPano.js loads the JSON array of panorama IDs and downloads each panorama image.
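
Roughly, the download loop looks like this (a minimal sketch: composePanorama() and onPanoramaLoad are taken from the GSVPano.js builds I have seen, so check the fork bundled with Hyperlapse.js):

// Feed the extracted IDs to GSVPano.js one by one.
const panoIds = require('./panoIds.json'); // output of the Pano ID extractor

const loader = new GSVPANO.PanoLoader({ zoom: 2 }); // low zoom = proxy resolution

let index = 0;
loader.onPanoramaLoad = function () {
  savePanorama(this.canvas, panoIds[index]); // defined in the next sketch
  if (++index < panoIds.length) loader.composePanorama(panoIds[index]);
};
loader.composePanorama(panoIds[index]);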

toDataURL() converts the panorama drawn on the canvas into a base64 string, and Node.js's fs.writeFile() saves it locally. I used NW.js so that Node.js functions could be called from the browser.
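
In NW.js, require() works right in the page context, so the saving step can be sketched like this (the file naming is arbitrary):

const fs = require('fs');

function savePanorama(canvas, panoId) {
  // Drop the "data:image/png;base64," header and decode the rest.
  const base64 = canvas.toDataURL('image/png').split(',')[1];
  fs.writeFile(panoId + '.png', Buffer.from(base64, 'base64'), (err) => {
    if (err) throw err;
  });
}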

Embedding Data in Panorama Images

Below are the edited panorama images.

The red and green dots along the bottom edge of this video are the saved panorama data:

{
    "id": "WysqsRo6h5YGNmkkXVa9ZA",
    "rotation": 35.11,
    "pitch": 1.09,
    "latLng": {
        "A": 51.59035,
        "F": 3.677251999999953
    },
    "date": "2009-09",
    "heading": 0
}

This JSON is converted into a byte array, and each bit is written along the bottom of the video (corresponding code).
The reason for going through this process was to swap in high-res images later. The panorama used as the sky texture for a 1080p render needs to be at least 10K px wide, but it could have caused some trouble if I kept downloading that many large images, so at this stage I fetched good-looking 1600px-wide proxies. I built a proxy-replacing tool for that.
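
The encoding side of the idea can be sketched like this (a simplified version; the actual implementation is the code linked above):

// Serialize the metadata to bytes, then draw one 1px dot per bit along the
// bottom edge of the canvas, green for 1 and red for 0, so the
// proxy-replacing tool can later read the panorama ID back out of the frame.
function embedMetadata(ctx, meta, canvasHeight) {
  const bytes = new TextEncoder().encode(JSON.stringify(meta));
  let x = 0;
  for (const byte of bytes) {
    for (let bit = 7; bit >= 0; bit--) {
      ctx.fillStyle = (byte >> bit) & 1 ? '#00ff00' : '#ff0000';
      ctx.fillRect(x++, canvasHeight - 1, 1, 1);
    }
  }
}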

Making the Pre-visualization

A detailed previz was required at the pre-production stage because the shoot would have been very tough otherwise. I brought the edited panorama images into C4D and added the camera motion. Fortunately, low-poly models of both characters already existed from another project, so it was easy to set up the members' motion.

Shooting

An EOS 5D Mark II was used for the shoot. To adjust lighting and perspective realistically for each background, shooting a set of generic poses and reusing them repeatedly would not have been good enough, so I decided to shoot every frame in order from beginning to end. (Daring to take the hard way seemed to give much better results.)

It turned out to be around 3,000 frames to shoot, so strong automation was required; I built a shooting system with NW.js.

Controlling Lights with DMX

DMX, a protocol generally used for controlling stage lights, was used to control the lighting. An ENTTEC DMX USB PRO was used to send the DMX signals. Installing its driver on versions of macOS after Mavericks was tough because of some known bugs; this is a support article.

DMX is basically a protocol that sends light-intensity values over XLR, so stage lights usually have an XLR input for receiving DMX, but there is no such thing on photographic lights. The alternative was an ELATION DP-415 dimmer pack, which controls the supply voltage directly.
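
For reference, sending one DMX frame through the ENTTEC widget from Node.js can be sketched like this (assuming the serialport npm package and a made-up device path; the framing follows ENTTEC's documented "Output Only Send DMX" request, label 6):

const { SerialPort } = require('serialport');

const port = new SerialPort({ path: '/dev/tty.usbserial-EN000000', baudRate: 57600 });

// levels: array of channel values (0-255), channel 1 first
function sendDMX(levels) {
  const data = Buffer.alloc(513); // DMX start code (0) + a 512-channel universe
  Buffer.from(levels).copy(data, 1);
  const msg = Buffer.concat([
    Buffer.from([0x7e, 0x06, data.length & 0xff, data.length >> 8]), // SOM, label, length LSB/MSB
    data,
    Buffer.from([0xe7]), // EOM
  ]);
  port.write(msg);
}

// e.g. drive the first light plugged into the DP-415 at full
sendDMX([255]);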

Testing with some lights at the office.

Calculating Camera Positions and Adjusting the Characters' Directions

I set up Xpresso in the C4D project to calculate the camera position. Clock-face directions were used to tell the members which way to face: the distances from the members were written on the floor, a scale was written on the tripod feet to make the angle more accurate, and plates marked with each clock time were placed around them. A video guide was shown above them so they could see which direction to turn next.

DIY Shutter Release and Electric Follow-Focus with Arduino

Since the camera was positioned high up, I built PC control of the shutter and the zoom ring with an Arduino.

(By the way, after everything was done I found out that there are an official Canon SDK and an addon for openFrameworks.)

I used a stepper motor for the follow-focus. It was a little risky because the rig could not detect the end of the zoom ring's rotation, but there was no time to fix that problem. It was ugly, but it worked fine.
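
The PC side of the rig can be sketched like this (the serial commands here are hypothetical and would need a matching Arduino sketch; the device path is also made up):

const { SerialPort } = require('serialport');

const arduino = new SerialPort({ path: '/dev/tty.usbmodem1411', baudRate: 9600 });

// Fire the DIY shutter release.
function releaseShutter() {
  arduino.write('S\n');
}

// Turn the follow-focus stepper by a relative number of steps. With no
// end-stop sensing, the caller has to stay within the zoom ring's physical
// range (the risk mentioned above).
function moveFocus(steps) {
  arduino.write('F' + steps + '\n');
}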

C4D with Python Libraries

External Python libraries work fine inside C4D, which means OSC and DMX can also be handled from C4D with the relevant libraries. I used python-osc to synchronize timecode between the Node.js shooting system and C4D, and pySimpleDMX to control the lighting.

e.g. sending OSC with a Python tag
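
And the Node.js side of the timecode sync might look roughly like this (a sketch assuming the node-osc package; the address and port are made up):

const { Client } = require('node-osc');

const c4d = new Client('127.0.0.1', 9000); // C4D listening via python-osc

// Tell C4D which frame the shooting system is on.
function sendFrame(frame) {
  c4d.send('/timecode/frame', frame);
}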

Use of VDMX

A large video guide was projected using VDMX. This helped the members picture their motion.

Actual Shooting

The system ended up as shown above; the pink arrows indicate OSC. With it, all 3,500 frames were shot in two days.


Test shooting

Post Production

All the sources were edited in After Effects. The backgrounds were replaced with their high-res versions. Because the motion was so fast, the Warp Stabilizer effect did not work well, so most of the frames were adjusted by hand. Positioning the characters was done frame by frame, which almost killed us.


I don't think I will make another video with the same method, and I won't be taking on more Street View work, so if you have any questions, feel free to mention @_baku89.

Special thanks to nolie for translating.


Feb 4, 2016