The Case Against Reality

Last week, I left the screening of Mad Max early (a bit regretfully) to see a talk in SCI by cognitive scientist Donald Hoffman, who studies humans’ perceptions of reality. In his research, he has developed a radical hypothesis: that if humans’ perception is shaped by natural selection, it has a 0% chance of being objectively true and complete, as it has evolved for reproduction and survival and not for “accuracy.” He uses modern understandings of cognitive science, population simulations, and mathematical models to support this claim. According to his research, it has been adaptive for our species (and all other species) to perceive reality in a certain way.

Extrapolating this to physics with original research, he concludes that our understanding of the observable universe is limited by our perception; therefore, space-time is likely not fundamental and is better viewed as a data structure.

He draws the parallel between our perceived reality and a computer desktop interface – while the interface has meaning and we can interact with it, it is not fundamental. That is to say, the inner workings of the computer are obscured and our interaction with the computer is shaped by the interface.

I have included links below to check out, since my explanation may be hard to follow.

In addition to this just being interesting in and of itself, I’m also interested in its intersection with media studies and semiotics. We often study signs and media in relation to our perceived reality, but calling into question the fundamental nature of our perceived reality also seems to raise interesting questions regarding the role of media. Does media exist as part of our perceived reality, or is it best viewed simply as another layer on top of several layers of reality, only some of which are perceivable? What is shared between the ontology of media and the ontology of reality – should they be studied identically?

https://www.quantamagazine.org/20160421-the-evolutionary-argument-against-reality/

What Kinds of Films Really Need 3D?

“A 3D film can end up gaining 33% higher profit than 2D.”

For this reason, studios are making more and more 3D films. Some were shot in 3D, and some were converted to 3D in post.

When going to the theater, audiences normally get to choose between the 2D and 3D versions. But according to this news report, audiences in China did not have a choice when watching Jason Bourne (2016).

http://www.bbc.com/news/technology-37212239

“Out of 149 cinemas in Beijing, only eight are currently showing the 2D version, according to local media. In Shanghai, it’s said to be only nine out of 174.” As a result, Universal earned $11.8 million and set a new single-day record on the opening night in China.

As a film shot in 2D, Jason Bourne is clearly unsuited to 3D because of its shooting and editing style, especially compared to the two films we saw at the SCA IMAX theater. There are too many cuts, the shots are too close to their subjects, the audience’s eyes have to move too frequently to track the subject across cuts, and the handheld camerawork makes the experience even worse. When screened in 3D, the brain cannot adjust to these movements correctly, which is what led to Chinese audiences’ nausea during the film.

Born To Be Wild and Hubble, by contrast, kept their subjects at a nearly constant scale and used long takes and slow cuts, letting the audience sink into the impressive 3D effects.

To make a 3D film, technology is not the only aspect the filmmakers need to consider. They need to adjust other elements to make 3D work, such as editing, camera position and movement, and shot scale.

What’s interesting is that after complaints from Chinese audiences, Universal decided to add more 2D screenings. This shows that the market can determine what consumers get, and consumers can determine the market as well. It’s a two-way relationship.

There’s Nothing Sweeter Than The Sweet Sounds Of The Soundboard

Sound equals space. But we’ll get there… First, we have to start with The Grateful Dead. They were among the first bands to become notable for welcoming taping of their shows by the audience and the free trading of these tapes within the community, or jam band “scene,” that was already forming. This was distinct from other rock and roll bootlegs made famous by Bruce Springsteen or The Rolling Stones, as the practice was actually encouraged by the band, and because every show was designed to have a different setlist and different musical explorations in the “jam” portions of songs, making each and every individual concert something special. Taping was encouraged to preserve the memory of each show and to give fans who couldn’t make it to a show the opportunity to go back and listen to it in the future. The Grateful Dead and their approach to live performances (changing the setlist every night, performing more than one set of music with an intermission in between, playing multiple concerts at the same venue on consecutive days, etc.) have influenced a plethora of jam bands today that emulate this live performance setting and ritual. This also means that they allow taping of their concerts, and many, many bands today also tape soundboard recordings for sale to the public as an extra revenue stream, especially for concerts deemed more important or special, such as New Year’s runs or the final two or three shows of a particular touring season. The list of bands that have similar taping policies is a long one: Dead and Company, The Allman Brothers Band, Widespread Panic, Umphrey’s McGee, The String Cheese Incident, Dave Matthews Band, Blues Traveler, Gov’t Mule, and many more. But perhaps most importantly, the greatest living rock band: Phish. This project will focus on their live recordings, as they are the band I am most well versed in. And because they rock!

Basically, there are the audience recordings that I alluded to, and there are official recordings done by the band, which are much higher quality because they are plugged directly into the soundboard where the show is being mixed live. Audience recordings tend to highlight the experience and feeling of being at the show, while soundboard recordings highlight the actual instruments more. The quality of a SBD recording is therefore essentially the same as what you heard at the show live, or, if you weren’t there, as good as it would have sounded in person. The soundboard shows offered by the band (you can now stream any show since 2009, as we live in the future) are recorded from the same soundboard but transmitted to a separate digital audio tape that does include all of the “air” in a show. That is to say, many of the songs feature the crowd applauding or cheering at the end, and if the band took two minutes in between songs to discuss where the set may go, that is included as well. So you’re not even missing anything by choosing SBD over AUD.

While there are camps of people who enjoy both types of recordings, it is hard to deny that having the band release FLAC or mp3 files straight from the soundboard is supreme. There is no better way to hear the full scope of all the instruments, mixed correctly, than a soundboard recording. However, the main argument against these recordings is losing the experience of feeling like you are present at the actual show, which looks something like this. This is a typical view of the audience at an outdoor Phish show. This is the environment that AUDs aim to capture. Note the tall, skinny microphones at the beginning of the video. There are lots of them! Those belong to tapers, recording the show as an AUD for themselves.

With an audience recording, you will almost certainly lose some sound quality, but you also gain the accompanying sounds of a crowd reacting to everything the band is doing, which, of course, is completely unpredictable. Anything can happen on any night. Even though the band releases every single show they play as a soundboard recording, taping amongst the audience is still allowed and encouraged, and there is a strong subgroup of the overall fan base who exclusively listen to audience recordings even today. It should also be noted, however, that trading audience recordings in today’s internet climate is unbelievably easy and FREE, especially on apps like ReListen. Meanwhile, the band charges $9.99 for most show downloads, the only exception being if you had a physical ticket to the show, which includes a free download code every time.

Here’s an example of an audience recording of a show from the summer of 2016 in San Francisco. Skip around to any part of the recording and you’ll immediately notice the quality difference between this and your favorite studio-recorded song. However, though the sound may not be loud or crisp enough for true audiophiles, it is a fairly faithful representation of what really went down onstage and in the audience.

Now here’s where it gets interesting. This is the soundboard recording of the same show from summer of last year in San Francisco, and clearly, what you lose in atmosphere, you easily make up for with the power of crisp sound.

Soundboard recordings are always high fidelity, a must for audiophiles and Phishphiles alike, meaning the soundboard provides a flat frequency response within the intended frequency range of the speakers for the arena or amphitheater. That is a fancy way of saying that the sound engineers for the band have found a delicate balance where they can reproduce the exact input signal (the band’s instruments, which are mic’d on stage) with no distortion whatsoever, or at least a level of distortion so insignificant that it is not perceptible by humans. Now, the people who argue for audience recordings because they give you the feeling of being in the actual physical space with the band are forgetting one important factor: we have stereophonic sound for a reason! Stereophonic sound, more commonly referred to as simply stereo, purposely creates a sense of spatial realism, as stereo consists of two independent audio signal channels designed to replicate the aural perspective of instruments on a stage! When listening to a soundboard recording, the listener can comprehend how the engineers mixed the show live, bringing up the bass for certain parts or taming the higher frequencies of Trey Anastasio’s guitar in others. They were catering it to your ears! You have two ears, so you should be treated to two independent audio signal channels. Soundboards these days have many, many inputs and channels to help provide the best possible sound, and they can be absolutely humongous, like this one.
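If you’re curious what “flat” actually means there, it’s measurable. Below is a minimal Python sketch of the idea (illustrative only: the function name, the 40 Hz to 16 kHz band, and the premise of starting from a measured impulse response are my assumptions, not anything from Phish’s actual front-of-house workflow). Given an impulse response measured through the PA and room, it reports how far the magnitude response swings across the band; a spread near 0 dB is what engineers mean by “flat.”

```python
import numpy as np

def flatness_db(impulse_response, sample_rate, band=(40, 16000)):
    """Spread (max minus min, in dB) of a system's magnitude response
    across a frequency band, computed from a measured impulse response.
    A spread near 0 dB means the system is effectively "flat"."""
    spectrum = np.abs(np.fft.rfft(impulse_response))
    freqs = np.fft.rfftfreq(len(impulse_response), d=1.0 / sample_rate)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    mag_db = 20 * np.log10(spectrum[in_band] + 1e-12)
    return mag_db.max() - mag_db.min()

# Sanity check: a perfect "wire" (a unit impulse) passes every frequency
# equally, so its spread should be ~0 dB.
ideal = np.zeros(4096)
ideal[0] = 1.0
print(flatness_db(ideal, sample_rate=48000))  # ~0.0
```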

Phish now professionally videotapes every show, and sells most of them as webcasts so that fans who cannot make it to the concert can watch from home. This is called “couch touring,” and the same arguments that apply to SBD vs. AUD come back into play when video is involved, as the sound is still the main point. These are concerts, after all. Here’s a video shot professionally by Phish. Some people love this because they get to see the band up close in high definition, while the purists contend that this is not how you would be able to see the band at a show in person. They don’t like that you can’t see the whole band and what everyone is doing at the same time. The multiple cameras and angles also don’t allow those viewers to get the same feel for the venue, or for the overall experience. Remember, these are generally fans who have seen anywhere from 50 to 200 Phish concerts – so they know what they are looking for!

For comparison, here is an example of a video that was not professionally shot by Phish, but rather by a fan in the taping section of the audience. You’ll notice that the sound, captured from the nearest speakers through a less-than-stellar 1990s camera microphone, is not ideal. It is quite tinny, and it loses the oomph of the bass guitar, which is incredibly important for a song like this one, called “Cavern,” which relies heavily on the strength of the bass and drums.

As we know, sound absolutely changes the feeling of a particular space, and in that regard, I completely understand the fondness for audience recordings. Some people just like hearing the show within the actual dimensions of the venue, with people becoming even further split on the issue of indoor vs. outdoor shows. Phish has been known to be quite comfortable playing outdoor shows at amphitheaters, but I prefer to see them inside, as sound is essentially the compression of air, and where is there more air than you know what to do with? Outside. The sound can travel and escape from you, quickly, whereas in a venue like Madison Square Garden, the sound has nowhere to go but right into your ears!

So where is the future of this debate heading? Simple (and that’s not just the name of a Phish song!). With the direction in which virtual reality is already trending, we should reach a point in the not-so-distant future in which Phish will set up multiple VR-capable cameras in the middle of the audience, most likely near the soundboard. They would transmit what the cameras and soundboard are picking up directly to your VR headset at home, where you could immerse yourself in the concert and truly feel like you were standing there with everybody, able to look in every direction and hear every note clearly. Talk about taking couch touring to the next level. One of my favorite TV shows, Portlandia, parodied this very eventuality perfectly. Of course, because it is parody, everything goes wrong for them during their viewing experience… But I truly believe that this is the world we are headed toward. Until then, I suppose I am just going to have to rely on my trusty soundboard recordings to experience my favorite jam band.

I leave you with a trailer for an actual 3D movie that Phish shot in 2009. They released the film as a limited engagement in theaters all over the country. And it gives you the perfect look at a concert you just have to experience.

 


Video Killed the Cinema Star

by Aaron Port

Widely discussed is the changing state of cinema amid the rise of digital imaging technology and the decline of analog film, in addition to the development of digital distribution platforms. Despite these revolutionary transitions in technology, commercial narrative cinema still largely operates within the cinematic form inherited from analog film and its technological limitations, and refrains from exploring the unique creative potential afforded by the digital video medium; one strong example of this continuity can be seen in Rogue One: A Star Wars Story (2016) in relation to Star Wars (1977).

“Digital cinema” is difficult to quantify, as it refers to a collection of technologies including digital image capture, computer synthesis, computer post-production, and digital distribution (Rodowick, 94). These technologies contrast with counterpart analog film technologies that are generally falling out of industry favor, although digital cinema has directly borrowed visual aesthetics from film. Recent advancements in digital image technology have achieved greater automation, resolution, color depth, and dynamic range in the pursuit of “photographic realism.” While this benchmark is arbitrary and largely culturally conditioned, the “realism” of analog film has remained largely unchanged (Rodowick, 99). Thus, developments in digital video and effects technology advance “photographic realism” to better fulfill the aesthetic needs of cinema, and the image quality of the digital image is gauged against that of film stock due to the subjective cultural value of film.

Although digital cinema can be seen as a formal continuation of analog cinema, the fundamentals behind digital cinema technology are far from new. It is a direct descendant of electronic video technologies dating back to the early days of television, and as such, is a technological continuation of video. This is important because, as Roy Armes states, video is a unique medium with a specific historical context and aesthetics (Armes, 9). This should not be disregarded in contextualizing modern digital cinema, as film and digital video are two distinct mediums despite their apparent similarities. At their core, digital video is electronic and film is physical.

Film requires considerably more conscientious involvement to work with than video, and its form has historically reflected this. Film takes time, expertise, and money to load, remove, and develop. The mechanical limitations of the physical medium have also historically led to large camera sizes, which in turn required sturdy, cumbersome, and expensive equipment on which to place the camera. Post-production on film also requires more expertise, precision, and time than video post-production. All of these factors reduce analog film’s capacity for immediacy. As a result, commercial narrative film has developed an aesthetic of constructed stability and continuity.

According to Armes, video has qualities unique unto itself and not shared with film. These can be summarized as immediacy and contrivance. Its immediacy is a product of its technology: the low cost of recording, the ease of instant replay and post-production, small camera size, synchronized sound capture, and the common use of the zoom lens. Similarly, its contrivance is a product of the potential for image manipulation with digital technology. The confluence of these two factors lends video to being a more “personal” medium than film, as the entire process can be within the control of one individual, while the use of analog film technology inherently requires multiple individuals working simultaneously (Armes, 187).

Another pair of historical qualities unique to video is intended screen size and mode of address. While analog film is a medium almost ubiquitously created for large-screen projection to an audience, video has historically been used for small home television screens. Due to this and the nature of the distributed content, the visual system in use by commercial film has generally denied the camera’s role in capturing the action, while television has a well-established history of acknowledging the camera’s role (Armes, 193). Commercial cinema typically addresses the audience indirectly while television addresses the audience both directly and indirectly.

This difference in modes of address results in different viewing contexts for film and video audiences. Cinema is based on a theatrical mode of presentation, typically emphasizing subjectivity and relationships between characters. In this viewing context, a paying audience member sits in comfort to individually experience a privileged view of the action with undivided attention. Television, on the other hand, is viewed in the home, and is thus more often experienced socially or amid distraction. Film and television have therefore historically had entirely different viewing contexts: cinema pulls audience members out of their immediate situations while television accompanies them in their situations (Armes 193-143). This leaves digital cinema in a unique place, especially with the increase of distribution through Internet streaming. Feature films are increasingly being viewed at home on small screens. Thus, not only has the distinction between the production and distribution technologies of film and video blurred, but the distinction between viewing contexts has as well.

Despite the amount of overlap between digital cinema and video in technology and in viewing context, digital cinema utilizes few of the unique qualities of video, instead relying on aesthetics historically associated with cinema and analog film. Indeed, D.N. Rodowick argues that “today most so-called new media are inevitably imagined from a cinematic metaphor” but “this also means that it is difficult to envision what kinds of aesthetic experiences computational processes will innovate once they have unleashed themselves from the cinematic metaphor and begin to explore their autonomous creative powers.” (Rodowick, 98).

This can be witnessed in the similarities between Star Wars and Rogue One: A Star Wars Story in terms of visual structure and modes of address. These are particularly relevant choices for discussing commercial narrative cinema due to similarities in their business context and their use of film technology. Both of these films were very popular with audiences: Rogue One was the second-highest-grossing film of 2016, and its 1977 predecessor was the highest-grossing film of its year (Box Office Mojo). Rogue One, as a spin-off of sorts, draws obvious influence in form from the original, and is also characteristic of the larger commercial film market’s increasing use of existing intellectual property. Also like Star Wars, Rogue One was pioneering in its use of cinematic visual effects. Despite these similarities, and similarities of production, theme, and story structure, these two works of cinema in the same franchise used two different mediums: Star Wars on analog film and Rogue One on digital (Miller).

Compare a scene from Star Wars: (https://www.youtube.com/watch?v=JGp_5gOww0E) with a similar scene from Rogue One: (https://www.youtube.com/watch?v=C4qw0T8O3eI). Both are typical scenes from their respective films, and they are extremely similar in construction, apart from the specifics of action, dialogue, characters, lighting, editing, and tone. The similarities include a locked-off camera with added shake; special-effect exterior shots; a combination of wide, medium, and close-up shots; a non-diegetic orchestral score; and dramatic, indirect modes of address to the audience. Overall, these scenes function very similarly in their respective narratives and conform to similar cinematic rules, despite the thirty-nine years between the films’ releases.

However, very different technologies were used to achieve these similar scenes. Most obvious is that Star Wars was shot and distributed on film and Rogue One on digital. However, both were shot rather conventionally, with clearly constructed sets and artificial lighting. In essence, despite Rogue One being shot digitally, its style rejects the immediacy uniquely afforded by video capture, instead opting for the cinematic aesthetic. More interesting, however, are the effects: practical effects were constructed on set and shot on film, then composited for the scene in Star Wars. In Rogue One, digital effects were used. Despite the creative potential possible with digital effects, both strategies achieved the same goal: the effects fit within the “world” of the story and are diegetic. That is to say, they are portrayed as “real” to the characters and thus to the audience through indirect address. In fact, one of the characters in the scene (the droid, K-2SO) is created entirely by digital effects but is treated as any other character in the story of the film. Thus, despite differences in technology, and therefore in creative potential, both films conform to the “cinematic” aesthetic and mode of address.

In essence, digital technology in Rogue One afforded the filmmakers an incredible contrivance unique to video – the ability to digitally construct entire settings and characters – but only in diegetic, “realistic” circumstances. That is to say, contrivance is only used in circumstances that do not draw attention to the contrivance. This can be witnessed in countless other contemporary commercial films. Rogue One, perhaps infamously, takes this a step further in its recreation of the likeness of deceased actor Peter Cushing from Star Wars (https://www.youtube.com/watch?v=QtY3bsHVSTw) through digital effects (https://www.youtube.com/watch?v=_NFh6NSRNIU).

Compare this execution to the effects used in the music video for “Money for Nothing” by Dire Straits: (https://www.youtube.com/watch?v=lAD6Obi7Cag). Released in 1985, during the explosion of the music video format, the video featured effects that make no attempt to fit within the world captured in the live-action footage. Instead, they intentionally draw attention to themselves. This is, in a sense, a direct mode of address to the audience. The contrivances afforded by digital video technology are acknowledged as such, and the video celebrates them. As Armes asserts, the pop video need not conform to the cinematic aesthetic and can embrace its contrivance (Armes, 157-158). Of course, some films explore direct modes of address through effects (Scott Pilgrim vs. the World – example: https://www.youtube.com/watch?v=8oJFcr42adA) or through dialogue directly addressing the audience (Deadpool – example: https://www.youtube.com/watch?v=QfPsRh8G0vA), but these are used only in circumstances that uniquely serve the story and are not part of common commercial film grammar.

“Digital cinema,” despite using the technological toolset inherited from video, is still envisioned, produced, and largely categorized in the context of “cinema,” which historically has been ontologically defined by its use of analog media. Digital cinema blurs this ontological distinction between film and video, as seen in Rogue One: A Star Wars Story. However, video is not only a collection of technologies – it is a creative medium in its own right with unique qualities not shared by film, such as immediacy and contrivance. Thus, because digital cinema largely draws upon cinematic language, a unique untapped creative potential exists that is not being explored commercially in contemporary digital cinema.

 

Works Cited

Armes, Roy. “On Video.” Routledge, Chapman, & Hall, 1988.

Box Office Mojo. “1977 Domestic Grosses.” Box Office Mojo. Web. 07 Mar. 2017.

Box Office Mojo. “2016 Domestic Grosses.” Box Office Mojo. Web. 07 Mar. 2017.

Miller, Neil. “The Amazing Camera Technology Behind The Look of Rogue One.” Film School Rejects, 16 Dec. 2016. Web. 07 Mar. 2017.

Rodowick, D.N. “The Virtual Life of Film.” Harvard University Press, 2007.

 

Leave the GoPros to the Pro Athletes

There is a different kind of camera on the market. A camera that can survive scratches, tumbles, and even water. A camera that can keep up with extreme sports fanatics, from surfers to skydivers, capturing every moment and showing it just as if you were really there yourself. The GoPro is a durable, almost indestructible camera. Its newest model, the HERO5, can shoot in 4K and is waterproof without a protective case down to 33 feet.


With specs like this, the GoPro has made its way into the hands of extreme sports athletes and thrill seekers across the globe. One of the most notable stunts captured on a GoPro was the highest jump ever taken by a skydiver: in 2012, Felix Baumgartner jumped from the edge of space at just over 128,000 feet. Until 2014, this was the highest free fall ever to have taken place. In addition, Baumgartner became the first human to break the sound barrier without being in an aircraft.


With GoPro’s popularity in extreme sports, it seems like it should make an easy transition into narrative film. Russian filmmaker Ilya Naishuller wanted to create an immersive film told from the point of view of the main character, so audiences would get the feeling that they were in the movie. A movie like this had never been attempted. Sure, there was the 1947 film noir Lady in the Lake, by Robert Montgomery, but it did not have the intense, fast-paced action that Naishuller was trying to achieve. After looking at different cameras, like Blackmagic models, the crew settled on the GoPro because of its unique ability to capture quick pans without excess motion blur. The entire movie is told from Henry’s point of view, and the result is akin to a first-person shooter video game—without controls for the user. You can get a sense of the style in the trailer here. (WARNING: It does get pretty violent.)


Both of these projects use the GoPro to capture moments that would be near impossible with a regular camera. Despite the excitement that surrounded Hardcore Henry, many view it as a flop, arguing that its point-of-view gimmick is just that, a gimmick, and that it loses its punch within the first few minutes of the film. As New York Daily News writer Stephen Whitty put it, “You could go see “Hardcore Henry” — or you could gulp down a pint of vodka, load in “Grand Theft Auto,” then strap the TV to your face and throw yourself down the stairs.” On the other hand, Red Bull Stratos was a phenomenal viewing success: it has been viewed more than 41 million times and broke the record for the most-watched livestream, garnering 8 million viewers at its peak (Li). Why, then, did the GoPro do so well with Red Bull Stratos and not Hardcore Henry? In truth, the GoPro only works in extreme sports videos and experiences, and is unsuccessful when used as a storytelling device in movies.

First, there are the technicalities of the mount itself. Replacing a tripod with a human body introduces complications, one of which is the shakiness of the mount. Most GoPros are mounted on helmets or chests, or held in the hand. In Red Bull Stratos, three cameras were used on Baumgartner’s body: one on each thigh and one on his chest pack. In Hardcore Henry, the camera was mounted on the stuntman’s head using a special rig designed for the film, intended to give the viewer a chance to see the stuntman’s arms and legs.


Upon first viewing, the GoPro (despite all of its stabilizing features) failed to deliver. The camera was too shaky, and the P.O.V. was nauseating. The camera mount had to be redesigned, and the crew was constantly creating new prototypes for the mask mount even after initial production had started. The motorcycle chase is particularly stomach-churning.

 

In the clip, it’s hard to understand what is really happening. The quick head turns create too much motion, and the lack of a focal point makes the action hard to follow. Red Bull Stratos, on the other hand, used three different GoPros on Baumgartner, plus more cameras attached to the balloon that took him into the stratosphere, and used footage from all of them to show his free fall from the edge of space. If one camera was only capturing the blackness of space, another was aimed at the Earth, creating a more dynamic viewing experience. Although the camera still had its shakiness in Red Bull Stratos, it was forgivable, because audiences had the opportunity to see a man break the sound barrier and free fall from the stratosphere. With film audiences, the expectations are different. There should be a clear storyline and a strong narrative; audiences do not expect to see world records shattered but are, again, there for the story. Thus, the technical failures of the GoPro negatively affect the audience’s enjoyment of Hardcore Henry.

Second, mounting the GoPro on a person removes that person from the story and replaces them with the audience. In Hardcore Henry, the audience is put in the shoes of Henry, who wakes up with no memories of his past and an inability to speak. The effect is jarring. Not only is the audience seeing everything through Henry’s eyes, they have no idea of his thoughts or feelings. Further, they do not have the opportunity to influence anything in the film, despite seeing everything from his point of view. In essence, this removes the protagonist from the story, and the audience has trouble empathizing with the main character because all of the information about him comes from supporting characters. Without a character in whom audiences can invest, the film struggles to hold their attention. In comparison, the use of this first-person perspective succeeds in videos like Red Bull Stratos because the focus is on the event itself, rather than the storytelling behind it. This perspective works best when the experience, rather than a journey, is the focus. The length of these videos also has an impact on audience enjoyment. Most extreme sports videos are less than 10 minutes long, and audiences get a view of what the athlete was seeing — for example, Baumgartner’s spin in space, which only got more terrifying as he gained speed:


And then, it was over. With extreme sports videos, there is one central event, whereas Hardcore Henry tries to accomplish what a feature-length film would, with several different action-based sequences. A singular focus is what the GoPro does best. Otherwise, audiences are attacked by a barrage of already heightened imagery that leaves them feeling overwhelmed and lost, which is never a good sign.

Moreover, the mount in Hardcore Henry was extremely intrusive compared to the ones used in Red Bull Stratos. In order to get the proper point-of-view shot, a mask was created for Hardcore Henry, as helmet mounts did not include the actor’s hands and feet in the shot, which were integral to the action sequences in the film. The final mask was heavy and had to be fitted to the actor’s head, and the camera was placed close to the actor’s mouth, essentially caging him inside. After putting on the apparatus, the actor then had to act through extremely specific stunt scenes and be aware of where he was looking at the right time. Typically this much responsibility is delegated to the director of photography, but in Hardcore Henry, the actor had to not only act, but also move the camera with his head in just the right way. That may have influenced Naishuller’s approach to creating the film: he played Henry for much of the movie himself, and having written and directed it, he presumably had the best knowledge of how to realize his vision. Despite the careful planning that went into the stunts in each shot, Hardcore Henry still feels like a video game rather than a movie. In an interview, Naishuller mentioned that most of the stunts involved making contact: for example, punches would have to actually connect with someone’s face in order to make the shot.


Ouch.

Because the GoPro mount put the camera in the action, it was not possible to achieve a realistic effect without actually hitting someone. Of course, the stunts were still done in a controlled manner, but the result still misses the mark. It feels contrived and calculated, losing the organic integration of the audience that Naishuller sought to achieve. In contrast, the three cameras fitted to Baumgartner’s suit were unobtrusive. His main focus was on pulling off the stunt and managing his own body, rather than trying to bend a knee or turn his head a certain way to get aerial shots of the Earth. His focus was on completing the jump successfully and safely. Thus, the experience that comes through in Red Bull Stratos feels more truthful and real, because audiences are seeing a raw, unfiltered view of the jump rather than one that was calculated.

All in all, because of its technological limitations, the removal of the protagonist, and the intrusive mount, the first-person point of view of the GoPro was unsuccessful in Hardcore Henry. With extreme sports, all of these factors are forgivable, and sometimes they even improve the viewer experience, as they did in Red Bull Stratos. Maybe next time, we should leave the GoPros to the professional athletes.

 

-Alison

 

Works Cited

  1. Li, Anita. “Final Numbers Are In: Space Jump Breaks YouTube Record.” Mashable, 2012. Web.
  2. Whitty, Stephen. “‘Hardcore Henry’ Not Worth Point of Viewing.” New York Daily News, 2016. Web.

The Difference Between Atmos Sound and 3D Binaural Sound, and Why Films Won’t Use Binaural Recording Technology

Meibei Liu, Media Rich Paper –

Filmmakers continue to test the limits of how they can improve the picture, coming up with innovations such as IMAX cameras, 3D capture, and 4K resolution. However, they are also working just as hard to improve film sound, aiming to make the moviegoing experience more real and impactful. Many filmmakers agree that the most effective way to do this is with three-dimensional sound.


Surround sound uses sets of speakers to create a multidimensional sonic atmosphere. In June 2012, Dolby introduced the more complex Atmos system, hoping to create a 3D aural experience in theaters. The biggest difference between this new system and traditional surround sound is the extra speakers on the ceiling, which help to create a better spatial sound atmosphere. Films like Life of Pi, The Hobbit: The Battle of the Five Armies, Gravity, The Martian, and many others have used this new sound system. Dolby’s official website introduces the new features of the Atmos system and gives some examples of how it sounds.

https://www.dolby.com/us/en/brands/dolby-atmos.html


Many people call Atmos a “3D sound experience.” But if one googles “3D sound,” similar terms will appear in the results that actually refer to another technology: “Binaural Recording.” This is a sound recording technology that came out over 100 years ago and was used to recreate the experience of watching a play or concert for people who could not actually be there in the theater. By recording the sound at the venue and then transferring the sound files to listeners, Binaural Recording aims to recreate the exact same sound experience for human ears.


While it sounds complicated, the concept behind the recording technology is rather simple. One builds two microphones inside two fake ears and puts them on a model head. With this set of microphones, the slight differences between the waveforms arriving from the sound sources are captured at the left and right ears. This simulates a vivid sense of distance and direction for listeners.
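To make that concrete, here is a toy numpy sketch of the two strongest cues the dummy head captures: the far ear hears the sound a fraction of a millisecond later (interaural time difference) and slightly quieter (interaural level difference). Everything here is a rough assumption for illustration; the head-size constant, the Woodworth delay formula, and the simple level roll-off are stand-ins, and real binaural recording also captures subtle ear-shape filtering that no toy model reproduces.

```python
import numpy as np

SAMPLE_RATE = 48000     # Hz
HEAD_RADIUS = 0.0875    # meters; rough half-width of an average head
SPEED_OF_SOUND = 343.0  # meters per second

def binauralize(mono, azimuth_deg):
    """Turn a mono signal into a crude two-ear (stereo) signal.
    azimuth_deg: 0 = straight ahead, +90 = hard right, -90 = hard left.
    The ear facing away from the source hears it later and quieter."""
    az = abs(np.radians(azimuth_deg))
    # Woodworth's classic approximation of the interaural time difference:
    itd_seconds = HEAD_RADIUS * (az + np.sin(az)) / SPEED_OF_SOUND
    delay = int(round(itd_seconds * SAMPLE_RATE))
    far_gain = 1.0 - 0.3 * np.sin(az)  # level difference grows with angle
    near = mono
    far = far_gain * np.concatenate([np.zeros(delay), mono])[:len(mono)]
    left, right = (far, near) if azimuth_deg >= 0 else (near, far)
    return np.stack([left, right], axis=1)

# One second of A440 placed 60 degrees to the listener's right:
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
stereo = binauralize(np.sin(2 * np.pi * 440 * t), azimuth_deg=60)
```

Play a signal like this back over headphones and the tone appears to sit off to one side of your head; that small delay and level difference is, in miniature, what the fake ears are recording.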

Here’s an example of what it sounds like:

This video shows how it is recorded.


As innovative as this technology is, it was only used for a few experimental art forms and the recreation of some plays and concerts before it faded out. It is coming back, mostly as an element of VR and video games, but it is still a rather minor component.

If Binaural Recording tends to create effects similar to those created by surround sound and Atmos, why are they treated so differently? Why has Binaural Recording never been used in any theatrical feature film, not even 3D or IMAX films? Imagine watching Life of Pi and hearing the fish coming at you from different directions. How realistic and impactful would that experience be? Here are the reasons why Binaural Recording technology has been deemed unsuitable for a story-driven film.

First, a Binaural Recording system costs more money and time, while surround sound is easier to record and mix. Second, this technology is only effective when the listener is wearing headphones, which leads to a series of issues, including financial issues for the theaters and a social downside for the audience. Because of its listener-centered perspective, it is hard for a film to find a balanced way to approach a scene, or even a shot. Finally, Binaural Recording is a better fit for VR and video games, as they take similar spatial approaches.


Before filmmakers decide to adopt binaural recording, they’ll likely ask themselves the following questions: “Is this easy to do on set?” “Will this cost more money and time?” “Is this controllable during pre-production, production, and post-production?” If using binaural recording, the filmmakers would need to choose a perspective, considering what to record and where to put the mic. The sound technicians would have to abandon their training and experience and come up with a brand-new process. The sound picked up by a binaural microphone is different; it is not traditional sound that can be freely edited and mixed in post, but sound with a perspective and spatial 3D effects built in. Because Binaural Recording captures distance and space so well, the filmmakers would need to think more delicately about storytelling while recording the sound on set. For example, imagine a simple scene with a character walking from a door to a sofa. In traditional sound recording, this would be recorded with a boom microphone. With binaural recording, however, the sound recordist would need to think about who is watching the character walk within the scene and when the perspective changes. This would require a great deal of additional planning before the shoot. If the director makes a last-minute change to how the scene is approached, the plan would need to be adjusted again. And during the post sound process, technicians will find that binaural recordings are more difficult to adjust than traditional sound. Is binaural recording really worth the effort? By comparison, surround sound and Atmos offer much more control during post-production; technicians can simply pan a sound clip right or left to shift its direction, and then change the volume to manipulate its distance.
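To illustrate that last point, here is a small Python sketch of what “pan it and set the level” amounts to (a toy stand-in of my own, not any real console’s or DAW’s API): a conventionally recorded mono clip can be placed anywhere in the stereo field after the fact, with the pan setting direction and a simple gain setting apparent distance, which is exactly the after-the-fact freedom a binaural recording gives up.

```python
import numpy as np

def place_in_stereo(mono, pan, distance):
    """Place a mono clip in a stereo mix during post-production.
    pan: -1.0 (hard left) .. +1.0 (hard right), constant-power law.
    distance: 1.0 or more, arbitrary units; farther sounds quieter."""
    theta = (pan + 1.0) * np.pi / 4.0                     # map pan to 0..pi/2
    left_gain, right_gain = np.cos(theta), np.sin(theta)  # L^2 + R^2 = 1
    gain = 1.0 / max(distance, 1.0)                       # crude depth cue
    return np.stack([mono * left_gain * gain,
                     mono * right_gain * gain], axis=1)

# The same clip, re-placed at will, as many times as the mix requires:
clip = 0.1 * np.random.randn(48000)   # stand-in for a recorded mono clip
near_left = place_in_stereo(clip, pan=-0.8, distance=1.0)
far_right = place_in_stereo(clip, pan=+0.6, distance=3.0)
```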

This video features a sound designer talking about sound effects and sound mixing on Interstellar. From it, one can see how a sound element can be recorded simply, as a component, and then mixed to create a desired atmosphere.


The limitation of “3D sound” is that it works better with a set of headphones than with speakers. Research shows that with speakers, human ears track the physical source of the sound rather than the virtual sound source. This means one would need headphones in order to truly experience film sound in 3D.


One might argue that theaters could simply install headphones. But let us consider: why do people go to the theater? They probably own Gravity on Blu-ray, along with a large screen and a pair of good headphones. But the theater offers a different and unique experience. At the theater, people can enjoy Life of Pi on a larger-than-life screen and be immersed in high-quality surround sound, all while sharing the experience with their friends, their family, and all the other strangers in the theater. Imagine watching an exciting action film in a theater with a headset on. Wearing headphones puts you in your own world. It separates you from other people nearby and therefore takes away from the shared movie theater experience. The movie theater connects people. Many years ago, theaters were a place for people to meet and socialize. Going to a theater and watching a film with a headset makes theatergoing an individual activity rather than a social one. Practically, the adoption of a binaural sound system would likely involve a redesign and reconstruction of movie theaters, which would be a financial burden for them. Furthermore, changing the playback systems theaters rely on would require many years and discussions among studios to complete. It’s unlikely the entertainment industry will make the change to binaural sound, given its impracticalities.


Beyond the financial, social, and technical issues, binaural sound recording would also affect storytelling in film. Unlike a VR film, in which the audience is part of the story and explores the plot on their own, a theatrical film usually has characters and is not interactive. Characters drive story, and, therefore, the audience’s primary focus is on characters. Even though filmmakers sometimes use 3D to make films come to life, the audience is still just an “outsider.” Surround sound helps the audience become more immersed in the story. But 3D sound locks the audience into a particular listening position, and that can impose limits on storytelling.

Example: a WWI scene using 3D sound.

In this clip, the viewer is part of the story. 3D sound helps the viewer feel as though he is in the story.

Some art films use 3D sound as well, and use it effectively. However, this film isn’t a traditional narrative.

3D binaural sound strongly focuses on the listener’s point of view, creating a strong sense of “I” for the audience. If one wishes to put 3D sound into a narrative film, one has to think about which perspective to approach from, and it might be difficult and confusing. A simple scene like a conversation between three people, recorded from the points of view of three different characters, could be very confusing, and that’s before one considers additional sound effects. One needs to consider who is listening at each moment, who is the next center of the sound, and how they transition. Here’s an example:

This is a simple narrative piece made using 3D binaural sound technology, and in it you will find the weakness of the approach: the strong sense of sound direction, and of the virtual location from which we are listening, disrupts the audience’s perspective while watching the video.


3D sound, along with Binaural Recording technology, is more suitable for VR. When audience members hear a sound coming from behind them, they will want to turn around. With a traditional narrative film, the screen is fixed and vision is limited to its rectangle; audiences won’t turn around or look at the area outside the screen because there is nothing showing there. But with VR, audiences actually can turn and face any direction. 3D sound can be used here to prompt, even trick, audiences into moving a certain way and discovering more information, while giving creators better control of where the audience focuses. The same may apply to video games as well. Surround sound, by contrast, creates a rather subjective effect; it allows the audience to enjoy and experience the film and focus on the story without paying much attention to the sound.
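Here is a sketch of the geometry behind that prompting, under simplified assumptions of my own (2D positions and yaw-only head tracking; real VR audio engines do full 3D rendering with head-related transfer functions). The engine keeps the sound source fixed in the world and recomputes its angle relative to the listener’s head every frame, so a cue can sit “behind” the user until they turn toward it:

```python
import numpy as np

def relative_azimuth(source_xy, head_yaw_deg):
    """Angle of a world-fixed sound source relative to the listener's gaze.
    Returns degrees in [-180, 180): 0 = straight ahead, +90 = to the right.
    Coordinates: x = world right, y = world forward; yaw turns to the right."""
    world_deg = np.degrees(np.arctan2(source_xy[0], source_xy[1]))
    return (world_deg - head_yaw_deg + 180.0) % 360.0 - 180.0

# A source two meters directly behind the user:
behind = (0.0, -2.0)
print(relative_azimuth(behind, head_yaw_deg=0.0))    # -180.0: behind you
print(relative_azimuth(behind, head_yaw_deg=180.0))  # 0.0: now straight ahead
```

That per-frame relative angle is what would drive the ear-level cues (as in the toy binauralize sketch earlier), so the sound keeps pulling the listener around until they face it.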

In conclusion, 3D sound with Binaural Recording is an innovative technology that will no doubt be important in the future of entertainment, but it is more likely to be used in virtual reality than in movie theaters.



Realistic Fantasy and Fantastical Reality: Computer Generated Bodies in Game of Thrones and Westworld

Since the days of Oz and The Sopranos, HBO has had a reputation for pushing the boundaries of television. Among their most ambitious current series are Game of Thrones and Westworld, both of which visually blur the lines between traditional television and film by featuring lavish sets, real locations, and impressive special effects. Indeed, Game of Thrones set the precedent for Westworld in many ways, so it is no wonder that both shows utilize CGI to construct believable creatures— dragons and robots, respectively— to populate their intricate worlds. Thus, while the use of CGI serves different thematic purposes in Game of Thrones and Westworld, both genre shows align with HBO’s brand identity by melding the production value of blockbuster films with nuanced, cutting-edge storylines.

With serialized stories, superb performances, and high budgets and production quality, television today is more cinematic than ever before (Gorgan). The premium network HBO played a significant role in the paradigm shift in television that has occurred over the past two decades or so and has since garnered a reputation for pushing the boundaries of the medium. While 24 (2001-2010) and Lost (2004-2010) brought quality, highly serialized stories to broadcast networks, HBO’s The Sopranos (1999-2007) jumpstarted this trend and made HBO “a household name for its daring, thrilling television” (“The Evolution…”). Many noteworthy series later, HBO is known for taking creative and visual risks on its shows, investing large sums of money on unique projects (Gorgan). In an article for Vanity Fair, Emma Stefansky noted that HBO’s “prestige TV model has both made and lost the network a lot of money,” with the success of shows like The Sopranos, Game of Thrones, and Westworld balancing out the failure of Vinyl (2016) and early cancellation of Rome (2005-2007).

One of the best examples of a cutting-edge, distinctly HBO show is Game of Thrones, the network’s current flagship series. Between its myriad locations, lavish production design, and impressive special effects, Game of Thrones undeniably blurs the lines between TV and film. It is no wonder that GOT is one of the “most expensive shows ever made” (Stefansky), averaging about $10 million per episode during Season 6 (Cuccinello). Considering all its awards and premiere records, certainly GOT was worth the investment, but adapting this series to the screen was a risky endeavor. While Game of Thrones incorporated the same complex, serialized storytelling that had begun to permeate television after The Sopranos, never before had these elements appeared in a “fantasy story” set in a brutal “fictional kingdom” (Gorgan). Thankfully, the network’s investment paid off, and the series continues to take risks with each new season, introducing complex storylines like Bran’s that extend beyond the source material and incorporating visually spectacular battle sequences like the onslaught of wights at Hardhome.


The Reed siblings fight reanimated skeletons North of the Wall.

Among the most phenomenal aspects of Game of Thrones is its use of CGI, especially as it relates to bringing the fictional creatures of Westeros to life. Indeed, Game of Thrones relies on convincing, seamless CGI to create these fantastical beings so that audiences will buy into this universe. In the words of Bustle critic Leah Thomas, “We may know that dragons, armies of skeletons, and faceless men aren’t real- but the team behind Game of Thrones makes sure nothing sticks out.” Perhaps the most iconic creatures in Game of Thrones are Daenerys’ dragons. The dragons must appear convincing, as they are integral to Daenerys’ storyline and character growth; Daenerys constantly interacts with her “children,” cradling them, stroking them (pictured below), and eventually riding one of them to safety. The linked clip from Pixomondo illustrates the complex combination of rendering and compositing required for each dragon sequence in Season Four. Of course, Daenerys’ dragons have only grown larger and more powerful as the series has gone on, evolving from near-helpless babies to massive beasts that can roast Daenerys’ enemies and whisk her to safety. The linked clip depicts the evolution in dragon VFX from seasons one through three. Before GOT, this level of special effects was largely relegated to blockbuster films (Gorgan); thus, the use of CGI in Game of Thrones blurs the lines between cinema and television and reflects HBO’s cutting-edge brand identity.

HBO’s new series Westworld is similar to Game of Thrones in many ways; both are grandiose genre series with sky-high budgets, lavish production values, and jaw-dropping VFX. The first season cost around $100 million to produce, and, like GOT, much of the show was shot on location in Moab, Utah, the backdrop of many classical Western movies (Fehrman). Moreover, both shows offer “copious violence and sex” and infuse the “well-worn” fantasy and Western genres with “new life” (Bady). Many critics have noted these similarities, hypothesizing that Westworld will take over Game of Thrones’ position as the flagship HBO series after the latter’s final two seasons (Bradley).

Among the most notable similarities between the two shows is the use of VFX; Westworld, too, relies on impressive CGI to create the convincing fictional creatures— here, remarkably human-like robots— to populate its world. Much as Daenerys frequently interacts with her dragons in Game of Thrones, the major human characters of Westworld frequently interact with the robot “hosts,” and the hosts interact with each other. In this way, the linked scene in which Dr. Ford converses with an outdated model, Old Bill, mirrors the scene in which Daenerys fears her own dragons for the first time. The show’s VFX supervisor Jay Worth explained that Cosa VFX, the company behind CGI in Stranger Things and Agents of S.H.I.E.L.D. (Zakarin), used “jerking, mechanical-like motion” to emphasize the imperfection of older models like Bill (Failes). With the help of effects program Nuke, the team layered simple effects to make the actor’s big gestures appear as a series of smaller movements (Zakarin).

Of course, the idea in Westworld is that the robots in the park now are practically indistinguishable from humans. When William first arrives in the park, he asks one of the hosts if she is real, only for her to respond, “Well, if you can’t tell, does it matter?” As such, CGI is typically only used to show robots malfunctioning, as with the dangerously aware Peter Abernathy and the glitching Sheriff (pictured below). For this eerie effect, the special effects team made one of the Sheriff’s eyes track the fly crawling on his face, whereas Abernathy’s malfunction in the pilot episode was mostly the actor’s performance, as the linked clip demonstrates; all the VFX team did was manipulate his pupils and eyelids slightly to make him appear slightly “off” (Zakarin) and use 2D tricks to make him start and stop as Ford instructs him to move between “builds” (Failes). The result of these special effects is that Westworld, like Game of Thrones, is far more cinematic than the average TV show. Executive producer J.J. Abrams commented on Westworld’s cinematic qualities, observing, “The production value of this thing is preposterous… But it’s HBO. That’s what they do” (Stefansky). He went on to point out that the “writers and directors wanted to maximize the sense of a cinematic experience” (Stefansky). Thus, Westworld and Game of Thrones alike fit HBO’s cutting-edge mold by pushing the boundaries of television production value.


Yet, as much as Westworld resembles its predecessor, the use of CGI in Westworld is far more self-reflexive than in Game of Thrones. Whereas the VFX in GOT are intended to be seamless, the CGI used to bring robots to life in Westworld does the very opposite; instead of making something fictional appear realistic, Westworld digitally manipulates human performances to appear visibly unnatural. The robots appear purposefully artificial because the show thematizes human creation and raises questions about technology and voyeuristic media. Laura Bradley from Vanity Fair points out that, while “both shows feature plentiful gore and sex, Westworld has cleverly framed itself as a critique of the same sort of gratuitous moments that Game of Thrones seems to relish.” As such, the show constantly reminds audiences of its own construction, often pulling away from dramatic moments in the park to showcase the employees working behind the scenes. One such example occurs during the finale, when Dolores’ emotional death scene beside the ocean is interrupted when lights flash, the hosts freeze, and Ford steps up to accept applause for his new storyline. Likewise, VFX function in the series to create an “unnerving, unreal final product” in which “glitches, stalls,” and “inconsistent expressions” reveal “what happens when robots perfectly designed to imitate humans malfunction and artifice can no longer obstruct their nature” (Zakarin). In other words, the CGI in Westworld is intended to stand out and unnerve audiences, forcing viewers to acknowledge just how horrifying it is that humans can mimic reality and all the terrible things that could go wrong. For example, when the Sheriff glitches and his eyes start gazing in different directions in the linked scene, the audience feels as uncomfortable as the guests, and the darkness lurking beneath this fantasy world reveals itself.


Ford interrupts the emotional climax of Dolores’ storyline.

Thus, Game of Thrones and Westworld differ from one another in many ways, yet these distinctions, too, reflect HBO’s brand identity, since the network does not simply repeat its former successes. Westworld and Game of Thrones also share a great deal; both are R-rated, high-budget genre shows with complex long-form narratives. Yet Westworld criticizes media-making itself, holding up a mirror to shows and movies like Game of Thrones that provide voyeuristic pleasure through sex, violence, and escapist fantasy. Whereas Game of Thrones was groundbreaking for bringing such high-tech visual elements to television, Westworld is innovative in its self-reflexive use of complex narratives and visual splendor to critique storytelling and technology. So, while both shows boast similarly cinematic CGI, it is only logical that they should utilize VFX differently. After all, HBO’s brand requires that its flagship series be radically new in some respect, and both Game of Thrones and Westworld stand alone as uniquely innovative masterpieces.

Works Cited

‘Steve Jobs,’ ‘Love & Mercy’ and ‘Jackie’ – Juxtaposing Differing Visual Capture Formats to Transcend the Conventions of the Biopic

Examining the use of technology in film has been a hobby and fascination of mine for years. In research, or even in casual observation of the intersection of film and technology, much is made of how technologies are applied to film. But before any technology can be applied to a film, in post-production or otherwise, the filmmakers make a decision that enormously affects both the production workflow and the look and feel of the film: how it will be captured – not which technology will be applied to the film, but which technology will be used to capture it. With digital being the norm in today’s feature film landscape, shooting on film sets a project apart from the rest of the pack and results in a work that looks distinctly different from the majority of films in the marketplace, as evidenced by data from film researcher Stephen Follows (https://stephenfollows.com/film-vs-digital/).

As of 2015, just 20% of the top 100 US-grossing films were shot on film, while nearly 90% were shot digitally (the figures overlap because some productions use both). The key distinction here is that these are top-grossing Hollywood studio pictures: directors the likes of J.J. Abrams, Christopher Nolan, and Steven Spielberg can demand film capture, and no one can deny them. These are the biggest films; imagine a more dire outlook for indies that need to save pennies wherever they can. The cheaper workflow offered by digital leads to a much bigger disparity in the digital-to-film ratio among low and micro budget indies.

Moving beyond the decision to shoot on film, there is the matter of deciding which film stock to shoot on. 35mm and 16mm are the two gauges filmmakers tend to weigh when embarking on a project, and what I aim to do below is explore what is gained in the selection of a shooting format in Danny Boyle’s Steve Jobs (2015), Bill Pohlad’s Love & Mercy (2014), and Pablo Larraín’s Jackie (2016). Beyond the simple choice to shoot on film, each of these films uses more than one capture format, which allows them to create juxtapositions using the visual textures of the mediums at hand. Each film is, in essence, a biopic: Boyle’s film is the story of the titular Apple genius, Pohlad’s film is the story of The Beach Boys’ leader, Brian Wilson, and Larraín’s film focuses on former First Lady Jackie Kennedy. The juxtapositions created by the different capture formats allow the films to transcend the narrative conventions of the biopic, crisscrossing or skipping through periods of their subjects’ lives through visual means. Shooting on film in the first place is a distinguishing aesthetic mark, but it is these films’ combination of capture formats that lets them convey unconventional narratives not just in the sense of literary design, but also in the sense of visual experience.

Fassbender as Jobs in three acts – source: https://www.wired.com/2015/10/steve-jobs-danny-boyle-interview/

Each film’s shooting formats have a transportive quality, one that moves the viewer through time. Beginning with Steve Jobs, Boyle and DP Alwin H. Küchler decided to mirror the structure of Aaron Sorkin’s screenplay with their capture format choices. The film is shot in three different formats, one for each distinct act of the screenplay, each act joining Jobs and those around him as they prepare for a product launch. The first and earliest portion of the film, depicting 1984 before an event for the Macintosh, was shot on 16mm film. Act 2, depicting the 1988 NeXT Computer launch, was shot on 35mm, and act 3, leading up to the launch of the iMac, was shot digitally, on the Arri Alexa. This creates three distinct visual looks, and as the film progresses, the visuals get clearer and cleaner. Where aspect ratio is concerned, the film conforms the footage from each of its sections to scope, resulting in cropping of the 16mm material (the cost of that crop is sketched below). The 16mm material is soft, gritty and grainy, while the 35mm material has a much stronger, more robust and uniform look. The Alexa material is, unsurprisingly, crystal clear. Examined in tandem with the content of the film itself: Jobs is played by the same actor, Michael Fassbender, throughout, and he ages before our eyes via makeup and hair. This helps delineate visually which time period we are in, aided by the use of the three different capture formats. Underscoring the use of each format for each period is the fact that each format is actually period-appropriate, linking the look of each section to the technology that would have been used to capture events of that time in reality.
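To put a rough number on what that scope conform costs the 16mm material, here is a minimal sketch of the crop arithmetic. The source ratios are my own illustrative assumptions – 1.37:1 for standard 16mm and 1.66:1 for Super 16 – since the production’s exact gauge and extraction aren’t detailed here.

```python
def height_retained(src_ratio: float, delivery_ratio: float = 2.39) -> float:
    """Fraction of a source frame's height that survives when
    narrower footage is center-cropped to a wider delivery ratio."""
    if delivery_ratio <= src_ratio:
        return 1.0  # source is already as wide or wider; no vertical crop
    return src_ratio / delivery_ratio

# Illustrative assumptions, not production specs: a 1.37:1 standard 16mm
# frame keeps ~57% of its height in a 2.39:1 scope crop; a 1.66:1
# Super 16 frame keeps ~69%.
print(f"{height_retained(1.37):.0%}")  # 57%
print(f"{height_retained(1.66):.0%}")  # 69%
```

Whatever the exact numbers, the crop compounds the grain: less of the negative fills the same scope frame, so the surviving area is magnified further, making the 16mm act look even softer and grittier next to the 35mm and Alexa material.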

With Pohlad’s Love & Mercy, chronology is not conformed to in any way. Further separating it from Boyle’s film, two actors play the film’s subject, Brian Wilson: Paul Dano as the younger Wilson and John Cusack as the older Wilson. The film is shot on 16mm and 35mm film, 16 for the earlier portions and 35 for the later ones. While the actors playing Brian Wilson make it easy to determine which time period the film is depicting as it bounces around, this is further aided by the vast difference in texture between the two formats. Where Boyle’s film allows the viewer to adjust to a texture over an entire act, Pohlad’s film moves abruptly between the two time periods, leading to jarring shifts in visual texture. Pohlad’s film also integrates period footage with originally shot footage meant to mimic it, making it appear as though Paul Dano and co. are actually the ’60s band, in pseudo-documentary footage. The film’s opening credit sequence is a good example of this. See this clip from the film, starting in the film’s later period with Elizabeth Banks, then moving back in time with Paul Dano, then moving into Wilson’s mind as he puts together a song, and note the different visual textures within.

The actual footage from the period needs no manipulation, and the 16mm footage shot for the film is processed to make it look as though it fits within the period, separating these moments from the rest of the film’s 16mm material. There are also black & white, pillar-boxed moments meant to simulate home movie footage. Combined with the film’s 35mm footage, there are five varieties of texture in the film, derived from two kinds of film stock (tallied in the sketch below), allowing two time periods, flashbacks within those periods, the illusion of different sources, and hallucinatory sequences each to maintain a distinct visual flavor within one cohesive work. With the exception of the black & white material, the film’s aspect ratio is conformed to 1.78:1, or conventional 16×9. This abets the cohesiveness of the film’s visuals and its ability to bridge gaps between times and sources of footage; when the black & white material contradicts the consistent ratio, it is to distinguish the home movie footage shot for the film with Dano and the rest of the actors playing the band members. Like Steve Jobs, Love & Mercy’s different textures and capture formats allow it to skip through time, but unlike Steve Jobs, in a non-chronological fashion. Thus, the various textures and capture formats allow both films to transcend conventional biopic approaches – while the scripts may call for unique combinations or treatments of time periods, these aesthetic choices amplify the films’ ability to convey those narratives.
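To keep the five textures straight, here is one way to tabulate them as data; the labels and stock attributions are my own paraphrase of the paragraph above, not production terminology.

```python
# A hypothetical catalogue of Love & Mercy's five textures; the labels
# and stock attributions paraphrase the description above.
textures = [
    ("16mm, straight",         "16mm",           "earlier-period scenes"),
    ("16mm, period-processed", "16mm",           "pseudo-documentary band footage"),
    ("B&W pillar-boxed",       "16mm (assumed)", "simulated home movies"),
    ("35mm",                   "35mm",           "later-period scenes"),
    ("archival",               "none (sourced)", "actual period footage"),
]
for name, stock, use in textures:
    print(f"{name:24} {stock:16} {use}")
```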

Pablo Larraín’s film, Jackie, takes a different approach to non-chronological narrative. The film jumps around in time the most of the three covered here, but the window of time it works within is smaller than that of Steve Jobs or Love & Mercy. Unlike the other films, Jackie does not switch shooting formats for its shifts in time, partly because its jumps encompass smaller intervals. As in Steve Jobs, the titular character of Jackie is played by the same person throughout, Natalie Portman. Here, the shift in format comes when the film depicts the shooting of A Tour of the White House with Mrs. John F. Kennedy, originally aired on February 14, 1962. The film was shot on 16mm and retains that format’s 1.66:1 aspect ratio, allowing it to stand out sharply from the many films finished in the various permutations of scope or CinemaScope, with aspect ratios like 2.35:1 or 2.40:1. The texture and grain offered by 16mm, as in Steve Jobs and Love & Mercy, allow the film to feel as though it was actually captured within the period it depicts. It is useful that it looks like film shot in the ’60s, then, as some actual source footage from the period is integrated into the film, sometimes only in brief snippets. The film’s director, Larraín, narrates the clip below, explaining how actual shots from the tour were integrated into the film’s version of the tour.

NY Times – Anatomy of a Scene – ‘Jackie’

With the tour, we encounter the film’s main shift in visual capture format. The tour segments created for the film were shot pillar-boxed, in black & white, on video (the framing arithmetic is sketched below). Per the film’s cinematographer, Stéphane Fontaine, “in order to match the specific look of the [tour] footage we used an old tri-tube camera Pablo [Larraín] brought in from Chile” (James – http://nofilmschool.com/2016/12/cinematographer-stephane-fontaine-lenses-emotional-journey-jackie). The shift from color to black & white, from 1.66:1 to 1.33:1, and from film to video all clearly indicate what is tour footage and what is not. As Portman plays Jackie giving the tour, the cuts between what the tour camera captures and what the film’s main camera captures are certainly jarring, combining two distinct visuals, but the underlying effect does what the visual choices in Steve Jobs and Love & Mercy do – it amplifies the core of the film’s narrative through visual means: in this case, the tricky balance (or lack thereof) between private and public life, performance, persona and “reality” during JFK’s presidency, and the time before, during, and after his assassination, all from Jackie’s perspective. During the White House tour, what is shown in color and not being filmed for the tour includes Jackie’s assistant and confidant, Nancy Tuckerman (Greta Gerwig), coaching Jackie as she at times struggles to maintain her composed, on-camera demeanor. This places the viewer in the period, allowing us to experience both what television audiences saw then and what they wouldn’t have seen. The juxtaposition offered by the two capture methods thus underscores what is happening in the film’s screenplay; its function, while implemented differently, is similar in net effect to the other films’, albeit focused on public versus private life rather than travel between time periods.
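To make the pillar-boxing concrete: centering 1.33:1 tour footage inside the film’s 1.66:1 frame leaves a fixed share of the width black on either side. A minimal sketch of that arithmetic follows; the pixel dimensions are invented for illustration, as Jackie’s actual finishing resolution isn’t given here.

```python
def pillarbox_bar_width(frame_w: int, frame_h: int, content_ratio: float) -> int:
    """Pixels of black on each side when narrower content is centered
    in a wider frame with its aspect ratio preserved."""
    content_w = round(frame_h * content_ratio)
    return max(0, (frame_w - content_w) // 2)

# Invented dimensions for illustration: a 1.66:1 frame at 1992x1200
# holding 1.33:1 tour footage leaves ~198 px of black per side,
# i.e. roughly 20% of the frame width.
print(pillarbox_bar_width(1992, 1200, 1.33))  # 198
```

The same arithmetic describes the black & white home-movie moments in Love & Mercy, which is part of why a pillar-boxed frame reads so instantly as “another source” in both films.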

It says a lot about a filmmaker when he or she chooses to shoot on film, as it goes so against the grain of the look of the majority of films made today; likewise, it alters the workflow. Away goes the instant playback at video village, and the ability to beam dailies to iPads via apps like Pix – with film, dailies have to be processed and screened. Film processing in general takes more time, and with the need to purchase, carry, load and reload film mags, there is simply more effort involved in shooting on film. While choosing to capture on film says a lot about a filmmaker, it says even more when the choice is made to use more than one format, and thus the creative teams behind Steve Jobs, Love & Mercy and Jackie all show a clear commitment to telling their stories not just through what is put on screen, but through how the films are put onscreen – how they are captured and presented. Using more than one format – any combination of 16mm and 35mm film, video, and digital capture – allows each of these films to juxtapose time periods and aspects of life, and thus to transcend the conventional biopic, whether presenting a portrait of a life chronologically or non-chronologically. As Steve Jobs’ director Danny Boyle says, the objective in depicting Jobs in that film was to present “a portrait…rather than a photograph” (Zorthian – http://time.com/4012107/steve-jobs-movie-aaron-sorkin-interview/). Each film presents a portrait of its subject, and through the various visual textures chosen within each work, numerous portraits of the films’ subjects come through.

When TV Spectacle Becomes Reality

Though several long months lie between us and the seventh season of Game of Thrones, the GOT Live Concert Experience is giving fans the opportunity to relive their favorite scenes as never before. My own fangirling aside, the idea of melding television, music, and technology in this way is fascinating. For one thing, this live performance is a brilliant way to sustain the hype and generate more revenue during an especially long off-season. (Production began later than usual this year to achieve a more wintry look now that – spoiler alert – winter has finally arrived.) The concert answers the audience’s expectations of technical splendor by juxtaposing onscreen special effects with live lights, effects, and music. And what better way to relive the visual and auditory splendor of Game of Thrones than with live music led by the show’s composer and real-life wildfire?

Yet the most intriguing part of this production, aside from the sheer amount of machinery involved (including 807 linear feet of video wall and 255 lighting fixtures), is the fact that there even exists a television show technically spectacular enough to warrant so grandiose an event. Until relatively recently, a television adaptation of a fantasy epic would never have been taken this seriously, if it were made at all. Yet the stars seem to have aligned just right for GOT, and it was time to incorporate the formal splendor and production value previously reserved for feature films; it is undeniable that the quality of CGI and special effects in GOT was and remains revolutionary. Visually speaking, Game of Thrones is more like a 60-hour-long movie than any other major television series that came before it.

Game of Thrones is nearing its end, but its integration of CGI into a quality narrative leaves me hopeful for the future of television. If Game of Thrones could inspire such a spectacular live performance, who’s to say what live events will come out of Westworld or Stranger Things? Until then, I’ll be obsessively watching videos of the concert and begging someone to come with me.

A Tiny Camera From Blackmagic That Packs A Wallop

The introduction of digital capture to what was once a celluloid world has had numerous, far-reaching effects on what filmmakers can achieve creatively and technically. Digital video, upon first use, allowed significantly cheaper, more mobile productions, and along with them came a distinct look, often embraced to achieve a certain visual aesthetic. Anthony Dod Mantle was one of the earliest DPs to do so, on films with directors Thomas Vinterberg (Festen) and Danny Boyle (28 Days Later…). The real linchpin of digital capture came when George Lucas made Star Wars: Episode I – The Phantom Menace in 1999.

His DP, David Tattersall, shot on 35mm, but Lucas was dissatisfied with the results. The film comprised innumerable digital VFX shots, which had to be integrated into traditionally shot pieces of film. This led to the development of high-definition digital cameras on which an entire film could be shot; the resultant work was Star Wars: Episode II – Attack of the Clones. The digital effects were married to digitally captured performances and sets, and the result was much more visually seamless, as well as more efficient for the production. The trickle-down effect has been the mass proliferation of digital capture, eliminating the need to buy, carry, and load film mags; with it, digital cameras have gotten smaller and cheaper, enabling more filmmakers to make things on their own terms.

With that, we come to one of the more recently emerged players in the digital capture market: the Blackmagic Pocket Cinema Camera (BMPCC), from video technology manufacturer Blackmagic. I first heard about the camera only recently, when reading about its use by DP Slawomir Idziak on A Tale of Love and Darkness, the film in which Natalie Portman made her feature writing and directing debut while also starring. Idziak has a large body of work, consisting of films shot in his native Poland, and then there are the films I know him for, like David Yates’ Harry Potter and the Order of the Phoenix and Andrew Niccol’s remarkable debut, Gattaca (Have you seen it? If the answer is anything but yes, see it now!). Idziak is not opposed to adapting to new technologies for capturing imagery, and “believes that utilizing modern storytelling techniques in contemporary filmmaking today is essential, particularly if a DP wants to get the best possible scene coverage, in the fewest possible shots” (http://www.moviescopemag.com/insiderspov/shooting-natalie-portmans-directorial-debut-on-the-pocket-cinema-camera/).

Idziak discusses how, even on a film with Portman’s star power behind it, time and money were limited. Having a nimbler camera like the BMPCC on hand allowed Idziak to get more coverage of scenes without needing to set up another take with the ARRI Alexas used as A camera. “The compact dimensions of the Blackmagic camera allowed us to set up a shot, and position cameras within said shot that were out of sight, giving us access to unique perspectives and angles. There are also several dynamic scenes in the film where the size and weight allowed us to use the camera in a way, which would have otherwise been difficult with traditional cameras” (http://www.moviescopemag.com/insiderspov/shooting-natalie-portmans-directorial-debut-on-the-pocket-cinema-camera/).

With a camera as compact as the BMPCC, image quality is a natural concern, especially since the film features extensive color manipulation for its dreary dream scenes and other moments. Yet Idziak and his department were pleasantly surprised to find that, when comparing Alexa-shot footage to BMPCC footage, “no one in the room could distinguish what had been shot with the Blackmagic camera, and what hadn’t” (http://www.moviescopemag.com/insiderspov/shooting-natalie-portmans-directorial-debut-on-the-pocket-cinema-camera/).

The compact nature of the BMPCC has led to its increasing use in major Hollywood productions. It isn’t the first time such technology has been used on feature films – remember those GoPro shots in Peter Jackson’s The Hobbit: The Desolation of Smaug? Who could forget them? The quality was abysmal, in stark contrast to the rest of the film’s vibrant, striking imagery. The key to the BMPCC working well on larger productions is that it doesn’t cause a drop-off in image quality, allowing productions like Joss Whedon’s Avengers: Age of Ultron, Timur Bekmambetov’s Ben-Hur remake, Paul Greengrass’ Jason Bourne, and the short-lived CBS TV remake of Rush Hour to make use of it in conjunction with the high-end capture systems they already run.

Ultron DP Ben Davis commended the BMPCC, saying that both the quality of its captured image and its small size allowed it to be used in ways that would endanger normal cameras, without compromising the film’s lush visuals (I view Ultron as one of the visually strongest Marvel films). “There are two large battle sequences in particular during the film, the first is at the beginning and the second features in the third act, and we very much wanted these to be shot as a war correspondent would cover news in a conflict zone. What we needed was a lightweight camera that we could then distribute around the set during the filming of battle sequences that would give us more than twelve frames of good quality HD material that we could match with our main camera package” (http://nofilmschool.com/2015/05/avengers-age-ultron-was-shot-partly-blackmagic-pocket-cinema-cameras). The camera can, in essence, be crashed along with cars and jostled about while handheld, enabling DPs to get exciting, dynamic, unconventional shots amid action-sequence havoc. This is no doubt what the BMPCC allowed for on the Barry Ackroyd-shot Jason Bourne as well.

The BMPCC has been used to shoot shorts and even a feature: Diego Ongaro’s Bob and the Trees, which premiered at Sundance in 2015. The camera “allow[ed] them to plunge through snowdrifts and woodlots to capture the actors in the precarious process of cutting timber” (http://www.hollywoodreporter.com/review/bob-trees-sundance-review-764732). With further use in more and more films – and to shoot entire films – the BMPCC will no doubt continue to expand directors’ and cinematographers’ possibilities for unique shots and sequences. With the quality of its captured image stacking up to ARRI and other systems, it’s safe to say the BMPCC will be a player in the digital capture game for a long time to come.