©2019 by Havsfjord.


On my way back towards Carrboro, we always drive past a little patch of green grass with several statues and installations placed in what looks like an almost random arrangement. Some of them are more complex, moving on their own. When we got the assignment to do a 3D project in Agisoft Metashape (previously known as Agisoft PhotoScan), I instantly decided I wanted to try one of them. JJ advised us, when photographing outside, to a) not take photos when it is sunny, as cloudy weather is preferable, and b) not photograph anything too shiny. I had this advice in mind when going out to take my photos, but in the end I managed to break both rules anyway.

All the statues are made by Carrboro metal sculptor Mike Roig, and the plot of land they’re on is adjacent to his studio and home. I chose the piece Looking up primarily because its texture is more rugged and less shiny than the other pieces. On the day of my first attempt the weather shifted pretty fast and I had to shoot in sunshine, which meant that the back of the sculpture didn’t come out as well in the photographs. On my first attempt I took 42 photos (a little overkill), and as you can see below, parts of the back didn’t really want to come out.





For my next attempt the weather was cloudy, but even so the light didn’t really come out properly, just as with my first attempt. Despite the clouds, I think this one came out even more poorly than my first. Metashape told me that four of my 29 photos could not be properly aligned and had been cut from the rendering; it didn’t give me any more information or clues as to why (I thought I was moving pretty straight and slow, but maybe not). My own guess would be that it also had to do with the light, making it difficult for Metashape to align the photos properly. As you can see, this one could not properly render the back of the sculpture and turned out worse than the first.



Just for fun, I did a version with all 71 photos from both sessions, which you can find below, hoping it might make the model a bit clearer, but I was wrong. As you can see, the back has a hole in it that I couldn’t properly fill. I was surprised that it turned out well surface-wise, as the lighting was very different in the two sets of photos; the front part, I would say, still looks decent.




Apart from my models not turning out perfectly, I found the software, together with the workshop and guidelines provided, comprehensible and easy to understand. It was also a lot of fun to watch, step by step, how all these photos slowly turned into an actual model. I could see this being used in numerous ways and would be excited to try it out myself, even though I am not sure how I would use it just yet. I imagine that troubleshooting, or going deeper into the possibilities of the software, will require a lot of learning, but simply following the software step by step (I’m grateful that you can’t accidentally skip a step in the process, for example) and the guidelines is enough for now to keep making 3D models.

I have for a long time been very interested in the potential of 3D and virtual realities, both personally and academically. Video games are an interest of mine, from an artistic perspective but also, let’s face it, as a favourite activity with my step-kids. In “In the Eyes of the Beholder: Virtual Reality Re-Creations and Academia” (2006), Diane Favro writes about the idea of “edutainment” and about history being used as an added attractive element of video games, present more for dramatic effect than for historical accuracy. Archeology, history (and treasure hunting) have frequently been used as themes within games; today we see popular series such as the Tomb Raider reboot and the recently concluded Uncharted series.

Some elements seen in the 3D visualizations and collections from our readings are remarkably similar in design and interface. Here is a GIF I created from a walkthrough video of the game Uncharted: The Lost Legacy, made by PS4Trophies on YouTube. You play a historian/treasure hunter who along the journey finds small artefacts; you can zoom in on, twist, and turn the three-dimensional object you just picked up. The way of looking at the artefact in the game is remarkably similar to the Smithsonian X3D collection, even though the latter is obviously more detailed and has a minimal background. This may not be surprising, as Naughty Dog, the makers of the Uncharted video games, have disclosed in interviews that they put extensive research into their games, with scholars, archeologists and historians as consultants for most parts of the game. Historical themes within video games have often been portrayed as a “way in” to more serious history, which perhaps may be true, but this is certainly not the main reason for the historical elements, and it has not been properly studied.
















GIF from gameplay of Uncharted: The Lost Legacy, and a still photo from Smithsonian X3D.


As Foni et al. (2010) argue, historical themes in video games lack the accuracy required; whatever accuracy there is comes as a by-product of the game’s dramatized narration. A new subgenre has, however, emerged of what they call “serious games”, which implement video game components but focus on education or training rather than entertainment value. Such virtual realities and interactive story games have come, among others, from small indie producers such as Red Redemption, which in 2011 released the video game Fate of the World, where you are in charge of the earth’s limited resources and all scenarios are based on scientific research.


Virtual realities and games have been used within studies of climate change communication for the last few years, and large claims have been made around scholarly projects. One example is an article from last year in Forbes with the bombastic title Stanford Scientists Use Virtual Reality To Save The Actual World. In this project, about the acidification of our oceans, virtual reality is used as a way to visually communicate data and potential futures.


Stanford's project on ocean acidification using virtual realities.


Artists have taken on the theme as well. Marina Abramović, in her piece Rising (2018), invites visitors into a VR world of rapidly rising sea levels. At the end of Abramović’s artwork, the visitor is immersed in an apocalyptic world, framed as a potential catastrophic future for the planet.



Both Favro and Johanson (2009) debate the uncertainties of attempting to render 3D visuals of historical sites or architecture. Johanson questions what is actually meant by accuracy when it comes to reconstructions, proposing that we see them as knowledge representations rather than mere reconstructions of the past, thereby creating new ways of learning and immersing ourselves in the body of knowledge. This balance and question of accuracy is not confined to historical renderings; it is shared with the battle against uncertainty when visualizing potential futures. No matter how much data or how many calculations are produced, whatever is formed is still just a potential future and cannot be known for certain until it is the present. The more scholarly and educational projects also have to walk the difficult line between being entertaining tools that may reach larger audiences, while still being scholarly and depicting accurate scenarios.

Apocalyptic scenarios have been a common setting since the creation of video games, and I have always been fascinated by these dramatized depictions of what a world would look like after years without human interaction, or after anthropogenic catastrophe. Another of Naughty Dog’s latest games is The Last of Us, known for its stunning landscapes, which portrays a planet Earth that has been without humans for more than twenty years. Another visualization of anthropogenic climate change is Metro 2033, where the world has gone through drastic climate changes due to nuclear accidents.


Landscape shot from the video game The Last of Us.

Landscape shot from the video game Metro 2033.


Visualizations of future climate and our surroundings are incredibly fascinating, and they always make my mind wander: what will the world look like without humans? What do these dramatized visuals from video games, aimed at entertainment, tell us about how we visualize the apocalypse and our future? And what other narratives do the scholarly and artistic projects portray, in contrast to the entertainment formats? As Favro argues, the “re-creations call for a theorization of historical experience”, where re-creation models can explore sensorial experiences beyond sight alone. How could this idea be interpreted not in re-creations of historical sites, but in depicting our potential futures and future climate change? How will it smell, taste, sound? Could further sensorial experiences within virtual realities and visualizations aid scholars’ and artists’ attempts to create understanding or further knowledge production? Even though I find the possibilities thrilling, I also feel hesitant about who the educational projects and artworks are for, but that is probably a blog post of its own, and even further outside the topic of this one.



Above is a timeline about Elsa Schiaparelli, covering her life, career and legacy, made in TimelineJS. Schiaparelli has always fascinated me (and I’m far from alone in my fascination), as she throughout her career walked the line between fashion and art, merging the two in amazing ways. For this timeline I have added tidbits about her life and career, mainly focusing on collaborations and works whose presence we can still see in our times. I had difficulty adding several pictures to one slide, coming to the realization that it was probably not possible (please correct me if I’m wrong!); it would have been really great to place pictures of Schiaparelli’s works side by side with contemporary work where you can see her influence (even within her own revived fashion house). TimelineJS was easy to use, as it works together with a Google spreadsheet, making the posts within your timeline easy to organize. However, they advise against using more than 20 slides, to make it easier for the viewer to grasp and move through the timeline. I would argue that this is a good tip for the maker as well: it got a bit confusing when, at the end of my timeline, I realized I had mixed up a few of the slides chronologically, making it a hassle to go back and work through them to make sure the order was correct. They also advise you to create a timeline that in some form runs chronologically, as it will otherwise be confusing for the reader. This made me instantly think of narrating a person’s life and career with a timeline, as that brings an inherent chronological order.
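For anyone curious about the mechanics: once the Google spreadsheet (built from Knight Lab’s template) is published to the web, embedding the timeline boils down to an iframe pointing at the TimelineJS viewer with the sheet as its source. A minimal sketch, where the spreadsheet ID is a made-up placeholder you would swap for your own:

```html
<!-- TimelineJS embed; "source" is the published Google Sheet (placeholder ID below) -->
<iframe
  src="https://cdn.knightlab.com/libs/timeline3/latest/embed/index.html?source=YOUR_SPREADSHEET_ID&font=Default&lang=en&initial_zoom=2&height=650"
  width="100%" height="650" frameborder="0" allowfullscreen>
</iframe>
```

The same query string is also where the limited styling options live (font set, language, initial zoom), which is part of why deeper aesthetic changes aren’t available through the standard authoring flow.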


The one thing I was apprehensive about is the lack of freedom when it comes to aesthetic adjustments. TimelineJS lets you change the background colour, and offers a few fonts to choose between. I would have liked to change the layout, add pictures more freely, and decide the font colour. However, while writing this, I do understand that part of the accessibility of the relatively easy interface would have been lost if more changes were enabled.