Danny Dimian is a visual effects supervisor at Sony Pictures Imageworks, most recently supervising the Academy Award-winning SPIDER-MAN: INTO THE SPIDER-VERSE for Sony Pictures Animation.
His presentation was proof that daring art can happen in large institutions. Yes, Sony wanted a new stylized look for the Spider-Verse movie(s). However, this desire for a new look very often results in simply hiring a trendy director or VFX person who then spends the money on the flavor-of-the-month gear and software. The result can look dated very quickly.
Danny's team went retro: they researched old comic books and identified the techniques (and flaws) that make them so distinctive, halftone dots and misaligned printing being two examples. They also collected the words, exclamation points and marks that add impact.
They then used these as the basis for experimentation, and also brought in painters to create stylistic possibilities. In short, a lot of trial and error was involved in bringing the look of old printing into the age of 8K.
The Women in Animation panel was serious, yet insightful and lighthearted. Hopefully the unwanted challenges described will become nonexistent for the next generation of women, and the opportunities will increase.
It was unexpected, suddenly having a brief discussion about cancer treatments with Professor Daniel Zajfman, the President of the Weizmann Institute of Science. Unplanned as well; Professor Zajfman was being interviewed right in front of me. What I happened to overhear was a phrase that went something like this: “Historically, cancer was perceived as one disease. Eventually, we understood that there are many kinds of cancer. Now, we realize that cancer, just like every living being, seems to be unique.”
At first, the presence of Professor Zajfman at VIEW was surprising. The VIEW website is a striking collection of the latest animated movies, CG creations and award-winning Hollywood technical talents. Why would a physicist whose career focuses on atomic and molecular physics be making a presentation? He seeks to understand the astrophysical conditions found in star-forming regions, as well as working to solve the riddle of star formation. Then again, VIEW also focuses on exploring the increasingly fluid boundary between real and digital worlds. Professor Zajfman listened attentively, gave me his card, and that was that. Unfortunately, I had no chance to attend his talk, which was entitled What is Why and Why is it Important?
A woman warrior riding upon a flying dragon the size of a 747.
A rat who cooks in a three star restaurant in Paris.
A mouse whose best friends are a dog and a duck.
The tortured soul of a giant robot.
A captured clownfish.
Animation is the art form in which ideas like these become the foundations of global economic powerhouses.
Besides toys, the animation industry sells food products, books, health products, clothing, banking services and more. At VIEW 2019, I was very fortunate to listen to, and interact with, some of the most successful men and women in contemporary animation.
It is difficult to say who impressed me the most, but Thomas Schelesny would be at the top of the list. His presentation about the dragons created for Game of Thrones was extremely impressive.
However, after his talk he spoke to a few people, including students. He discussed the idea of being an artist. Egos were discussed. Mr. Schelesny did not mention his Emmy. What he did speak about was the professionalism needed to keep things moving. The dragons of GoT were an extremely expensive undertaking that required seamless interaction between a number of companies all over the world. These companies, for the most part, rarely work with other companies. The production schedule was dangerously short.
Mr. Schelesny exemplified the traits that are usually found in the best soldiers fighting the worst wars. He was a leader who stayed focused and thought of all members on his team.
Those watching Game of Thrones saw a beautifully choreographed and realistic battle of flying dragons. Those aware of what was happening behind the scenes, with Thomas and Image Engine Design, saw a very different battle; one equally thrilling, but real.
The second of a series about the espresso culture at Papi y Papi, a retail shop in Natchez, Mississippi specializing in coffee, cigars and chocolate. The first post is here.
Background of the water used at Papi y Papi
The water of Natchez is tested daily in regards to pH, chlorine and fluoride. In 2013, Natchez Water Works, owned by the City of Natchez, won an award for the Best Water in Mississippi. The water is from four wells in the Lower Catahoula Formation Aquifer and one well in the Catahoula Formation Aquifer. The water at Papi is naturally low in calcium, which minimizes scaling. Water is complex, and rarely possesses the same characteristics over long periods of time. It can change seasonally or because of man-made changes that may not be noticeable unless testing is in place. The water at Papi y Papi was tested and found to be of very good quality, although very slightly alkaline.
Regardless of water quality, every coffee machine requires a filter, and every water filter requires rinsing before it is used. Not doing this important step means that residue or loose fibers can damage the machine and void the warranty. Improper water treatment resulting from poor filtration is the most frequent cause of espresso machine damage.
Papi uses the SX2-21 Everpure water filter, which reduces sediment down to 0.5 microns and reduces chlorine, taste and odor at a rate of 1.5 gpm for 15,000 gallons. Although the water of Natchez is noted for its low calcium carbonate, Papi also uses a ScaleX2, a chemical-free technology that inhibits the formation of scale. Untreated scale can build up, making the inside of a machine look like a cave full of white stalagmites and stalactites!
The Slayer EP Steamer
From the Slayer website:
Unlike the classic paddle system featured in the current Slayer Steam X model to activate volumetrics, which Maico describes as “a programmed electronic dosing of espresso output”, the Steam EP features nine-bar pump extraction with push-button volumetric activation. This is controlled with two push buttons that present four programmable settings per group. Users can customize different volumetric selections, for example a double ristretto and double espresso, or a group head flush and manual extraction. It also includes the option to activate a pre-wetting function. With this setting, much like pre-infusion, users can pre-wet the coffee, like blooming, for zero to four seconds prior to extraction, and determine how long to delay before the extraction starts. This customizable feature allows the barista to control which specific characteristics of the coffee to extract. From what Dub has said to me and to customers:
Espresso: the Coffee and the Dosing
Coffee: Honduran lightly roasted
Dosage: 30 grams (subject to change)
Prewet: 4 seconds
Extraction time: 35 seconds
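For my own notes, the current recipe can be captured as a simple data structure. This is a hypothetical sketch of my own (the field names are mine, not anything from the Slayer software), useful for tracking the recipe as it changes:

```python
# A hypothetical sketch (my own notation, not part of the Slayer software)
# capturing Dub's current espresso recipe so changes can be tracked over time.
recipe = {
    "coffee": "Honduran, lightly roasted",
    "dose_grams": 30,          # subject to change
    "prewet_seconds": 4,       # pre-wetting ("blooming") before extraction
    "extraction_seconds": 35,
}

# Total time water is in contact with the puck under this recipe:
total_seconds = recipe["prewet_seconds"] + recipe["extraction_seconds"]
print(total_seconds)  # 39
```

Keeping the numbers in one place like this makes it easy to log how the recipe drifts as variables like humidity force adjustments.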
The process and specs are constantly being checked and refined. Variables like barometric pressure and humidity require constant monitoring and adjustment.
The Slayer, with its programmable controls and consistent heat management, allows a barista to fine-tune variables as efficiently as possible. With the Slayer, a barista can create, as much as possible, a signature style, and most importantly, do so with consistency.
This is a collection of notes and technical information about a master coffee maker gaining experience with the Slayer EP Steam, one of the world’s finest espresso machines. This is part documentation of Dub Rogers, part research paper on how to make outstanding espresso. I am not a coffee expert, though I do enjoy researching coffee. I plan on having my own brand of coffee soon.
This is being written and researched at Papi y Papi, in Natchez, Mississippi.
Papi y Papi Background: Specializing in the highest quality coffee, chocolate and cigars, Papi y Papi is a retail store located in downtown Natchez, Mississippi. The barista at Papi is Dub Rogers. Papi y Papi features Steampunk Roasted Coffee. More than just a retail store, Papi y Papi celebrates and shares the cultures and traditions that gave the world coffee, cigars and chocolate.
Before the Slayer arrived, Steampunk’s espresso roasts were calibrated to be made with an Elektra Belle Epoque, a machine in a class all its own.
Background of Dub Rogers: Dub Rogers created, and then successfully sold, both Steampunk Espresso Bar and Smoots Blues Lounge. Both became internationally recognized for their service, their unique and welcoming atmosphere and, of course, for their coffee.
Dub studied the art of espresso in Milan and Seattle, and has won various prizes. He was certified as America's Best Espresso Judge at the 2017 America's Best Espresso Competition.
Before returning to Natchez, his hometown, Dub was a world acclaimed photographer, based in New York City.
I met Jim Simons, the force behind Tranzient, at the November 2019 edition of the London Augmenting Reality Meetup. He did a live demo of the Tranzient VR music-making app. Not only did the app perform beautifully, Jim composed some great little beats; on the spot, of course.
The Tranzient app allows musicians to collaborate in VR, in real time. Tranzient is impressive on many levels. I wondered if Jim would consider adding Bubiko to his "band".
Jim was open to the idea!
What you see above is less than an hour's work. Now that we know Bubiko is functional within Tranzient, we can literally play around.
Tranzient (https://www.aliveintech.com/) is highly recommended. I just met Jim, but he is genuinely enthusiastic about making music in VR. If you are a musician working in VR, you should definitely try Tranzient. Jim is a bit mysterious about his musical past, but I am sure he was involved with some great projects. He sure makes Bubiko look good!
This version of Bubiko was made by Novaby, art directed by Stephen Black and Sayuri Okayama.
Tony, Antony Vitillo, is the AR/VR consultant who runs Skarred Ghost, one of the top blogs for VR and AR. I met Tony online in 2016, and we have shared our problems and joys since then. I was invited to speak in Munich and Paris, and hoped that somehow I could finally meet Tony in person, in Europe.
However, Tony was going to be in China until he returned to Italy, to give a presentation at VIEW, in Torino, Italy. I had never heard of VIEW. When I saw it, I immediately wanted to go. But my trip was on a budget far below “low budget”. Perhaps I could volunteer, like I would do at AWE, in Munich. Tony said he would ask the organizers, and that was how we left it.
I arrived in Turin on the morning of Sunday, October 21. I’d taken an overnight bus from Munich; the city felt both friendly and alien. Two officers in a polizia car asked me if I knew where I was going.
Though the officers described the bus and train options very well, I didn’t pay too much attention. Bubiko and I were going to walk.
And, walk we did.
The address turned out to be the administrative office for VIEW, not the venue. I was relieved when someone answered the intercom, and happy when Ricardo, a handsome young man, walked out. He attentively listened to my volunteering idea and seemed genuinely interested in my Bubiko demo. Like the polizia, Ricardo gave me directions to the venue, a place called OGR, which seemed to be near where the bus from Munich had dropped me off. I thanked Ricardo, slung my bag over my shoulder, and began rolling my little suitcase down the road.
A blonde young woman called out to me! She was the driver for OGR, and she was going to the venue. As she drove me to OGR, we talked. She lived outside of Turin, and told me how beautiful the golds and reds of Autumn were beneath the snow-capped peaks of the Alps.
We arrived at OGR, once a train station, now an art center.
I didn’t know what would happen inside, but I pretended that I did.
This post provides notes on the above PowerPoint. The actual presentation elaborated on the points much more than I do here. Please feel free to ask questions. Also, as always, I recommend joining the Open Augmented Reality Cloud. They are working to create open, interoperable standards to encourage the growth, use and understanding of AR.
And yes, another resource is Bubiko Foodtour's Unusual Guide to Augmented Reality, which is reviewed here.
The following presentation uses references from the State of the AR Cloud Report published May 28, 2019 by the Open AR Cloud. All rights reserved. http://stateofthearcloud.com/ With the exception of Novaby, the 3D model making company, I have no relationships with any companies mentioned in this presentation. This research has been self-funded.
The above link exemplifies a lot of the ideas about the future of AR in general, as well as about the potential applications of AR and AV (Autonomous Vehicles). The entire article is good, but the embedded video is excellent.
I cannot recommend the work of the OARC highly enough. Visit their site for the download of the first OARC Symposium Report.
8. www.blacksteps.tv At least three posts about 'geopose'
Yes, on this blog, I presently have a few posts about 'geopose'.
9. From correspondence with Jan-Erik Vinje, Managing Director of Open Augmented Reality Cloud:
○ If we are successful in creating such a standard we will in effect have created the equivalent of the URL for real world spatial computing. Allowing the geopose of both real and virtual objects to be universally captured, stored, shared and understood.
This comment by Jan-Erik, and those that follow, are self-explanatory. However, one week ago, at the second OARC Symposium, the definition of 'geopose' was discussed at length. In fact, the goal of the OARC for the next year is to focus upon creating a final definition of 'geopose', and to work towards a working model.
10. ○ I liken it to URL because it can be seen as a link between the physical world and the digital world. ○ URL links information to more information. Geopose links the world to digital information and digital information to the world.
11. ● It can even be used to create a "persistent portal" between a physical space and a virtual space. Where people from across the world who are experiencing a virtual space can go through such a portal to experience a real space (by streaming realtime reality capture data to all those in the virtual reality space).
12. ● At the same time people in the real world can walk into the virtual reality space or see virtual objects, scenes and avatars of virtual reality users projected into their physical space using AR.
14. To create a GEOPOSE: 1. Physical space owner 2. AR data / SLAM (simultaneous localization and mapping) 3. AR Cloud space 4. End user(s)
This slide features a photo of the Fox Theatre in Detroit. The front of the theatre is copyrighted. Anyone planning to use the front of the Fox would need to obtain permission to do so.
This situation would be true for many buildings, copyrighted or not.
15. How to find the location? (the need for interoperability): Google Maps; Mapillary; military maps; LADOT e-bike monitoring maps (e.g.); satellite-based maps monitoring floods, snow, ice, fire, crowds, etc.; city/state/federal maps; company-owned routing maps; bus routes; ride-sharing maps; Superworld; emergency routes; bike lanes; construction; traffic signals
The companies and organizations listed above would all benefit from an open AR Cloud. A comparison is sometimes made between the present state of the AR Cloud and the early, nonstandardized gauges of railroad tracks. Once the railroad tracks were standardized, society benefited.
Once there are interoperable standards for the AR Cloud, groups like those listed above will benefit, as will education, medicine, science and, indeed, all of society.
I arrived in New Orleans on the morning of October 1, 2019. The very air-conditioned overnight bus ride from Austin resulted in a bit of a raspy throat, but overall, I was excited. At 9AM, I was to meet with Charles Carriere, the President and CEO of Scandy, a company working with cutting edge 3D ideas, like scanning and volumetric video.
Charles was friendly and informative. Our conversation was stimulating. At one point he pulled out his iPhone, waved it around me, and then pushed a button. Voila: my first portrait in 3D styleee!
My takeaway: AR will be defined by 3D objects. Yes, I knew this before, but now I have internalized that fact. Scandy's work is hugely important.
The two previous posts on this blog are the result of trying to create a definition of GeoPose. I still have not come up with a twenty-word-or-less definition. However, the following is very helpful towards achieving that goal.
What you are about to read is a reply sent to me by Jan-Erik Vinje, the Managing Director of the Open Augmented Reality Cloud. Jan-Erik and I will both be giving presentations at the 2nd Open Augmented Reality Cloud Symposium in Munich on October 16th.
Download the First State of the AR Cloud report as well. You can find it here.
Jan-Erik Vinje's thoughts on the differences between GPS and GeoPose:
GPS is a specific technology, and it is mostly used for obtaining more or less accurate geospatial positions relative to a geospatial coordinate reference system. Currently, an ellipsoid that approximates the earth is used as the reference for altitude. The ellipsoid is a bit crude, but it is normally off by less than 100 meters as a representation of where the mean ocean surface is or would have been.
GeoPose is intended to relate to the same type of geospatial reference, but adds geospatial orientation to the geospatial position. All real objects on our planet could in principle be said to have both a position and an orientation. GPS provides position. If you have a stream of GPS positions, you could derive a direction vector that is almost an orientation, but not quite. Imagine walking down the street with a smartphone, collecting a GPS location (lat, lng, alt) every second. You can create a path through space telling you the direction you have moved the phone. What the GPS locations cannot give you is the orientation in which you held your device. Did you point it towards the ground, to the sky, or towards a wall across the street, and, importantly, how was your device rotated around that direction?
For that you would need another system or set of systems to provide you with your orientation. AR Cloud visual positioning systems can provide both the position and the orientation of your device. This is what we call a pose. If that position and orientation is in a geospatial frame of reference equivalent to the one used for GPS one can call the pose a geopose.
(SB: Hmmm, that is great, but the last three lines will require a bit of serious thought... I am not 100% clear that I understand. When I do, I hope to make educational drawings, or better, 3D models, in AR.)
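In the meantime, here is a minimal sketch of my own of Jan-Erik's distinction (this is purely illustrative; it is NOT the OARC/OGC standard, and the field names and quaternion convention are my assumptions): a stream of GPS fixes can give you a direction of travel, but a geopose adds the device's full orientation.

```python
# Illustrative sketch only (my own, NOT the OARC/OGC geopose standard):
# a GPS fix is position; a geopose is position plus orientation.
import math
from dataclasses import dataclass

@dataclass
class GeoPosition:            # what GPS gives you
    lat: float                # degrees
    lng: float                # degrees
    alt: float                # meters above the reference ellipsoid

@dataclass
class GeoPose(GeoPosition):   # position plus orientation
    # Orientation as a unit quaternion (one common convention; the
    # actual standard may choose a different representation).
    qx: float = 0.0
    qy: float = 0.0
    qz: float = 0.0
    qw: float = 1.0

def initial_bearing(a: GeoPosition, b: GeoPosition) -> float:
    """Compass bearing in degrees from a to b: the 'almost an orientation'
    derivable from successive GPS fixes. It says nothing about whether the
    device pointed at the ground, the sky, or a wall."""
    p1, p2 = math.radians(a.lat), math.radians(b.lat)
    dl = math.radians(b.lng - a.lng)
    x = math.sin(dl) * math.cos(p2)
    y = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    return math.degrees(math.atan2(x, y)) % 360

# Two consecutive fixes while walking east along the equator:
print(round(initial_bearing(GeoPosition(0, 0, 0), GeoPosition(0, 0.001, 0)), 6))  # 90.0
```

The point of the sketch: `initial_bearing` is all GPS alone can offer, while the quaternion fields in `GeoPose` carry the extra orientation that a visual positioning system would supply.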
Jan-Erik Vinje, the Managing Director of the Open Augmented Reality Cloud, responded to my previous post about a definition of geopose. His reply is reproduced with his permission. Thanks to Jan-Erik and all of the team involved in this project. The following exemplifies all of the diligent hard work going on to create standards for AR.
Thanks 🙂 If you'd like to update the geopose page: we have made some headway with our partner, the Open Geospatial Consortium (a standards development organization in the geospatial sector).
Open AR Cloud put together a draft for a Standards Working Group for geopose at the Open Geospatial Consortium. It has been out on public hearing for a while and is currently being voted over by members.
If the vote succeeds, an official Standards Working Group will be created in the Open Geospatial Consortium (OGC) to develop the geopose standard.
If we are successful in creating such a standard we will in effect have created the equivalent of the URL for real world spatial computing. Allowing the geopose of both real and virtual objects to be universally captured, stored, shared and understood.
I liken it to URL because it can be seen as a link between the physical world and the digital world.
URL links information to more information. Geopose links the world to digital information and digital information to the world.
It can even be used to create a "persistent portal" between a physical space and a virtual space. Where people from across the world who are experiencing a virtual space can go through such a portal to experience a real space (by streaming realtime reality capture data to all those in the virtual reality space). At the same time people in the real world can walk into the virtual reality space or see virtual objects, scenes and avatars of virtual reality users projected into their physical space using AR.
Stephen Black: There is so much information here, that I need to review it carefully. It will be a wonderful challenge to come up with a definition of 'geopose' that is 20 words or less!