Tony (Antony Vitillo) is the AR/VR consultant who runs Skarred Ghost, one of the top blogs for VR and AR. I met Tony online in 2016, and we have shared our problems and joys since then. I had been invited to speak in Munich and Paris, and hoped that somehow I could finally meet Tony in person, in Europe.
Tony, however, was going to be in China until just before his presentation at VIEW, in Turin, Italy. I had never heard of VIEW; when I looked it up, I immediately wanted to go. But my trip was on a budget far below "low budget". Perhaps I could volunteer, as I would at AWE in Munich. Tony said he would ask the organizers, and that was how we left it.
I arrived in Turin on the morning of Sunday, October 21. I’d taken an overnight bus from Munich; the city felt both friendly and alien. Two officers in a polizia car asked me if I knew where I was going.
Though the officers described the bus and train options very well, I didn't pay much attention. Bubiko and I were going to walk.
And, walk we did.
The address turned out to be VIEW's administrative office, not the venue. I was relieved when someone answered the intercom, and happy when Ricardo, a handsome young man, walked out. He listened attentively to my volunteering idea and seemed genuinely interested in my Bubiko demo. Like the police officers, Ricardo gave me directions to the venue, a place called OGR, which seemed to be near where the bus from Munich had dropped me off. I thanked Ricardo, slung my bag over my shoulder, and began rolling my little suitcase down the road.
A blonde young woman called out to me! She was the driver for OGR, and she was going to the venue. As she drove me to OGR, we talked. She lived outside of Turin, and told me how beautiful the golds and reds of Autumn were beneath the snow-capped peaks of the Alps.
We arrived at OGR, once a train station, now an art center.
I didn’t know what would happen inside, but I pretended that I did.
What will the relationship between WebXR and geospatial data be?
It seems that WebXR cannot entirely ignore geospatial positioning, as geospatial content will be a major use case for mobile AR (at least eventually).
The web already has a geolocation API, but it is not sufficient for these purposes: it gives position but not orientation, is of very poor quality, and is not synchronized with the WebXR frame data. The deviceorientation API cannot be relied on for orientation either: it is of very poor quality, was never standardized (and may be removed from existing browsers), and is also not synchronized with the WebXR frame data.
ARKit offers the option to have its local coordinate system aligned with geospatial orientation (e.g., Y up, Z south, X east). This suggests one possible direction for how geospatial data might be handled: have the WebXR API expose a property that says whether the coordinate frame can be aligned with geospatial EUS coordinates, and provide a way for the developer to request this. Crude but simple geospatial positional alignment (between the user and the local coordinates) is easier if the local device coordinates are guaranteed to be EUS-aligned: each time a geospatial fix is received, the estimate of the geospatial location of the origin of the local coordinate frame can be updated (weighted by the reported error values, etc.). The result will never be more accurate than the geolocation API itself, but it can be stable, because content is located and rendered using the local coordinates, not the slowly changing geolocation values.
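Sketching that update loop helps make the idea concrete. In the snippet below all names are my own, the local frame is assumed EUS-aligned (x = east, y = up, z = south) as described above, and `onGeoFix` would be called with each Geolocation API fix (e.g., from `navigator.geolocation.watchPosition`) together with the device's position in the local frame at that moment:

```javascript
// Maintain a running estimate of the geospatial position of the local
// WebXR origin, assuming the local frame is EUS-aligned
// (x = east, y = up, z = south). All names are illustrative.
const METERS_PER_DEG_LAT = 111320; // rough WGS84 average

let origin = null; // { lat, lon, weight }

function onGeoFix(fix, localPos) {
  // fix: { latitude, longitude, accuracy } from the Geolocation API
  // localPos: device position in the EUS-aligned local frame, meters
  const metersPerDegLon =
    METERS_PER_DEG_LAT * Math.cos((fix.latitude * Math.PI) / 180);
  // Subtract the device's local offset to estimate where the *origin*
  // of the local frame is, geodetically.
  const est = {
    lat: fix.latitude + localPos.z / METERS_PER_DEG_LAT, // +z is south
    lon: fix.longitude - localPos.x / metersPerDegLon,   // +x is east
  };
  const w = 1 / Math.max(fix.accuracy, 1); // weight better fixes more
  if (!origin) {
    origin = { ...est, weight: w };
  } else {
    const t = w / (origin.weight + w);
    origin.lat += (est.lat - origin.lat) * t;
    origin.lon += (est.lon - origin.lon) * t;
    origin.weight += w;
  }
  return origin;
}
```

Because better fixes (smaller `accuracy`) get more weight, the origin estimate stabilizes over time, while rendering continues to use only the stable local coordinates.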
Hi, I am the founder of https://open-arcloud.org/. The TL;DR description of the AR Cloud: a 1:1 digital map/twin of the physical world, stored in the cloud, that enables a shared programmable space attached directly to our physical surroundings, supporting multiuser AR and the persistent, universally consistent placement of virtual assets in the real world.

One of the things we hope can help bring this to reality is a standard definition of geographical position and orientation that can be understood across platforms and applications. We call it "GeoPose". Each AR goggle, AR smartphone, or piece of AR content could have a GeoPose at any given moment. Obtaining the GeoPose of an XR device could be achieved by matching sensor data from the device against the 1:1 map in the cloud through something like a "GeoPose" cloud service. Once the device has its GeoPose, it can display geospatial data and assets that are anchored to a GeoPose. A bit more about that here: https://open-arcloud.org/standards

Please join our Slack channel and chime in on how you think a GeoPose should be defined: https://join.slack.com/t/open-arcloud/shared_invite/enQtMzE4MTc0MTY2NjYwLWIyN2E4YmYxOTA4MWNkZmI5OGQ4Mjg2MGYzNTc4OTRkN2RjZGUxOTc4YjJhOTQ0Nzc3OWMxYTA3ZDMxNGEzMGE

My hope, of course, is that WebXR will support using the device GeoPose and do the proper transforms for assets that are anchored to GeoPoses or described by geospatial coordinates.
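For discussion purposes, a minimal GeoPose might pair a geodetic position with an orientation quaternion. The shape below is purely illustrative (my own names, not the standard's definition):

```javascript
// Illustrative only: one possible in-memory shape for a GeoPose.
// Position is geodetic (WGS84 latitude/longitude in degrees,
// height in meters); orientation is stored as a unit quaternion.
function makeGeoPose(latitude, longitude, height, quaternion) {
  const { x, y, z, w } = quaternion;
  const norm = Math.hypot(x, y, z, w);
  if (norm === 0) throw new Error("orientation quaternion must be non-zero");
  return {
    position: { latitude, longitude, height },
    // Store a normalized copy so consumers can assume a unit quaternion.
    orientation: { x: x / norm, y: y / norm, z: z / norm, w: w / norm },
  };
}

// e.g., a device in central Turin with identity orientation
const pose = makeGeoPose(45.0703, 7.6869, 240, { x: 0, y: 0, z: 0, w: 1 });
```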
TITLE: Geopose Standards Working Group Charter [OGC 19-028]
Author Name(s): Jan-Erik Vinje, Christine Perey, Scott Simmons
CATEGORY: SWG Charter Template
All physical world objects inherently have a geographically-anchored pose. Unfortunately, there is no standard for universally expressing that pose in a manner that can be interpreted and used by modern computing platforms. The main purpose of this SWG will be to develop and propose a standard for the geographically-anchored pose (geopose), with 6 degrees of freedom, referenced to one or more standardized Coordinate Reference Systems (CRSs).
Definition of geopose
A real object in space has three components of translation: up and down (z), left and right (x), and forward and backward (y), and three components of rotation: pitch, roll, and yaw. Hence a real object has six degrees of freedom.
In computer graphics and robotics, the combination of an object's position and orientation with 6 degrees of freedom is usually referred to as the object's "pose." A pose can be expressed relative to other objects and/or to the user. When a pose is defined relative to a geographical frame of reference or coordinate system, it is called a geographically-anchored pose, or geopose for short.
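The equivalence between the two descriptions of orientation can be made concrete: the standard conversion from yaw, pitch, and roll to a unit quaternion (Z-Y-X Tait-Bryan convention; function and variable names are my own) is:

```javascript
// Convert yaw, pitch, roll (radians, applied in that order: yaw about
// the up axis, then pitch, then roll) to a unit quaternion {w, x, y, z}.
function eulerToQuaternion(yaw, pitch, roll) {
  const cy = Math.cos(yaw / 2),   sy = Math.sin(yaw / 2);
  const cp = Math.cos(pitch / 2), sp = Math.sin(pitch / 2);
  const cr = Math.cos(roll / 2),  sr = Math.sin(roll / 2);
  return {
    w: cr * cp * cy + sr * sp * sy,
    x: sr * cp * cy - cr * sp * sy,
    y: cr * sp * cy + sr * cp * sy,
    z: cr * cp * sy - sr * sp * cy,
  };
}
```

Either form carries the same three rotational degrees of freedom; quaternions simply avoid the gimbal-lock ambiguities of Euler angles.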
Uses for geopose
An object with a geopose may be any real physical object, such as an AR display device (a proxy for a user's eyes), a vehicle, a robot, or a park bench. It may also be a digital object, like a BIM model, a computer game asset, the origin and orientation of the local coordinate system of an AR device, or a point-cloud dataset.
When the geoposes of real and virtual objects capture their current positions and orientations in a way that is universally understood, the interactions between objects, and between an object and its location, can be put to many uses. It is also important to note that many objects move with respect to a common frame of reference (and with respect to one another); their positions and orientations can vary over time.
The ability to specify the geopose of any object enables us to represent any object in a universally agreed upon way for real world 3D spatial computing systems, such as those under development for autonomous vehicles or those used by augmented reality (AR) or 3D map solutions. In addition, the pose of any object can be encoded consistently in a digital representation of the physical world or any part therein (i.e., digital twin).
The proposed standard will provide an interoperable way to seamlessly express, record, and share the geopose of objects in an entirely consistent manner across different applications, users, devices, services, and platforms which adopt the standard or are able to translate/exchange the geopose into another CRS.
One example of the benefit of a universally consistent geopose is in a traffic situation. The same real-time geopose of a vehicle could be shared and displayed in different systems including:
- a traffic visualization on a screen in another car,
- shown directly at its physical location in the AR glasses of a pedestrian who is around the corner from the car, or
- in the real time world model used by a delivery robot to help it navigate the world autonomously.
This site contains documentation for web developers using Design Tools to add AR content to a web application.
Argon-aframe — Lesson 7: Geolocation
Key features of augmented reality include (1) the ability to associate data objects with places in the world and (2) the ability to display those objects at those places.
The <ar-geopose> primitive is Argon-aframe's way of locating objects in the physical space of the planet. The <ar-geopose> primitive creates an entity with a referenceframe component, which defines the position and/or rotation of the entity by an LLA (longitude, latitude, altitude), so you can place your object anywhere on Earth with reasonable accuracy by using this tag.
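A page using the primitive might look something like the sketch below. The tag and attribute names follow the text above, but the exact syntax, and the longitude-then-latitude ordering of the lla values, should be checked against the Argon-aframe documentation; the coordinates are illustrative (roughly central Turin):

```html
<ar-scene>
  <!-- Anchor a box to a fixed longitude/latitude on Earth -->
  <ar-geopose id="turin-anchor" lla="7.6869 45.0703">
    <a-box color="#4CC3D9"></a-box>
  </ar-geopose>
</ar-scene>
```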
You can find the longitude and latitude through Google Maps or through a variety of mobile applications. Some apps will also give you the altitude of a location.
Reference Implementations for Conversion Between Geopose and Cartesian Coordinate Systems
There will be a need throughout many parts of the technological ecosystem to convert object poses between a geospatial coordinate system and local ones (typically Cartesian x, y, z in meters, as used in most AR SDKs). This repository is the first step toward creating reference libraries in different programming languages, available for everyone to use for free.
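As one example of what such a reference library would contain, here is a sketch of the geodetic-to-Cartesian direction, converting WGS84 latitude/longitude/height to Earth-Centered, Earth-Fixed (ECEF) coordinates. The function name is my own; a real library would also provide the inverse and local-tangent-plane conversions:

```javascript
// Geodetic (WGS84 latitude/longitude in degrees, height in meters)
// to ECEF Cartesian coordinates in meters.
function geodeticToEcef(latDeg, lonDeg, h) {
  const a = 6378137.0;          // WGS84 semi-major axis, m
  const f = 1 / 298.257223563;  // WGS84 flattening
  const e2 = f * (2 - f);       // first eccentricity squared
  const lat = (latDeg * Math.PI) / 180;
  const lon = (lonDeg * Math.PI) / 180;
  const sinLat = Math.sin(lat);
  const cosLat = Math.cos(lat);
  // Prime vertical radius of curvature at this latitude.
  const N = a / Math.sqrt(1 - e2 * sinLat * sinLat);
  return {
    x: (N + h) * cosLat * Math.cos(lon),
    y: (N + h) * cosLat * Math.sin(lon),
    z: (N * (1 - e2) + h) * sinLat,
  };
}
```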
The following video is a rehearsal. The starting point is a presentation I did in Detroit in July 2019. That presentation was about Autonomous Vehicles and Augmented Reality. Eventually I will have about eight videos based on that Detroit presentation. You can find the Powerpoint for that here.
It will be obvious that I am not the smoothest person ever to be on a stage. I am a photographer, a cinematographer, an artist and a writer; I am used to having time to think about what I am expressing. As I will soon be making presentations in Europe, I am trying to get through the learning curve of being onstage as quickly as I can.
In this presentation, I often refer to the Open Augmented Reality Cloud. If you are serious about AR, you must learn about their work, and download the free State of the Augmented Reality Cloud Report which is available on their website. (I am honored and humbled to say that I will be speaking about Bubiko at the OARC Symposium in Munich on October 16!)
The AR Show produced this informative podcast with Jan-Erik Vinje, Christine Perey, Jason Fox and Colin Steinman, the key contributors to the OARC.
The recent release of citywide 3D mapping technologies allows AR objects to be placed permanently and accurately. Two companies to watch: Scape and 6D.Ai.
With an established background in visual arts, music, the performing arts and AR, I have been waiting for the opportunity to create very large scale AR artworks. Years ago I developed plans for an AR sound artwork for Singapore's Tiong Bahru, an estate composed of historic Art Deco buildings. In 2018, in Shenzhen, a citywide light show rekindled my interest in large scale AR projects.
This year, while preparing for a presentation in Detroit, another idea arrived: an Augmented Reality project to be viewed, and experienced, from inside Autonomous Vehicles: bROADWAY.
Although the ideas behind bROADWAY could work in any city, Detroit is associated with the automobile industry, including AVs. Detroit is the home of legends like Motown, Aretha Franklin, Stevie Wonder, the Jacksons, Detroit techno (Juan Atkins, Kevin Saunderson, Derrick May and more), Bob Seger, Jack White, Alice Cooper and Iggy Pop. These points, plus distinguished architecture and public spaces, mean there could be no better city for the world's first AR+AV citywide experience!
AVs as performers!
If anyone in Detroit knows how to make this project happen, please get in touch.
(And, as AVs are becoming more common worldwide, the idea could happen anywhere.)
This blog post is a sample of the guide, which is available on Amazon.
This is a response to an urgent need to share
Influences: a. “make it fast, not
c. ash from chaos
d. Chungking Express
e. The Zone System by Ansel Adams and Minor White
No person or company has paid me to have their name or product included in this document. I am now notifying the companies and individuals mentioned. If your work is here, and I have not yet contacted you, I apologize. If you prefer that I do not share your work, let me know and I will remove it immediately.
No one has received a promotional copy. If you have bought this, and we meet, I will buy you a beverage. Or two. If you bought this and it seems unlikely that we will meet, I will send you my other ebooks or find a way to make sure this purchase is something you are very happy with. Hint, hint: Bubiko and I are working on an app. (Many times in the past, I have given out free ebooks, only to be surprised that people did not even open them; even my bestseller.)
If you use photos or text from this document, please credit accordingly:
Stephen Black, from Bubiko's Unusual Guide
Stephen Black, from Bubiko's Unusual Guide to AR
Have a nice day.
9/3/2019 4th edition
Anchor The three points used to describe the location, in the real world, where an AR object has been placed. The more one understands about 3D geometry, the more one is prepared for AR. See also billboarding, geopost.
Augmented Driving Will Feel Like, a SXSW presentation by Theo Calvin.
Avatar A digital creation used to represent a person, possibly resembling a human, possibly not.
Augmented Reality 'Augment' means to make something greater; to give it more power. The platform called 'augmented reality' is a network that adds digital information to real objects. Machines are needed to do this, and to see the results. Thus, augmented reality is the real world and the digital information added to it, as well as the machines enabling us to experience both at the same time. See R3.
The digital information could be in the form of a 3D model of a real object made by a computer, like the furniture in AR apps made by Ikea, Wayfair and other companies. Pokemon Go and Snapchat are other examples of AR. However, the digital information could also be live or pre-recorded video, music, podcasts, medical imagery, industrial blueprints, text information or many other types of content.
A phone, tablet, HMD (head mounted device) or eyewear is needed to see/hear/feel the digital content. AR will change everything, even more than radio, television, computers and mobile phones.
Autonomous Vehicles Vehicles capable of sensing their environment, making decisions and navigating without human input. AVs require the safest, most efficient AR data networks possible.
Billboarding The term used to describe a common procedure when positioning models in AR. Billboarding means that the front view of the 3D model faces the viewer. Most AR apps allow the user to rotate the model so that another side of the model is presented to the viewer.
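The procedure can be sketched in a few lines. The axis convention below (y up, the model's front along +z) and the function name are my own assumptions for illustration:

```javascript
// Return the yaw (radians, about the world up/y axis) that turns a
// model at `model` so that its +z "front" faces a camera at `camera`.
function billboardYaw(model, camera) {
  const dx = camera.x - model.x;
  const dz = camera.z - model.z;
  return Math.atan2(dx, dz);
}
```

Applying this yaw every frame keeps the model's front toward the viewer as the viewer moves.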
Bubiko in the starting position.
Rabbit The AR masterpiece of 2019. Occlusion, light estimation, voice commands and Patched Reality and 6d allow you to learn three years' worth of ARness, and see the future.
Bubiko Foodtour AR's first superstar, a character created by Stephen Black and Sayuri Okayama. Bubiko is one of the results of a two-year food/AR research trip in Southeast Asia. Bubiko is a trailblazer who shares her AR experiences with the general public as well as with AR practitioners. Bubiko often forms partnerships, such as one with Green Bean Boy, a character made by Dominique Wu at Hummingbirdsday Studios http://www.hummingbirdsday.com/. Other collaborations are planned with the Dundercats, by Six Cat Studios https://www.sixcatstudios.com/journal/2018/4/24/the-dundercats, and creations by David Severn http://david-severn.com/. The 3D version of Bubiko was created by Novaby. https://www.novaby.com/
Charlie Fink's Metaverse - An AR Enabled Guide to AR & VR
Cloud see R3
Computer vision Computers use lenses, radar and many kinds of sensors to learn about the world. These different ways of "seeing" are often combined with Artificial Intelligence (AI). The end result is that computers recognize objects as well as the many kinds of information connected to them.
Convergence Author and Forbes columnist Charlie Fink tells the story of Augmented Reality (AR), a new technology that's already seeping into every smartphone and every workplace. AR's merger with new 5G and AI technologies will unleash a wave of innovation that will enable wearable, invisible, latency-free and ubiquitous computing. The book uses a kind of mobile AR called "marker AR" to allow readers to use their smartphones to bring pages to life, demonstrating with art and entertainment how the world, and every person, place, and thing, will be painted with data. https://www.amazon.com/Convergence-World-Will-Painted-Data/dp/0578460556/
Depth sensing Recording scenes in three dimensions. See volumetric video.
Events on Hi-Techs4Humans: Workshops, Seminars, Lectures, etc. (Facebook)
Edge computing There are advantages to processing data as close to the user as possible, especially in regards to the Internet of Things. This means, to a large extent, a decentralized system. This localized/decentralized approach is called edge computing.
Color correction was made, in both cases, with the 'auto' function. Spark is brighter than Camera.
Spark allows the camera person to move around the model, as though it were a physical object. Spark also allows the model to be resized. With Facebook Camera, the position and size of the model are locked; only the front of the model can be seen, like a 2D image. Like a postage stamp.
In this case, Dana positioned Bubiko on the bottom right. We checked that this position also works in landscape mode. However, landscape mode seems to be unstable, in my experience. Fb Camera often enlarges and rotates the background scene, like this:
Note the size of the pictures in the last two examples. Spark uses the dimensions of a mobile phone, Camera produces a more conventional image.
Finally: saving images. Spark, in my case, saved images directly into the iPad photo album.
Facebook Camera, on the other hand... unless you actively go into the Facebook Archive feature and turn it on, the images are only stored for 24 hours! I am hoping there is a way to retrieve some images I made, but so far I have not been able to find a way to bring them back. Frustrating, as I am 99.9% sure I pressed "save" in the Facebook Camera app.
All images were created with an iPad.
Happy to reply to any questions or comments. Thanks for stopping by!
I have to thank Dana Moreira S Hagan A LOT. For personal reasons, I unexpectedly found myself in Austin on a Tuesday, needing to have the new version of Bubiko loaded onto my tablet by 1 PM the following Saturday.
On Wednesday, I went to Facebook, presented them with this letter:
Although I was told they would get back to me, they haven't. After constant searching, including meeting the very helpful PJ and Ramona at Impact Hub, I discovered Capital Factory. There were two events going on that night (Thursday), and two people, Brance Hudzietz and Erin Miller, put my request out on two different networks!
Finally, on Friday at about 6, I connected with Erin Ford at General Assembly. Thanks to Erin, at about 7 PM I started a phone conversation with Dana. Dana did not have the Mac-with-Mojave that was so desperately needed, but she did come through with the idea of using Facebook Camera, which was an even better choice for testing.
The goal is to prepare for the possibilities of AR cinema.
This post documents a simple test. The Bubiko model used for the Tech in the Tenderloin event was used. Two locations: a garage and an open piece of land. The objective: to gain an understanding of what a "stage" can be in AR. AR is a new medium; to use the established techniques of theatre, television and movies is to fail to grasp the uniqueness of AR. Performance art and dance provide clues.
Notes: Spark used
Occlusion not a concern at this time
Ambient light a constant
Size and scaling of Bubiko purposely varied
Bubiko was created by Stephen Black and Sayuri Okayama
iPad used; no manual controls or color correction