Hello. Stephen Black here. This is my personal, self-initiated, self-funded and unofficial project to create accurate and simplified definitions of GeoPose. I am doing this because:
I do (worldwide) presentations about AR, and need to be able to explain GeoPose clearly and with technical accuracy. Short definitions increase comprehension, work well in PowerPoint, and save speaking time for other topics.
GeoPose and AR are topics for journalists, bloggers and reporters, not all of whom have technical backgrounds. Clear and simple definitions allow for the maximum flow of accurate information.
Although my ultimate goal is to create AR experiences/produce AR apps, I am now an AR educator/evangelist, speaking mainly to general audiences. It is clear that AR is not easily understood. GeoPose will be easy to use, but, for now, the concept will likely be intimidating to many people.
English is not always the first language within the tech community.
I'm a writer: I like using the fewest words to create the most results.
So... all comments are welcome, and needed. By the time I give my presentation at ARIA/The MIT Media Lab, I hope to present the following ideas with the input and approval of those who are far more knowledgeable than I am.
Of course, once the following are clear, I hope the information can be shared by anyone, anywhere. I will also include the information in my ebook guide to AR, a project which is updated as often as possible.
Thank you for your time and attention to this.
Please comment upon the italicized sentence. Thank you.
The two previous posts on this blog are the result of trying to create a definition of GeoPose. I still have not come up with a twenty-word-or-less definition. However, the following is very helpful towards achieving that goal.
What you are about to read is a reply sent to me by Jan-Erik Vinje, the Managing Director of the Open Augmented Reality Cloud. Jan-Erik and I will both be giving presentations at the 2nd Open Augmented Reality Cloud Symposium in Munich on October 16th.
Download the First State of the AR Cloud report as well. You can find it here.
Jan-Erik Vinje's thoughts on the differences between GPS and GeoPose:
GPS is a specific technology, mostly used for obtaining more or less accurate geospatial positions relative to a geospatial coordinate reference system. Currently, an ellipsoid that approximates the earth is used as the reference for latitude, longitude, and altitude. The ellipsoid is a bit crude, but it normally deviates by less than 100 meters from where the mean ocean surface is, or would have been.
GeoPose is intended to relate to the same type of geospatial reference, but adds geospatial orientation to the geospatial position. All real objects on our planet could, in principle, be said to have both a position and an orientation. GPS provides position. From a stream of GPS positions you could derive a direction vector, which is almost an orientation, but not quite. Imagine walking down the street with a smartphone and collecting a GPS location (lat, lng, alt) every second. You can create a path through space telling you the direction you have moved the phone. What the GPS locations cannot give you is the orientation in which you held your device. Did you point it towards the ground, the sky, or a wall across the street, and, importantly, how was your device rotated around that direction?
For that you would need another system, or set of systems, to provide you with your orientation. AR Cloud visual positioning systems can provide both the position and the orientation of your device. This is what we call a pose. If that position and orientation are in a geospatial frame of reference equivalent to the one used for GPS, one can call the pose a geopose.
(SB: Hmmm, that is great, but the last three lines will require a bit of serious thought... I am not 100% sure I understand. When I do, I hope to make educational drawings or, better, 3D models in AR.)
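To make Jan-Erik's walking example concrete: consecutive GPS fixes determine a travel bearing (a single angle over the ground), but never the device's full three-axis orientation. The sketch below uses the standard great-circle initial-bearing formula; it is my own illustration, not code from the post.

```python
import math

def bearing_deg(lat1, lng1, lat2, lng2):
    """Initial great-circle bearing from fix 1 to fix 2, in degrees
    clockwise from north. Note this is one angle: it says which way
    you moved, not how the phone was tilted or rotated in your hand."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlng = math.radians(lng2 - lng1)
    y = math.sin(dlng) * math.cos(phi2)
    x = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlng))
    return math.degrees(math.atan2(y, x)) % 360.0

# Two fixes taken one second apart while walking roughly east in Munich:
print(round(bearing_deg(48.1371, 11.5753, 48.1371, 11.5755)))  # 90
```

A full pose would replace that single bearing with a 3-DoF rotation (e.g., a quaternion), which GPS alone cannot supply.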
What will the relationship between WebXR and geospatial data be?
It seems that WebXR cannot entirely ignore geospatial positioning, as geospatial content will be a major use case for mobile AR (at least eventually).
The web already has a geolocation API, but it is not sufficient for these purposes: it gives position but not orientation, is of very poor quality, and is not synchronized with the WebXR frame data. The deviceorientation API cannot be relied on for orientation: it is of very poor quality, was never standardized (and is potentially going to be removed from existing browsers), and is also not synchronized with the WebXR frame data.
ARKit offers the option to have its local coordinate system aligned with geospatial orientation (e.g., Y up, Z south, X east). This suggests a possible direction for how geospatial might be handled: have the WebXR API expose a property that says whether the coordinate frame can be aligned with geospatial EUS coordinates, and provide a way for the developer to request this. Crude/simple geospatial positional alignment (between the user and the local coordinates) is easier if you are guaranteed that the local device coordinates are aligned with EUS: each time a geolocation value is received, an estimate of the geospatial location of the origin of the local coordinate frame can be updated (using the device's current local coordinates, error values, etc.). It won't be any better than the error of the geolocation API, but it can be stable (because the local coordinates are used for locating and rendering content, not the very-slowly-changing geolocation values).
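The origin-estimation idea above can be sketched as a small running filter. Everything here is my own illustration under stated assumptions: the class name, the flat-earth conversion from meters to degrees, and the inverse-variance weighting are not part of any WebXR proposal.

```python
import math

EARTH_RADIUS = 6378137.0  # WGS84 semi-major axis, meters

class OriginEstimator:
    """Running, accuracy-weighted estimate of the geodetic location of a
    local EUS coordinate frame's origin, updated from geolocation fixes."""

    def __init__(self):
        self.w = 0.0    # total accumulated weight
        self.lat = 0.0  # current estimate of origin latitude, degrees
        self.lng = 0.0  # current estimate of origin longitude, degrees

    def update(self, fix_lat, fix_lng, accuracy_m, local_x_east, local_z_south):
        # The geolocation fix describes the DEVICE; subtract the device's
        # local offset (EUS: +x east, +z south) to place the ORIGIN.
        deg_per_m = 180.0 / (math.pi * EARTH_RADIUS)
        est_lat = fix_lat + local_z_south * deg_per_m
        est_lng = fix_lng - local_x_east * deg_per_m / math.cos(math.radians(fix_lat))
        # Weight each fix by 1/accuracy^2 so precise fixes dominate.
        wi = 1.0 / (accuracy_m * accuracy_m)
        self.lat = (self.lat * self.w + est_lat * wi) / (self.w + wi)
        self.lng = (self.lng * self.w + est_lng * wi) / (self.w + wi)
        self.w += wi
```

Content is then placed and rendered in the stable local coordinates; only the estimated origin geolocation drifts slowly as better fixes arrive.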
Hi, I am the founder of https://open-arcloud.org/ . The TL;DR description of the AR Cloud: a 1:1 digital map/twin of the physical world, stored in the cloud, that enables a shared programmable space attached directly to our physical surroundings, supporting multiuser AR and persistent, universally consistent placement of virtual assets in the real world.

One of the things we hope can help bring this to reality is a standard definition of geographical position and orientation that can be understood across platforms and applications: what we call "GeoPose". Each AR goggle, AR smartphone, or piece of AR content could have a GeoPose at any given moment. Obtaining the GeoPose of an XR device could be achieved by matching sensor data from the device against the 1:1 map in the cloud through something like a "GeoPose" cloud service. Once the device has its GeoPose, it can display geospatial data and assets that are anchored to a GeoPose. A bit more about that here: https://open-arcloud.org/standards

Please join our Slack channel and chime in on how you think a GeoPose should be defined: https://join.slack.com/t/open-arcloud/shared_invite/enQtMzE4MTc0MTY2NjYwLWIyN2E4YmYxOTA4MWNkZmI5OGQ4Mjg2MGYzNTc4OTRkN2RjZGUxOTc4YjJhOTQ0Nzc3OWMxYTA3ZDMxNGEzMGE

My hope, of course, is that WebXR will support using the device GeoPose and do the proper transforms for assets that are anchored to GeoPoses or that are described by geospatial coordinates.
TITLE: Geopose Standards Working Group Charter [OGC 19-028]
Author Name(s): Jan-Erik Vinje, Christine Perey, Scott Simmons
CATEGORY: SWG Charter Template
All physical world objects inherently have a geographically-anchored pose. Unfortunately, there is no standard for universally expressing that pose in a manner which can be interpreted and used by modern computing platforms. The main purpose of this SWG will be to develop and propose a standard for geographically-anchored pose (geopose) with 6 degrees of freedom referenced to one or more standardized Coordinate Reference Systems (CRSs).
Definition of geopose
A real object in space can have three components of translation – up and down (z), left and right (x), and forward and backward (y) – and three components of rotation – pitch, roll, and yaw. Hence the real object has six degrees of freedom.
The combination of position and orientation with 6 degrees of freedom of objects in computer graphics and robotics is usually referred to as the object's "pose." Pose can be expressed as being in relation to other objects and/or to the user. When a pose is defined relative to a geographical frame of reference or coordinate system, it will be called a geographically-anchored pose, or geopose for short.
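As a minimal sketch of the definition above, a geopose can be modeled as a WGS84 position plus a unit quaternion for the 3-DoF orientation. The field names and layout here are illustrative assumptions, not the encoding the SWG will standardize.

```python
from dataclasses import dataclass

@dataclass
class GeoPose:
    """A 6-DoF geographically-anchored pose: geodetic position
    (3 translational DoF) plus a unit-quaternion orientation
    (3 rotational DoF). Illustrative, not the OGC encoding."""
    lat: float        # latitude, degrees
    lng: float        # longitude, degrees
    alt: float        # meters above the reference ellipsoid
    qx: float = 0.0   # quaternion orientation relative to a
    qy: float = 0.0   # local geospatial frame (e.g., east-up-south)
    qz: float = 0.0
    qw: float = 1.0   # identity rotation by default

# A fixed object such as a park bench gets a geopose just like a device:
bench = GeoPose(lat=48.1486, lng=11.5679, alt=520.0)
print(bench.qw)  # 1.0 -> identity orientation
```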
Uses for geopose
An object with geopose may be any real physical object. This includes objects such as an AR display device (proxy for a user's eyes), a vehicle, a robot, or a park bench. It may also be a digital object like a BIM model, a computer game asset, the origin and orientation of the local coordinate system of an AR device, or a point-cloud dataset.
When the geoposes of real and virtual objects capture the current position and orientation of those objects in a way that is universally understood, the interactions between the objects, and between an object and its location, can be put to many uses. It is also important to note that many objects move with respect to a common frame of reference (and to one another); their positions and orientations can vary over time.
The ability to specify the geopose of any object enables us to represent any object in a universally agreed upon way for real world 3D spatial computing systems, such as those under development for autonomous vehicles or those used by augmented reality (AR) or 3D map solutions. In addition, the pose of any object can be encoded consistently in a digital representation of the physical world or any part thereof (i.e., a digital twin).
The proposed standard will provide an interoperable way to seamlessly express, record, and share the geopose of objects in an entirely consistent manner across different applications, users, devices, services, and platforms which adopt the standard or are able to translate/exchange the geopose into another CRS.
One example of the benefit of a universally consistent geopose is in a traffic situation. The same real-time geopose of a vehicle could be shared and displayed in different systems including:
- a traffic visualization on a screen in another car,
- shown directly at its physical location in the AR glasses of a pedestrian who is around the corner from the car, or
- in the real time world model used by a delivery robot to help it navigate the world autonomously.
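The traffic example above is, at bottom, an interchange problem: the same geopose record must be consumable by a dashboard, a pedestrian's AR glasses, and a robot. A hypothetical JSON message is sketched below; the field names are my own assumptions, since the SWG had not yet defined an encoding.

```python
import json

# Hypothetical interchange record: a vehicle's real-time geopose as
# JSON, so any subscribed system can consume the same message.
vehicle_geopose = {
    "id": "vehicle-42",
    "timestamp_ms": 1571222400000,
    "position": {"lat": 48.1374, "lng": 11.5755, "alt": 519.3},
    "quaternion": {"x": 0.0, "y": 0.0, "z": 0.7071, "w": 0.7071},
}

message = json.dumps(vehicle_geopose)   # what the vehicle publishes
decoded = json.loads(message)           # what each consumer receives
print(decoded["position"]["lat"])  # 48.1374
```

A standard would pin down exactly these details: which CRS the position uses, how orientation is represented, and how time is stamped.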
This site contains documentation for web developers using Design Tools to add AR content to a web application.
Argon-aframe — Lesson 7: Geolocation
Key features of augmented reality include (1) the ability to associate data objects with places in the world and (2) the ability to display those objects at those places.
The <ar-geopose> primitive is Argon-aframe’s way of locating objects in physical space (of the planet). The <ar-geopose> primitive creates an entity with a referenceframe component. This component defines the position and/or rotation of the entity by an LLA (longitude, latitude, altitude), so you can locate your object anywhere on the earth relatively accurately by using this tag.
You can find the longitude and latitude through Google Maps or through a variety of mobile applications. Some apps will also give you the altitude of a location.
Reference Implementations for Conversion Between Geopose and Cartesian Coordinate Systems
There will be a need throughout many parts of the technological ecosystem to convert object poses between a geospatial coordinate system and local ones (typically a Cartesian x, y, z frame in meters, as used in most AR SDKs). This repository is the first step toward creating reference libraries in different programming languages, available for everyone to use for free.
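The simplest such conversion is geodetic (lat, lng, alt) to Earth-Centered, Earth-Fixed Cartesian coordinates. The sketch below uses the standard WGS84 formula; it is my own illustration, not one of the repository's actual libraries.

```python
import math

# WGS84 ellipsoid constants
A = 6378137.0                # semi-major axis, meters
F = 1.0 / 298.257223563      # flattening
E2 = F * (2.0 - F)           # first eccentricity squared

def lla_to_ecef(lat_deg, lng_deg, alt_m):
    """Convert WGS84 geodetic coordinates to Earth-Centered,
    Earth-Fixed Cartesian (x, y, z) in meters."""
    lat = math.radians(lat_deg)
    lng = math.radians(lng_deg)
    # Prime vertical radius of curvature at this latitude:
    n = A / math.sqrt(1.0 - E2 * math.sin(lat) ** 2)
    x = (n + alt_m) * math.cos(lat) * math.cos(lng)
    y = (n + alt_m) * math.cos(lat) * math.sin(lng)
    z = (n * (1.0 - E2) + alt_m) * math.sin(lat)
    return x, y, z

# Sanity check: a point on the equator at the prime meridian sits one
# semi-major axis from the earth's center along +x.
print(lla_to_ecef(0.0, 0.0, 0.0))  # (6378137.0, 0.0, 0.0)
```

A full reference library would also provide the inverse (ECEF to geodetic) and local east-north-up or east-up-south frames; those are the transforms an AR SDK needs to anchor content by geopose.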