Would you like to support THE GREATEST AR TOUR IN HISTORY? Do you want to learn the latest news from AWE, Austin, Japan, Detroit, Turin, Munich and Shenzhen!?
Stephen Black and Bubiko Foodtour are about to go global: learning, educating and networking at some of the biggest AR events in the world. We just need a little financial fuel... we are movin’ and groovin’, but our app isn’t out yet! Startup blues!
BONNETS! (My PowerPoint looks extremely low tech. As much as I like this look, I have to say I had no choice. I was using my Chromebook and the baked-in slide show creator. Next time, I might actually have the time to do some design work.)
DC Rainmaker does his usual outstanding job of reviewing a bicycle-related product. Jump into the video at about the four-minute mark to see some exemplary AR techniques. Click here to see more examples and ideas related to AR + bicycles.
Cannondale, for the win! A great example of functional AR in an everyday situation, as opposed to a factory or medical facility.
From a presentation I did on AR and bike safety. All of those concepts need to be unified and rethought for the age of AR.
An example of an organization that has information that would be useful for AR in Detroit. Geographic Information Services... How much of what is under the road do they have records of? This info would be necessary for many AR services.
How do they make point clouds? Or do they even do this yet?
One of Magic Leap's views on how the total AR world might look.
The layers presented include IoT (the Internet of Things) as well as AI. Photo courtesy of Tony at the Skarred Ghost, another person I suggest serious AR/VR people support and follow.
Another presentation from Magic Leap on how the total AR Cloud might look. They use the term "Magicverse".
A representation by the OARC of what the layers of the cloud would be. My suggested terms would be REAL1 (R1), REAL2 (R2) and REAL3 (R3). R1 = the physical world. R2 = the layer with few changes: buildings, landmarks, geography. R3 = the part of the AR cloud that changes the most, and has the most segmentation.
This system also lends itself to further classification. R2C22, for example, could refer to a specific block in Chicago, and R2C22E could be the collective "channel" for all organizations utilizing traffic emergency communications for that area.
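The scheme above could be handled by a tiny parser. This is purely illustrative: the code format, the regular expression and the function name are my own guesses at how such codes might be structured, not any existing standard.

```python
import re

# Proposed layer codes: a layer (R1, R2 or R3), an optional area code
# (a letter plus digits, e.g. "C22" for a block in Chicago), and an
# optional single-letter channel suffix (e.g. "E" for emergency traffic).
CODE_RE = re.compile(r"^(R[123])([A-Z]\d+)?([A-Z])?$")

def parse_layer_code(code):
    """Split a code like 'R2C22E' into (layer, area, channel)."""
    match = CODE_RE.match(code)
    if not match:
        raise ValueError(f"not a recognizable layer code: {code!r}")
    return match.groups()
```

For example, parse_layer_code("R2C22E") yields ("R2", "C22", "E"), while parse_layer_code("R1") yields ("R1", None, None).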
Microsoft Azure Spatial Anchors. 'Anchor systems' refers to geoposing. There can be an endless number of clouds. Interoperability... Machines reading the world for humans... A browser... AI-assisted browsers.
This is from the State of the Open AR Cloud 2019 report by the Open Augmented Reality Cloud group. Already some governing bodies are being formed.
Shenzhen's celebration used the entire city as a canvas. Something like this could be done in Detroit with AR, for much less cost, and with greater detail. AVs could travel on programmed routes... a new form of musical is born!
My report on the maker culture of Shenzhen is here.
The Fox Theatre is one of Detroit's cultural assets that could be utilized. This slide shows some of the layers of creation and co-operation that need to be considered.
Back to bROADWAY… and we have "discovered" that the Fox Theatre is geofenced. Geofencing blocks AR access in designated areas: military bases, sensitive areas, the interiors of homes. The Fox Theatre is copyrighted, I believe. In this fictional example, Fox may have geofenced their theatre to prevent unauthorized AR usage.
This is outdated. For my startup plans, write to me directly at bubikofoodtour at mark gmail.com. Please look for more recent posts. This, for example, is an overview of the first three months of AR activities in 2019. Thank you. SB
Schedule now being determined; planned stops include: Boston MA (ARIA/MIT), NYC, Washington D.C., Rochester NY, Toledo OH, Detroit, Ann Arbor MI, Chicago, Natchez MS, Denver CO, Boulder CO, New Orleans, Austin (SXSW), Los Angeles, San Francisco (GDC)
A graduate of the Rochester Institute of Technology and a director/writer/producer for Cartoon Network, Fox, Fuji TV and CNN, Stephen Black was involved with a $3.2M gaming startup from 2002 to 2007. He then became a writer/artist while waiting for mobile spatial computing (AR/VR) to mature.
Now is the time to produce AR content for industry, art and entertainment, and games.
I have been involved with games/spatial computing since 2002, and am now ready to launch an AR startup that involves games, filmmaking and video/content production. If you or someone you know is interested in investing, I hope we can talk.
I can share the successful pitch I used at TechCrunch Shenzhen a week ago. As positive as the response was, it never hurts to share a vision, especially if that vision includes a unicorn. onwARd, Stephen Black
PS The flyer below is a bit out of date; the startup idea mentioned below includes games.
This blog has many posts about AR and startup plans, all of which are outdated, but do provide insights.
Hello and welcome. My name is Stephen Black and I work with media, words and art.
Media: VR, computer-generated environments, video and photography.
Words: articles and books, including Bali Wave Ghost, I Ate Tiong Bahru (a national bestseller in Singapore), Tiong Bahru Mouth, Obama Search Words and a few others.
This post gives you some idea of my current projects.
Thank you for stopping by.
Ai Weiwei, Yesim Agaoglu, Stephen Black, Eugene Soh collaboration in gallery.sg
Works created with ink
If one person can be said to symbolize the Tiong Bahru Market, it might be this guy in the hat...
THIS post was written before SPOKEN started. SPOKEN is a project I am doing with Eugene Soh, an experiment in which art, text, virtual reality and social media intersect. Learn about SPOKEN here.
To enter gallery.sg and experience SPOKEN, click here.
Here is the extremely short version: from 2002 until 2008 I was involved with a visionary virtual reality project that combined educational practices with gamemaking and multimedia. However, the spiritual captain of the project was part Disney, part Microsoft and part Sex Pistols. An unexpected death, treachery, incompetence, inexperience, bureaucratic boondoggles and more made my life "interesting"... I have pages of notes about the events and the spirit of the times. We were doing things like YouTube and Second Life before they started. The project was so close... and yet so far away. (Actually, now that mobile technology has settled down, I hope that the lessons and products of that experience can be revitalized. But that's yet another story.)
In 2006, as a way of showcasing our technology, I entered a gamemaking hackathon. A theme was given on a Thursday morning and the next day at five the results were judged. The theme was something about healthy eating, I think. Working with my programmer in Hong Kong, we made a game in which the player learned about the calorie count of certain foods. The player competed with an AI character. It was fun to do.
However, what I remember most was a team that made an incredible flash game. I don't remember if they won or not, but I do remember two guys on that team very well. George Parel and Eugene Soh were full of energy, knowledge and bursting with creative ideas.
They still are.
I've been lucky to work with George and Eugene on a few projects since then. In 2008 I finally had to put the educational gamemaking project on hold while I waited for a programmer and the mobile device situation to stabilize. I put more time into writing and art projects. George and Eugene (that dude from Singapore), however, have kept on doing remarkably creative things with IT, art, design and more.
Virtual reality may seem to be an artificial place, but the gallery Eugene has created fills me with memories and hope. I am honored and very thankful that I Ate Tiong Bahru is on display in gallery.sg
This informal essay is my way of marking the end of a certain era in ebook history. It's part snapshot, part reference material, part journal. At the end of this post are notes about me, my experiences and my books.
Thanks to Doug Rolph for his insights on economics, Eric Hellman for his input and my dad for having taken care of our family by selling books.
το πνεύμα του Ιανού (the spirit of Janus)
After I finish writing eight books, I will begin marketing. Until then, I'll probably study the ebook world less and hopefully do more writing, arting and engaging with Life. When it does come time for me to contribute to the marketing conversation, I hope I have something to say. For now, I present the following notes, quotes and thoughts as a means of punctuating a phase in the development of ebooks as I have seen and experienced it.
This is an exciting time. The ebook delivery platforms are finally stable, self-publishing has proven to have great value and a number of services have recently appeared that shorten the distances between readers and authors. It seems to me that indie ebooks and ebook marketing are about to enter a new era.
This blog post makes little mention of traditional publishing. This is simply because, as much as I would like to enjoy the benefits of being a Big 5/6 author, that fruit is not now within my reach. I am, however, considering joining the Authors Guild.
Although I've done almost no marketing, I have studied the environments in which ebooks are created, presented, bought and sold. Some observations:
1. Except for uploading, nothing about ebooks is easy.
Writing is the anti-social social media, full of long, long hours of pressure-filled solitude. Assembling an error-free book is never simple. The social part, finding an audience, is an immense challenge. I respect all of the authors mentioned in this post for they have successfully met these challenges and more.
"Talent is cheaper than table salt. What separates the talented individual from the successful one is a lot of hard work." (Stephen King)
Based on my experiences, the work breakdown of an 88,000 word novel looks something like this: 1000 words a day (88 days) or, more likely, 500 words a day (176 days). Call it 200 days to prepare something for a proofreader. Two months for corrections, art, and ebook conversion. So, a book takes about 300 working days to finalize. About...
And then there are the thousands of actions needed to connect with readers... The title of Guy Kawasaki's excellent book says it all: APE, meaning Author, Publisher, Entrepreneur.

This document, by Mark Coker, the founder of Smashwords, is a data-based analysis of the ebook market. Highly recommended, it covers topics important for newbies and veterans: word count, pricing, marketing and more. For instance, his research shows that the average bestseller on Smashwords is 100,000 words, and the average romance is 112,195 words. (There are more links to resources at the end of this post.)
Very few traditionally published authors become bestsellers; the same is true in ebook publishing. My goal is not to become a bestseller, but to connect with the largest possible community of people who enjoy the art of reading.
2. A great writer or a great marketer... or, the frustration of being caught between not doing enough writing and not doing enough marketing. A writer writes; a salesman sells.
Self-publishing does not equal self-marketing. Spending money wisely on promotion means income and time to write. (See the links below.)
Twitter, Goodreads, FB, LinkedIn and blogging? All have their advantages and disadvantages...

3. There are no independent, hugely successful ebook-only self-publishers.

Note: Two days after this post went up, I became aware of this great piece by Dana Beth Weinberg on Digital Book World. Thank you Jacqueline Church!
Amazon is huge, Apple is huge, Kobo and Smashwords are very big. Unless you are selling from your own website or the back of your car, you're not truly independent.
OK, a bit of an attention grabber there... but the author's need for a partnership with Amazon and ebook distributors is a dependence that cannot be overlooked. These "automatic partners" will always protect their interests first. They call the shots. Amazon is a business, not an author.
Amanda Hocking is a hugely successful author. At one point, the average daily sales figure of her self-published ebooks was 9,000. Again: average DAILY book sales: nine thousand! Her success was based on hard work, technological first-mover advantage and an indirect tie-in with Hollywood:
- the successful and pioneering integration of ebook readers into tablets and mobile, as well as the launch of the Kindle (2007) and the iPad (2010)
-the large demographic of young women who bought readers and tablets
- the fact that, having written many books, Hocking could quickly provide a new and large market with a variety of new titles
- writing books about the paranormal when Hollywood is pushing the same cannot hurt. Twilight, the hugely successful series of movies about teen vampires, began in 2008. Hocking's first book, My Blood Approves, began selling in 2010.
E.L. James' book phenomenon began in the fan fiction chat rooms for Twilight. The characters in Fifty Shades of Grey were originally the characters from Twilight. Could Master of the Universe, as her series was originally called, have achieved its success without an existing network of thousands of Twilight fans?
These two women made their mark upon society in two different ways. As shared, collective book-based experiences: WOW!
However, the writing is... "not terrible", or worse.
I remember SF/F authors complaining (back in 2011) that their readers hadn’t switched to e-books yet, casting jealous eyes at the outsized romance audience. But as readers did move across, we saw people like David Dalglish and BV Larson breaking out, and the rest of “genre” fiction soon followed.
There are "indie success stories" about authors who "rode into town" on the backs of traditional publishing. Funded by Big 6 money, these "indies" were advertised and publicized, sent on book tours and given things like business cards. Possibly, audiobooks were made. Hundreds, if not thousands, of their books were given away, many to reviewers.
As the 'first mover' possibilities of the ebook market became clear and realistic, these authors, knighted by the Big 6 and armed with credibility and connections, rode onto a battlefield with little opposition... Undoubtedly hard work was involved, but to label them as indies brings to mind the quip about George Bush: "...was born on third base and thinks he hit a triple."
There certainly are "ebook only" indies connecting with many readers and enjoying sales. I just don't know of many. (FOUND SOMEONE: LINDSAY BUROKER. Please tell me about others!) Somewhat related to this: are there any "ebook only" awards?
Here, authors talk about their sales experiences.
4. The ebook world evolves to reward the reader; the prepared author benefits from this.
Fan fiction. Goodreads. Ebook readers on mobile phones. The mashup between big data and metadata. Entrepreneurs with vision who see ways to connect authors and readers in new ways.
It is an exciting time.
Ebooks: Born to Click, Part 2 of 3. Visit www.blacksteps.tv for parts 2 and 3 of this post, as well as information on art, books and ebooks.
The two previous posts on this blog are the result of trying to create a definition of geopose. I still have not come up with a twenty-word-or-less definition. However, the following is very helpful towards achieving that goal.
What you are about to read is a reply sent to me by Jan-Erik Vinje, the Managing Director of the Open Augmented Reality Cloud. Jan-Erik and I will both be giving presentations at the 2nd Open Augmented Reality Cloud Symposium in Munich on October 16th.
Download the First State of the AR Cloud report as well. You can find it here.
Jan-Erik Vinje's thoughts on the differences between GPS and Geopose:
GPS is a specific technology, and it is mostly used for obtaining more or less accurate geospatial positions relative to a geospatial coordinate reference system. Currently an ellipsoid that approximates the earth is used as the reference for latitude, longitude and altitude. The ellipsoid is a bit crude, but it is normally off by less than 100 meters as a representation of where the mean ocean surface is or would have been.
GeoPose is intended to relate to the same type of geospatial reference, but adds geospatial orientation to the geospatial position. All real objects on our planet could in principle be said to have both a position and an orientation. GPS provides position. If you have a stream of GPS positions you could derive a direction vector that is almost an orientation, but not quite. Imagine walking down the street with a smartphone, collecting a GPS location (lat, lng, alt) every second. You can create a path through space telling you the direction you have moved the phone. What the GPS locations cannot give you is the orientation in which you have held your device. Did you point it towards the ground, the sky, or a wall across the street? And importantly, how was your device rotated around that direction?
For that you would need another system, or set of systems, to provide you with your orientation. AR Cloud visual positioning systems can provide both the position and the orientation of your device. This is what we call a pose. If that position and orientation is in a geospatial frame of reference equivalent to the one used for GPS, one can call the pose a geopose.
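To make the distinction concrete: from two consecutive GPS fixes you can compute a travel bearing, but nothing in that data tells you which way the device itself was pointed or how it was rotated. A minimal sketch using the standard initial-bearing formula (the function name is mine):

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial compass bearing from fix 1 to fix 2, in degrees
    clockwise from true north (great-circle initial-bearing formula)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.degrees(math.atan2(x, y)) % 360.0
```

Walking due east gives a bearing of 90 degrees, but the phone could be face-down, face-up or sideways during that walk: the bearing alone says nothing about the device's pitch, roll or yaw, which is exactly the gap GeoPose fills.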
(SB: Hmmm, that is great, but the last three lines will require a bit of serious thought... I am not 100% sure I understand. When I do, I hope to make educational drawings, or better, 3D models, in AR.)
Jan-Erik Vinje, the Managing Director of the Open Augmented Reality Cloud, responded to my previous post about a definition of geopose. His reply is reproduced with his permission. Thanks to Jan-Erik and all of the team involved in this project. The following exemplifies all of the diligent hard work going on to create standards for AR.
Thanks 🙂 If you'd like to update the geopose page: we have made some headway with our partner the Open Geospatial Consortium (a standards development organization in the geospatial sector).
Open AR Cloud put together a draft for a Standards Working Group for geopose at the Open Geospatial Consortium. It has been out on public hearing for a while and is currently being voted over by members.
If the vote succeeds, an official Standards Working Group will be created in the Open Geospatial Consortium (OGC) to develop the geopose standard.
If we are successful in creating such a standard, we will in effect have created the equivalent of the URL for real-world spatial computing, allowing the geopose of both real and virtual objects to be universally captured, stored, shared and understood.
I liken it to the URL because it can be seen as a link between the physical world and the digital world.
A URL links information to more information. A geopose links the world to digital information, and digital information to the world.
It can even be used to create a "persistent portal" between a physical space and a virtual space, where people from across the world who are experiencing a virtual space can go through such a portal to experience a real space (by streaming realtime reality-capture data to all those in the virtual reality space). At the same time, people in the real world can walk into the virtual reality space, or see virtual objects, scenes and avatars of virtual reality users projected into their physical space using AR.
Stephen Black: There is so much information here that I need to review it carefully. It will be a wonderful challenge to come up with a definition of 'geopose' that is 20 words or less!
What will the relationship between WebXR and geospatial data be?
It seems that WebXR cannot entirely ignore geospatial positioning, as geospatial content will be a major use case for mobile AR (at least eventually).
The web already has a geolocation API, but it is not sufficient for these purposes: it gives position but not orientation, is of very poor quality, and is not synchronized with the WebXR frame data. The deviceorientation API cannot be relied on for orientation: it is of very poor quality, was never standardized (and is potentially going to be removed from existing browsers), and is also not synchronized with the WebXR frame data.
ARKit offers the option to have its local coordinate system aligned with geospatial orientation (e.g., Y up, Z south, X east). This suggests a possible direction for how geospatial might be handled: have the WebXR API expose a property that says whether the coordinate frame can be aligned with geospatial EUS coordinates, and provide a way for the developer to request this. Crude/simple geospatial positional alignment (between the user and the local coordinates) is easier if you are guaranteed to have the local device coordinates aligned with EUS: each time a geospatial value is received, an estimate of the geospatial location of the origin of the local coordinate frame can be updated (based on error values, etc.). It won't be any better than the error of the geolocation API, but it can be stable (because the local coordinates are used for locating and rendering content, not the very-slowly-changing geolocation values).
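The origin-estimation idea in that last paragraph can be sketched roughly as follows. This is a simplification under stated assumptions: the local frame is EUS-aligned (+x east, +y up, +z south), a single spherical earth radius stands in for a proper ellipsoid, and the function name is mine. A real system would filter many fixes, weighted by their reported accuracy.

```python
import math

EARTH_RADIUS_M = 6_378_137.0  # WGS84 equatorial radius, used as a crude sphere

def estimate_origin(fix_lat, fix_lon, local_x_east, local_z_south):
    """Estimate the geodetic location of the local EUS origin from one
    geolocation fix plus the device's current local position (meters).
    Subtracting the device's offset from the fix recovers the origin."""
    north_m = -local_z_south  # EUS: +z points south, so north is -z
    east_m = local_x_east
    dlat = math.degrees(north_m / EARTH_RADIUS_M)
    dlon = math.degrees(east_m / (EARTH_RADIUS_M * math.cos(math.radians(fix_lat))))
    return fix_lat - dlat, fix_lon - dlon
```

Because content is then placed relative to the stable local origin, the rendered scene does not jitter with every noisy geolocation update, which is the stability argument made above.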
Hi, I am the founder of https://open-arcloud.org/ . The TL;DR AR Cloud description: a 1:1 digital map/twin of the physical world stored in the cloud that enables a shared programmable space attached directly to our physical surroundings, allowing multiuser AR and persistent, universally consistent placement of virtual assets in the real world.

One of the things we hope can help bring this to reality is a standard definition of geographical position and orientation that can be understood across platforms and applications: what we call "GeoPose". Each AR goggle, AR smartphone or piece of AR content could have a GeoPose at any given moment.

Obtaining the GeoPose of an XR device could be achieved by matching sensor data from the device with the 1:1 map in the cloud through something like a "GeoPose" cloud service. Once the device has its GeoPose it can display geospatial data and assets that are anchored to a GeoPose. A bit more about that here: https://open-arcloud.org/standards

Please join our Slack channel and chime in on how you think a GeoPose should be defined: https://join.slack.com/t/open-arcloud/shared_invite/enQtMzE4MTc0MTY2NjYwLWIyN2E4YmYxOTA4MWNkZmI5OGQ4Mjg2MGYzNTc4OTRkN2RjZGUxOTc4YjJhOTQ0Nzc3OWMxYTA3ZDMxNGEzMGE

My hope of course is that WebXR will support using device GeoPose and do the proper transforms for assets that are anchored to GeoPoses or assets that are described by geospatial coordinates.
TITLE: Geopose Standards Working Group Charter [OGC 19-028]
Author Name (s): Jan-Erik Vinje, Christine Perey, Scott Simmons
CATEGORY: SWG Charter Template
All physical world objects inherently have a geographically-anchored pose. Unfortunately, there is not a standard for universally expressing the pose in a manner which can be interpreted and used by modern computing platforms. The main purpose of this SWG will be to develop and propose a standard for geographically-anchored pose (geopose) with 6 degrees of freedom referenced to one or more standardized Coordinate Reference Systems (CRSs).
Definition of geopose
A real object in space can have three components of translation – up and down (z), left and right (x) and forward and backward (y) and three components of rotation – Pitch, Roll and Yaw. Hence the real object has six degrees of freedom.
The combination of position and orientation with 6 degrees of freedom of objects in computer graphics and robotics are usually referred to as the object’s “pose.” Pose can be expressed as being in relation to other objects and/or to the user. When a pose is defined relative to a geographical frame of reference or coordinate system, it will be called a geographically-anchored pose, or geopose for short.
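A geopose of this kind could be represented by as few as six numbers. The sketch below is illustrative only: the field names, units and angle conventions are my own choices, and the actual encoding is exactly what the SWG is chartered to standardize.

```python
from dataclasses import dataclass

@dataclass
class GeoPose:
    """A geographically-anchored pose: a position in a geodetic
    coordinate reference system plus an orientation, giving the
    six degrees of freedom described above."""
    latitude: float   # degrees, WGS84
    longitude: float  # degrees, WGS84
    altitude: float   # meters above the reference ellipsoid
    pitch: float      # degrees
    roll: float       # degrees
    yaw: float        # degrees, clockwise from true north
```

The same six fields could describe an AR headset, a vehicle, a park bench or a virtual asset, which is what makes a shared definition so useful.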
Uses for geopose
An object with a geopose may be any real physical object. This includes objects such as an AR display device (a proxy for a user's eyes), a vehicle, a robot, or a park bench. It may also be a digital object, like a BIM model, a computer game asset, the origin and orientation of the local coordinate system of an AR device, or a point-cloud dataset.
When the geoposes of both real and virtual objects include the current position and orientation of the objects in a way that is universally understood, the interactions between objects, and between an object and its location, can be put to many uses. It is also important to note that many objects move with respect to a common frame of reference (and one another); their positions and orientations can vary over time.
The ability to specify the geopose of any object enables us to represent any object in a universally agreed upon way for real world 3D spatial computing systems, such as those under development for autonomous vehicles or those used by augmented reality (AR) or 3D map solutions. In addition, the pose of any object can be encoded consistently in a digital representation of the physical world or any part therein (i.e., digital twin).
The proposed standard will provide an interoperable way to seamlessly express, record, and share the geopose of objects in an entirely consistent manner across different applications, users, devices, services, and platforms which adopt the standard or are able to translate/exchange the geopose into another CRS.
One example of the benefit of a universally consistent geopose is a traffic situation. The same real-time geopose of a vehicle could be shared and displayed in different systems, including:
- a traffic visualization on a screen in another car,
- shown directly at its physical location in the AR glasses of a pedestrian who is around the corner from the car, or
- in the real time world model used by a delivery robot to help it navigate the world autonomously.
This site contains documentation for web developers using Design Tools to add AR content to a web application.
Argon-aframe — Lesson 7: Geolocation
Key features of augmented reality include 1. the ability to associate data objects with places in the world, and 2. the ability to display those objects at those places.
The <ar-geopose> primitive is Argon-aframe’s way of locating objects in physical space (of the planet). The <ar-geopose> primitive creates an entity with a referenceframe component. This component defines the position and/or rotation of the entity by an LLA (longitude, latitude, altitude), so you can locate your object anywhere on the earth relatively accurately by using this tag.
You can find the longitude and latitude through Google Maps or through a variety of mobile applications. Some apps will also give you the altitude of a location.
Reference Implementations for Conversion Between Geopose and Cartesian Coordinate Systems
There will be a need throughout many parts of the technological ecosystem to convert object poses between a geospatial coordinate system and local ones (typically Cartesian x, y, z in meters, as used in most AR SDKs). This repository is the first step in creating reference libraries in different programming languages, available for everyone to use for free.
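One such conversion is from geodetic coordinates (latitude, longitude, altitude) to Earth-Centered, Earth-Fixed (ECEF) Cartesian coordinates. Below is a sketch of that standard WGS84 conversion; it is my own illustration of the kind of routine such a reference library would contain, not code from the repository.

```python
import math

# WGS84 ellipsoid constants
WGS84_A = 6_378_137.0                # semi-major axis (meters)
WGS84_F = 1 / 298.257223563          # flattening
WGS84_E2 = WGS84_F * (2 - WGS84_F)   # first eccentricity squared

def lla_to_ecef(lat_deg, lon_deg, alt_m):
    """Convert geodetic latitude/longitude/altitude to Earth-Centered,
    Earth-Fixed Cartesian (x, y, z) in meters."""
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg)
    # Prime vertical radius of curvature at this latitude
    n = WGS84_A / math.sqrt(1 - WGS84_E2 * math.sin(lat) ** 2)
    x = (n + alt_m) * math.cos(lat) * math.cos(lon)
    y = (n + alt_m) * math.cos(lat) * math.sin(lon)
    z = (n * (1 - WGS84_E2) + alt_m) * math.sin(lat)
    return x, y, z
```

A point on the equator at zero longitude and zero altitude maps to (6378137, 0, 0), i.e. one semi-major axis out along the x axis.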
The following video is a rehearsal. The starting point is a presentation I did in Detroit in July 2019. That presentation was about Autonomous Vehicles and Augmented Reality. Eventually I will have about eight videos based on that Detroit presentation. You can find the PowerPoint for that here.
It will be obvious that I am not the smoothest person ever to be on a stage. I am a photographer, a cinematographer, an artist and a writer; I am used to having time to think about what I am expressing. As I will soon be making presentations in Europe, I am trying to get through the learning curve of being onstage, asap.
In this presentation, I often refer to the Open Augmented Reality Cloud. If you are serious about AR, you must learn about their work, and download the free State of the Augmented Reality Cloud Report which is available on their website. (I am honored and humbled to say that I will be speaking about Bubiko at the OARC Symposium in Munich on October 16!)
The AR Show produced this informative podcast with Jan-Erik Vinje, Christine Perey, Jason Fox and Colin Steinman, the key contributors to the OARC.
From the start, Bubiko Foodtour was meant to be a cross between Hello Kitty and Anthony Bourdain, with a dash of Charlie Chaplin.
Bubiko is a little chef from northeastern Thailand, just outside of Udon Thani. There, growing up, she learned how to make Issan-style food. However, she is also familiar with Lanna cuisine and other regional cooking styles of Thailand. Her favorite dessert is mango sticky rice. (Visit Bubiko's Instagram account to see the many locations where she sampled mango sticky rice.)
Bubiko has researched the food cultures of Thailand, Malaysia, Singapore, Bali and Shenzhen, China. She has also spent time researching in Vientiane, Yangon, Jakarta, Bandung and Hong Kong, as well as Japan and the US. Again, you are invited to look at her Instagram, or search "Bubiko" on this blog.
As AR technology continues to improve, Bubiko will make educational AR projects for people of all ages about spices, vitamins, geography, cooking techniques and other concepts related to food.
Bubiko also really likes butterfly pea flowers, and one day will have an AR lesson about them.
Butterfly pea flower (Clitoria ternatea) is found all over Southeast Asia. It is used to make many types of foods and is also great in drinks, by itself or mixed with other ingredients like matcha. It is also used in soap, shampoo and other health care products.
Here is a great recipe for "Blue Surf Cake" on the Unconventional Baker blog. Additionally, the blog has a number of great resources on where to buy butterfly pea flowers, as well as other natural colorings. Recommended!
A recipe for Butterfly Pea Flower Crepe cake can be found here, on the Mario'z Eats blog. Also informative, and the instructions are clear and well illustrated.
Bubiko is a little chef who has been having some exciting adventures in AR, Augmented Reality. Bubiko was born at Novaby and made her public debut at Tech in the Tenderloin.
Stephen Black is a writer and a visual artist, and is marching along the path of Spoken Word.
This post is exploratory.
Stephen Black and Bubiko Foodtour are now looking for opportunities to make presentations, or do collaborations with musicians, artists, writers or AR practitioners in these cities.
(BLAM is also a reference to SLAM: Simultaneous Localization And Mapping, a process associated with AR.)
Stephen Black has given presentations at Sasin School of Business, MIT Media Lab, Hong Kong PolyU, TechCrunch Shenzhen and the Collider at Altimetrik (Detroit). This post is about his projects and this post is about Bubiko's.
What we hope to do, and are open to:
1. Find AR collaborators. Do you have AR software that could use a cute little chef? (Bubiko has been in Facebook Spark and Facebook Camera projects, as well as an AR game demo made by Dominique Wu from Hummingbirdsday Studios. Two projects using Artive are here and here.)
2. Consultations regarding AR, or projects needing a producer.
I am now working on a book about AR, roads and transportation. The following is from a section about AR and the Tour de France.
In the spring of 2019, I was excited to discover that the world of cycling had a growing number of AR apps and products. Excited because, prior to these discoveries, I could only find three successful examples of AR.
The first was Pokémon Go but, as huge as it was, it created the misleading impression that AR was only for games. Next: 19 Crimes, an Australian wine company that achieved massive success largely due to its series of AR "living wine labels". The third example was Ikea, whose AR app allowed buyers to "insert" digital models of furniture into their real homes, so as to visualise the ideal purchase.
Also, there were many exciting AR projects in medicine and industry, but these uses required expensive viewing devices, like the HoloLens.
So, when I discovered cycling goggles with AR functionality? Great! An AR app that allowed users to customize a bike and then order it? Yes! An AR bike repair manual? Yes, again! These simple uses were practical, but with hints of openness, adventure and excitement. Vitality!
And then I thought I had a good idea. To break up all of the technical ideas, and to inject some excitement into the book, I decided to write about futuristic uses of AR in the Tour de France. I came up with ideas like “fatigue hawks”, “blood hawks” and “wind hawks", terms that would be applied to specialized data scientists who coached cycling teams.
Although the NTT website doesn’t use exciting names like “blood hawks”, they do use data in exciting ways, especially in the areas of data collection/processing, AR, 3D mapping and AI. NTT is making media history. Pioneers, they are leaving the safe continent of broadcast television to venture into the uncharted islands of billions of mobile devices.
And no, NTT is not sponsoring this (nor is anyone else). Having said that, I would be happy to write about NTT's London headquarters, or their facility dedicated to the Tour de France in Mulhouse, in eastern France.
The recent release of citywide 3D mapping technologies allows AR objects to be placed permanently and accurately. Two companies to watch: Scape and 6D.Ai.
With an established background in visual arts, music, the performing arts and AR, I have been waiting for the opportunity to create very large scale AR artworks. Years ago I developed plans for an AR sound artwork for Singapore's Tiong Bahru, an estate composed of historic Art Deco buildings. In 2018, in Shenzhen, a citywide light show rekindled my interest in large scale AR projects.
This year, while preparing for a presentation in Detroit, another idea arrived: an Augmented Reality project to be viewed, and experienced, from inside Autonomous Vehicles: bROADWAY.
Although the ideas of bROADWAY could work in any city, Detroit is associated with the automobile industry, including AVs. Detroit is the home of legends like Motown, Aretha Franklin, Stevie Wonder, the Jacksons, Detroit techno (Juan Atkins, Kevin Saunderson, Derrick May and more), Bob Seger, Jack White, Alice Cooper and Iggy Pop. These points, plus distinguished architecture and public spaces, mean there could be no better city for the world's first AR+AV citywide experience!
AVs as performers!
If anyone in Detroit knows how to make this project happen, please get in touch.
(And, as AVs are becoming more common worldwide, the idea could happen anywhere.)
This blog post is a sample of the guide, which is available on Amazon.
This is a response to an urgent need to share.
Influences:
a. "make it fast, not
c. ash from chaos
d. Chungking Express
e. The Zone System by Ansel Adams and Minor White
No person or company has paid me to have their name or product included in this document. I am now notifying the companies and individuals mentioned. If your work is here, and I have not yet contacted you, I apologize. If you prefer that I do not share your work, let me know and I will remove it immediately.
No one has received a promotional copy. If you have bought this, and we meet, I will buy you a beverage. Or two. If you bought this and it seems unlikely that we will meet, I will send you my other ebooks or find a way to make sure this purchase is something you are very happy with. Hint, hint: Bubiko and I are working on an app. (Many times in the past, I have given out free ebooks, only to be surprised that people did not even open them; even my bestseller.)
If you use photos or text from this document, please credit accordingly:
Stephen Black from Bubiko's Unusual Guide to AR
Have a nice day.
9/3/2019 4th edition
Anchor The three points used to describe the location, in the real world, where an AR object has been placed. The more one understands about 3D geometry, the better prepared one is for AR. See also billboarding, geopost.
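To make the geometry concrete, here is a minimal Python sketch (names and structure are illustrative, not from any AR SDK) of an anchor defined by three real-world points: the three points fix a plane, and the cross product of two edge vectors gives the plane's normal, i.e. the direction the anchored content faces.

```python
# A sketch of an AR anchor defined by three real-world points.
# The points fix a plane; the cross product of two edge vectors
# yields the plane's normal (the anchor's facing direction).
from dataclasses import dataclass

Point = tuple  # (x, y, z)

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

@dataclass
class Anchor:
    p1: Point
    p2: Point
    p3: Point

    def centroid(self):
        """Centre of the three points: a natural spot to place the AR object."""
        return tuple((self.p1[i] + self.p2[i] + self.p3[i]) / 3 for i in range(3))

    def normal(self):
        """Vector perpendicular to the plane through the three points."""
        return cross(sub(self.p2, self.p1), sub(self.p3, self.p1))

# Three points on flat ground (y = 0): the normal is vertical.
a = Anchor((0, 0, 0), (1, 0, 0), (0, 0, 1))
print(a.centroid())
print(a.normal())
```

Real AR frameworks track anchors against the device's understanding of the world, but the underlying idea is the same: a small amount of 3D geometry pins digital content to a physical place.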
Augmented Driving Will Feel Like, a SXSW presentation by Theo Calvin.
Avatar A digital creation used to represent a person, possibly resembling a human, possibly not.
Augmented Reality 'Augment' means to make something greater; to give it more power. The platform called 'augmented reality' is a network that adds digital information to real objects. Machines are needed to do this, and to see the results. Thus, augmented reality is the real world and the digital information added to it, as well as the machines enabling us to experience both at the same time. See R3.
The digital information could be in the form of a 3D model of a real object made by a computer, like the furniture in AR apps made by Ikea, Wayfair and other companies. Pokemon Go and Snapchat are other examples of AR. However, the digital information could also be live or pre-recorded video, music, podcasts, medical imagery, industrial blueprints, text information or many other types of content.
A phone, tablet, HMD (head-mounted device) or eyewear is needed to see/hear/feel the digital content.
AR will change everything, even more than radio, television, computers and mobile phones.
Autonomous Vehicles Vehicles capable of sensing their environment, making decisions and navigating without human input. AVs require the safest, most efficient AR data networks possible.
Billboarding The term used to describe a common procedure when positioning models in AR. Billboarding means that the front view of the 3D model faces the viewer. Most AR apps allow the user to rotate the model so that another side of the model is presented to the viewer.
Bubiko in the starting position, i.e.
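The billboarding idea above can be sketched in a few lines of Python (the function name and coordinate conventions are illustrative, not from any particular AR SDK): given the model's position and the camera's position, compute the yaw rotation that turns the model's front toward the camera.

```python
import math

def billboard_yaw(model_pos, camera_pos):
    """Yaw (radians, about the vertical y axis) that turns a model's
    front toward the camera; 0 when the camera is straight ahead (+z)."""
    dx = camera_pos[0] - model_pos[0]
    dz = camera_pos[2] - model_pos[2]
    return math.atan2(dx, dz)

# Camera directly in front of the model: no rotation needed.
print(billboard_yaw((0, 0, 0), (0, 0, 5)))
# Camera off to the model's side along +x: a quarter turn.
print(billboard_yaw((0, 0, 0), (5, 0, 0)))
```

Applying this yaw every frame keeps the model's face pointed at the viewer as they walk around it, which is exactly what a billboarded sprite or character does.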
Rabbit The AR masterpiece of 2019. Occlusion, light estimation, voice commands and more: Patched Reality and 6D.Ai allow you to learn three years' worth of ARness, and see the future.
Bubiko Foodtour AR's first superstar, a character created by Stephen Black and Sayuri Okayama. Bubiko is one of the results of a two-year food/AR research trip in Southeast Asia. Bubiko is a trailblazer who shares her AR experiences with the general public as well as with AR practitioners. Bubiko often forms partnerships, such as one with Green Bean Boy, a character made by Dominique Wu at Hummingbirdsday Studios http://www.hummingbirdsday.com/. Other collaborations are planned with the Dundercats, by Six Cat Studios https://www.sixcatstudios.com/journal/2018/4/24/the-dundercats, and creations by David Severn http://david-severn.com/. The 3D version of Bubiko was created by Novaby https://www.novaby.com/.
Charlie Fink's Metaverse - An AR Enabled Guide to AR & VR
Cloud see R3
Computer vision Computers use lenses, radar and many kinds of sensors to learn about the world. These different ways of "seeing" are often combined with Artificial Intelligence (AI). The end result is that computers recognize objects as well as the many kinds of information connected to them.
Convergence Author and Forbes columnist Charlie Fink tells the story of Augmented Reality (AR), a new technology that's already seeping into every smartphone and every workplace. AR's merger with new 5G and AI technologies will unleash a wave of innovation that will enable wearable, invisible, latency-free and ubiquitous computing. The book uses a kind of mobile AR called "marker AR" to allow readers to use their smartphones to bring pages to life, demonstrating with art and entertainment how the world, and every person, place, and thing, will be painted with data. https://www.amazon.com/Convergence-World-Will-Painted-Data/dp/0578460556/
Depth sensing Recording scenes in three dimensions. See volumetric video.
Events on Hi-Techs4Humans: Workshops, Seminars, Lectures, etc. (Facebook)
Edge computing There are advantages to processing data as close to the user as possible, especially in regards to the Internet of Things. This means, to a large extent, a decentralized system. This localized/decentralized approach is called edge computing.
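As a rough illustration of the edge-computing idea, here is a minimal Python sketch (all task names and capabilities are hypothetical) of routing work to a nearby edge node when it can handle the job, and to the distant cloud otherwise.

```python
def route_task(task, edge_capabilities):
    """Run a task on the nearby edge node if it can handle it,
    otherwise fall back to the distant cloud."""
    if task["needs"] <= edge_capabilities:  # subset test: edge can do it all
        return "edge"   # processed close to the user: low latency
    return "cloud"      # heavyweight work goes to the data centre

# Hypothetical capabilities of a local edge node:
caps = {"object-detection", "sensor-fusion"}
print(route_task({"name": "detect-bike", "needs": {"object-detection"}}, caps))
print(route_task({"name": "citywide-mapping", "needs": {"3d-reconstruction"}}, caps))
```

For AR and IoT, this kind of routing is what keeps latency-sensitive work (tracking, occlusion) near the device while city-scale jobs run centrally.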