
“mARp” – (map with Augmented Reality – general concept)
By Tamiko Thiel

Proposal for a large public view mARp

“mARp” is a word I invented from “map with AR” – augmented reality cartography. It is a general interface for browsing and viewing city-wide collections of augments on an interactive map. It codifies existing Manifest.AR practice of depicting collections of augments spread throughout a city by marking their locations with icons on a Google Map.

  • This proposal would extend the concept into a comprehensive interface for viewing augments and their documentation, both at FACT and at sites around the city, and for visitors to add text commentary to the documentation while using the mARp at FACT.
  • The mARp concept provides a conceptual framework for FACT staff to work with community groups to collectively augment the city, thus “crowd-sourcing” stories related to particular topics of interest. These could be, for instance, walking tours relating to different historical communities in Liverpool (the first Irish Potato Famine immigrants, the descendants of the slave trade, the Chinese community, personal histories of individuals, etc.).
  • There would be mARps for various levels of content. The Liverpool Meta-mARp would contain all augments throughout the city; visitors could go, for instance, to a specific square in the city and view all the augments located at that one site. A mARp of a specific project would show a modified map with only the augments from that project. For projects with a narrative structure, or for instance a city tour, a route could indicate the order in which augments should be viewed. (A data-model sketch of these levels follows this list.)
  • A desired technical development would give participants the ability to add their own augmented reality content and documentation on site using smartphones in locations around Liverpool.
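
As a concrete illustration of these content levels, a minimal data-model sketch; all type and field names are my own illustrative assumptions, not part of any existing implementation:

```typescript
// Hypothetical data model for the mARp content levels described above.
// All type and field names are illustrative assumptions.

interface Augment {
  id: string;
  title: string;
  lat: number;            // WGS84 latitude of the augment's anchor point
  lng: number;            // WGS84 longitude
  description: string;    // story text shown in the info box
  screenshots: string[];  // URLs of documentation images taken on site
  launchUrl: string;      // link that launches the augment in an AR browser
}

interface Route {
  name: string;                 // e.g. a guided walking tour
  orderedAugmentIds: string[];  // viewing order for narrative projects
}

interface Marp {
  title: string;      // "Liverpool Meta-mARp", a project mARp, etc.
  augments: Augment[];
  routes: Route[];    // optional tour routes through the augments
}

// The city-wide Meta-mARp is then simply the union of all project mARps:
function metaMarp(projects: Marp[]): Marp {
  return {
    title: "Liverpool Meta-mARp",
    augments: projects.flatMap((p) => p.augments),
    routes: projects.flatMap((p) => p.routes),
  };
}
```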


ARtSense testbed:

A mARp on a large screen, with a sufficient depth of content, can be a testbed for an incremental implementation of the ARtSense system.

  • The mARp can be used now as is for group viewing on a large screen, with a mouse providing pan, zoom and selection functionality. Everybody will see the same content on the screen.
  • If it is not too divergent from the ARtSense usage, hand tracking and gesture recognition could be implemented soon to take over the mouse's pan, zoom and selection functions in group viewing. (To be discussed with Jan, as his system uses eye tracking to determine focus of interest.) A sketch of this gesture mapping follows this list.
  • The various ARtSense component subsystems (hand tracking, eye tracking, bio-sensing to determine interest levels, 3D sound, etc.) can be integrated as each of them becomes ready for use with the public. For a personalized ARtSense experience with more than one person using the mARp at the same time, the mARp would have to be zoomed out so all augments are visible, and zoom and pan disabled. Each ARtSense user would see their individual selection of story texts, documentary images etc. as augments superimposed over the screen in the AR glasses.
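
The gesture-mapping sketch referenced above, assuming a hypothetical gesture recognizer; the map calls themselves are from the Google Maps JavaScript API:

```typescript
// Sketch: routing hypothetical hand-tracking gestures to the same pan /
// zoom / select functions the mouse provides. The GestureEvent type and
// selectIconAt() are assumptions; panBy(), getZoom() and setZoom() are
// real methods of the Google Maps JavaScript API.

type GestureEvent =
  | { kind: "swipe"; dx: number; dy: number }  // open-hand drag
  | { kind: "pinch"; scale: number }           // two-hand zoom
  | { kind: "point"; x: number; y: number };   // fingertip "click"

declare function selectIconAt(x: number, y: number): void; // hypothetical

function handleGesture(map: google.maps.Map, g: GestureEvent): void {
  switch (g.kind) {
    case "swipe":
      map.panBy(-g.dx, -g.dy); // pan the map opposite the hand motion
      break;
    case "pinch":
      map.setZoom((map.getZoom() ?? 0) + (g.scale > 1 ? 1 : -1));
      break;
    case "point":
      // Hit-test the screen position against the augment icons; how this
      // is done depends on the tracking system and map implementation.
      selectIconAt(g.x, g.y);
      break;
  }
}
```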


mARp now: Viewing and launching augments with existing technology

A mARp is, in its simplest current incarnation, a Google My Map on which the positions of AR works are marked with interactive icons.

[Image: current Google My Map interface – Google My Map with a list of all augments on the left; the infobox contains text, a documentation image and a launch link for one augment.]
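
For reference, a minimal sketch of this behavior as it could be rebuilt with the Google Maps JavaScript API (a Google My Map provides it with no coding at all); all coordinates, texts and links are placeholders:

```typescript
// Minimal sketch of the basic mARp behavior using the Google Maps
// JavaScript API. A Google My Map provides all of this without coding;
// the sketch only shows how a custom equivalent would work. Coordinates,
// texts and the launch link are placeholders.

function initMarp(): void {
  const map = new google.maps.Map(document.getElementById("map")!, {
    center: { lat: 53.4084, lng: -2.9916 }, // Liverpool city centre
    zoom: 14,
  });

  // One augment entry; a real mARp would loop over a list of these.
  const position = { lat: 53.4009, lng: -2.9858 }; // placeholder location
  const marker = new google.maps.Marker({
    map,
    position,
    title: "Example augment",
  });

  const infoWindow = new google.maps.InfoWindow({
    content: `<h3>Example augment</h3>
              <p>Story text for this location.</p>
              <img src="screenshot.jpg" width="200">
              <p><a href="http://example.com/launch">Launch in AR browser</a></p>`,
  });
  marker.addListener("click", () => infoWindow.open(map, marker));
}
```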


Using the basic mARp:

  • At FACT this mARp could be displayed on a large screen and used with a mouse, or potentially with a hand/gesture tracking system.
  • Viewers can click on an AR icon to get an information box with:
    • a description of this work or stories associated with it at this location,
    • screenshots of it taken at this location,
    • links to further content.
  • If viewers are using a mobile device they can also use the Google Map to walk directly to the location of the ARtwork. Once at the site they can launch and view the actual AR artwork in live camera view, superimposed over their physical environment.

Creating content for the basic mARp:

  • If FACT is working with a community group to create a special-topic mARp, a registered user can add icons to the map by hand for all the new augments, and then add texts, images and links to their respective info boxes.


Simplified mARp visual interface

I hope to be able to simplify a Google My Map to present a cleaner, more modern interface that focuses on the functionality that we actually need for the mARp.

[Image: visualization of the simplified visual interface for the mARp.]


On-site AR content creation directly to the mARp

My vision for the mARp goes beyond a simple Google My Map to create an interface for participants to create and place augments at their location and orientation. This ability to create augments could be open to everybody, or restricted to participants who have registered with FACT and sign in with their user name when they want to add content.
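
The core inputs such an interface needs are the participant's position and facing direction. As a sketch, assuming a smartphone web client, both can be captured with standard browser APIs:

```typescript
// Sketch of the two inputs an on-site creation client needs: the
// participant's position (standard browser Geolocation API) and facing
// direction (deviceorientation event). The AugmentPlacement shape and the
// alpha-to-compass conversion are simplifying assumptions.

interface AugmentPlacement {
  lat: number;
  lng: number;
  headingDeg: number; // compass direction the participant is facing
}

function capturePlacement(): Promise<AugmentPlacement> {
  return new Promise((resolve, reject) => {
    let headingDeg = 0;
    window.addEventListener("deviceorientation", (e) => {
      // Rough conversion; exact behavior is platform-dependent.
      if (e.alpha !== null) headingDeg = 360 - e.alpha;
    });
    navigator.geolocation.getCurrentPosition(
      (pos) =>
        resolve({
          lat: pos.coords.latitude,
          lng: pos.coords.longitude,
          headingDeg,
        }),
      reject,
      { enableHighAccuracy: true },
    );
  });
}
```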

Upload content on site as an augment in the mARp:

At least some of this functionality is needed for other Manifest.AR ARtSense projects, but all of it is subject to time, money and implementability constraints on Pace U's developer team.

That said, it would be good to have the same functionality for location-based AR that Layar's stiktu (http://www.stiktu.com/) provides for image-marker-based AR:

  • Participants go to a location that they want to augment.
  • They can then do one of the following on their smartphone:
    • choose an image on their smartphone,
    • make a freehand drawing,
    • type in a text,
    • make a short audio recording.
  • The audio recording can be uploaded directly to the mARp at that location.
  • The three visual content types (image, drawing or text) appear as an overlay on the live camera view. The participant adjusts their position and orientation to get the composition they want, and then presses “upload” to add the content as an augment to the mARp. The server then places that augment at the appropriate distance and orientation in front of them (see the placement sketch after this list).
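
The placement sketch referenced above: one way the server could compute “the appropriate distance and orientation in front of them” is a simple flat-earth offset along the participant's compass heading. The 5 m default and the function name are assumptions:

```typescript
// Sketch of the server-side placement step: offset the new augment a fixed
// distance in front of the participant, along their compass heading. For a
// few metres a flat-earth approximation is sufficient.

const EARTH_RADIUS_M = 6371000;

function placeInFront(
  lat: number,        // participant latitude (degrees)
  lng: number,        // participant longitude (degrees)
  headingDeg: number, // compass heading the participant is facing
  distanceM = 5,      // how far in front of the camera to anchor the augment
): { lat: number; lng: number } {
  const toRad = Math.PI / 180;
  const toDeg = 180 / Math.PI;
  const bearing = headingDeg * toRad;

  // Metres moved north and east along the heading.
  const dNorth = distanceM * Math.cos(bearing);
  const dEast = distanceM * Math.sin(bearing);

  // Convert the metre offsets to degree offsets.
  const dLat = (dNorth / EARTH_RADIUS_M) * toDeg;
  const dLng = (dEast / (EARTH_RADIUS_M * Math.cos(lat * toRad))) * toDeg;

  return { lat: lat + dLat, lng: lng + dLng };
}

// Example: 5 m due east of a user standing at 53.4009, -2.9858:
// placeInFront(53.4009, -2.9858, 90) ≈ { lat: 53.4009, lng: -2.98572 }
```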

Upload content on site as documentation to the mARp:

Once an augment is created, anyone at that site can call it up and take a screenshot of it. They can then upload the screenshot as documentation attached to this site on the mARp.

The mARp could automatically send a message to other participants notifying them about the new augment.

They could then either go to the site to view it “live,” or, if they are not on site, view the new documentation as part of the mARp on their mobile device, on a PC on the Internet, or at FACT on a large screen.
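
A minimal sketch of this documentation upload, assuming a hypothetical REST endpoint on the mARp server; fetch() and FormData are standard browser APIs:

```typescript
// Sketch of the documentation upload, assuming a hypothetical REST
// endpoint on the mARp server. fetch() and FormData are standard
// browser APIs.

async function uploadDocumentation(
  augmentId: string,
  screenshot: Blob,
  comment: string,
): Promise<void> {
  const form = new FormData();
  form.append("screenshot", screenshot, "screenshot.jpg");
  form.append("comment", comment);

  // Hypothetical endpoint; a server hook behind it could send out the
  // new-augment notifications described above.
  await fetch(`/augments/${augmentId}/documentation`, {
    method: "POST",
    body: form,
  });
}
```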

——————————————————————————————-

PHYSIOLOGICAL SENSOR to PHP AUGMENTATION INTERFACE:
SENSOR METHOD?: Jan Hammer at Fraunhofer IOSB has said that another group is working with Google to develop a hand-tracking interface for Google Maps; Tamiko is attempting to set up contact with this group. The interface would use hand tracking for the zoom, pan and select/click movements needed to navigate the Google Map. Once there is content in place, and the ARtSense software for detecting interest is ready for public use, GSR sensors could be used to detect viewer interest and navigate through the content (as in the Valencia Kitchen case study).
RECORD WHAT?: Initially just hand tracking; later, interest levels via GSR sensor.
RESULT OF RECORDING TO AUGMENTS THROUGH PHP?:
The augments are not actually affected by the sensors – this is a navigational interface design akin to the Valencia Kitchen case study. It can be used for navigating any mARp presented on a large screen at FACT.
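
A sketch of how the GSR stage might eventually drive this navigation, assuming ARtSense delivers a normalized interest signal and a focus estimate; every name and tuning value here is an assumption:

```typescript
// Sketch of the later, sensor-driven stage: when the ARtSense interest
// detector reports sustained interest while the viewer's focus rests on an
// augment icon, open that icon's info box – the same action as a mouse
// click. The interest stream, threshold, dwell time and helper functions
// are all assumptions about the ARtSense integration.

interface InterestSample {
  timestampMs: number;
  interest: number; // normalized 0..1, derived from the GSR sensor
}

const INTEREST_THRESHOLD = 0.7; // assumed tuning value
const DWELL_MS = 2000;          // sustained interest required

declare function focusedAugmentId(): string | null;    // e.g. eye tracking
declare function openInfoBox(augmentId: string): void; // same as a click

let interestSince: number | null = null;

function onInterestSample(s: InterestSample): void {
  const focused = focusedAugmentId();
  if (focused === null || s.interest < INTEREST_THRESHOLD) {
    interestSince = null; // interest lapsed; restart the dwell timer
    return;
  }
  if (interestSince === null) interestSince = s.timestampMs;
  if (s.timestampMs - interestSince >= DWELL_MS) {
    openInfoBox(focused);
    interestSince = null;
  }
}
```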
