FACT Exhibition Proposal v1 Temporary

1) Collective Exhibition Proposal:

• threads and themes

—–to be developed—-

-original title: Invisible ARtaffects
-the real is affected by the virtual and the affects of the audience affect the world

• exhibition prospectus:

The exhibition would consist of 7 to 15 projects created by the core members of ManifestAR and a smaller circle of their collaborators.

Several entry points to each of these projects would engage the audience locally at FACT and, beyond that, the audience who comes and goes from the building or who, through outreach, hears about how to use or “see” the projects in greater Liverpool.

-Lab/Personal Interface/Calibration Area (one of the galleries)
In this location FACT visitors would enter a room with different stations where they directly use biometric equipment to engage with different projects. In some projects these stations will biometrically generate works for the exhibition (see —-). In other cases, interaction at a station with the equipment will inform, calibrate and launch participation in a given project (see —-). This environment should be understood as part performance art, part workshop, part bio-technology lab, part participatory social space, and part aesthetic installation.

– Large Screen Display Google Map Interface “mARp Liverpool” (from Tamiko Thiel) – This would be a general Augmented Reality cARtography map interface for all city-wide AR projects created by Manifest.AR, or created later by FACT. It systematizes working methods common among many Manifest.AR artists, and the mARper would essentially be the AR database and graphical interface that we create at FACT.

Whether looking at a large screen display in FACT, or at their own smartphone displays whilst standing on a street corner in Liverpool, what viewers see when they look at the mARper is a Google Map of Liverpool on which AR icons – miniature versions of the augments – mark the locations of augments in the city.

  • Viewers can click on an AR icon to get a description of the work, and see a screenshot of it taken at its location in the city.
  • If they are using a mobile device they can also follow the Google Map to walk directly to the location of the ARtwork.
  • Once at the site they can launch and view the actual AR artwork in live camera view, superimposed over their physical environment.

The mARper would have options to look at various levels of content (a minimal data sketch follows this list):

  • Liverpool Meta-mARp:
    The most general level could show all augments throughout the city. Visitors could go for instance to a specific square in the city and view all the augments located at this one site.
  • mARp of specific project:
    Visitors could select a specific project or artist, and then see a modified map only showing augments from this project or this artist.
  • Narrative mARps:
    If the augments should be viewed in a specific order to create a narrative structure, a path would be drawn on the map to indicate the route and order to follow. This could be used for instance to create a tour through the city and its history seen through the viewpoint of a specific artist or group.
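
As a rough illustration only, the following sketch shows one way the mARper’s database entries and the three viewing levels above could be organized. The names (Augment, MARp, the field list) are hypothetical and for discussion, not a specification of the system we would build at FACT.

    # Minimal sketch of a possible mARp entry store and its three viewing levels.
    # All names here are illustrative placeholders, not an existing codebase.
    from dataclasses import dataclass, field

    @dataclass
    class Augment:
        id: str
        title: str
        artist: str
        project: str
        lat: float           # latitude of the placement in Liverpool
        lon: float           # longitude
        icon_url: str        # miniature version of the augment shown on the map
        screenshot_url: str  # screenshot taken at the physical location
        description: str = ""

    @dataclass
    class MARp:
        augments: list = field(default_factory=list)

        def meta_marp(self):
            """Liverpool Meta-mARp: every augment in the city."""
            return self.augments

        def project_marp(self, project=None, artist=None):
            """mARp of a specific project or artist."""
            return [a for a in self.augments
                    if (project is None or a.project == project)
                    and (artist is None or a.artist == artist)]

        def narrative_marp(self, ordered_ids):
            """Narrative mARp: augments returned in tour order, drawn as a path."""
            by_id = {a.id: a for a in self.augments}
            return [by_id[i] for i in ordered_ids if i in by_id]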

ManifestAR “Pod” Interface
Artist/architect and Manifest.AR member John Cleater is working on an alternative “interface” to the information and visualization of the projects, which would involve building some kind of video object/shape/enclosure that interacts with its audience to provide an entrance to, or status of, the various works in the exhibition. It would also provide documentation of the works outside FACT. This idea is in its early stages. It might require one of the galleries as a whole.

ManifestAR Video Documents and Prints-
In one of the galleries or in a public area of FACT Manifest.AR would display its best video documentation, both past and created during the exhibit, as well as visualization imagery in framed inkjet prints. The goal would be to provide multiple ways for the audience to appreciate the work even if they might not actually engage with it on cell phones or through direct participation.

-Exhibition Events, Performances and Tours
Each of the artists’ projects would plan a performance/speaking event to introduce the audience to their work(s) and invite them to engage with it. This event might involve discussions with members of the community, organizations related to issues in the works, walking tours of augments both existing and to be created, workshops, one-on-one consultations, etc.

• technologies:

Galvanic skin response (GSR), two-finger electrodes
heart rate monitor (belt)
EEG headset, NeuroSky
Motion tracking
Eye tracking

cell phones (iPhone and Android-based OS)
video projectors with audio
headphones for audio
CRT eyeglasses
LED color light system

• database, software and hardware development:

Specialized Mobile Device Applications- Parts of the FACT-Manifest.AR exhibition call for the development of specialized mobile device applications that incorporate the Layar API into a format that supports the particular functionality of different artist projects.

—–more to be developed—-

• relevance to ARtSense projects:

In this exhibition, the basic directive aiding the progress of the ARtSENSE initiative is to address and experiment with the personalization of augmentation. This thread is reflected in the original proposal by the title, Invisible ARtaffects, which refers to the notion that the real is affected by the virtual and the affects of the audience affect the world. With this focus in mind, each of the proposed projects uses biometric or personalized information to generate or modify augmentation. At the same time, and of paramount importance, each artwork addresses issues and aesthetics related to the content, context and intentions of the various artists’ works.

The second directive, discussed at and just after the January ARtSENSE conference in Paris, was the incorporation of the free Layar application to allow personalized augments to be accessed and viewed easily on a variety of mobile devices that can be used both inside and outside a given institution such as FACT. The degree to which personally modified augments can be created, modified and re-modified is a matter of the experimental dimension of Manifest.AR’s proposed artworks. Each of the works proposed can be understood as a test of different aspects or components of what might be modified to become an ARtSENSE end-user system. Significantly, this exhibition will test both the one-on-one as well as the pervasive use of fundamental personalized augmentation systems in a fully functioning museum exhibition.

Affected Augments– Most of the projects proposed will utilize at least one of three bio-sensing technologies (see below) in conjunction with audience participation to initiate, determine and modify augments. Various proposed artists’ works call for much of the full range of sensors explored in the ARtSENSE Consortium. However, the feasibility, reliability, developmental level and distributional capability of these various technologies may determine the simplest and most inexpensive routes. Financing and ARtSENSE Consortium member support capability are likely to influence what is realistically possible. In the context of this exhibition, personalized choice or affect can be understood as the more qualitative interpretation of quantitative data feeds originating from biosensors. In other words, at an early or intermediate stage of the ARtSENSE initiative, Manifest.AR and FACT would hope to test, in a live exhibition context, the indicators of mood swings, basic recognition and areas of specific interest, first individually and then perhaps in some cases collectively. The exhibit will also experiment with fundamental audience feedback loops to see how audiences are affected or learn how to control their bio influences. This two-way interaction will be vital to any ARtSENSE end-user system that would need not just one interaction, but multiple interactions or feedback loops, for an audience to feel that it was meaningfully interacting with an artwork.
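
By way of illustration only, the sketch below shows one simple way quantitative feeds might be smoothed and read as qualitative mood indicators inside a feedback loop. The channel names, window size, thresholds and the augment.update() call are placeholders that would have to be tuned per sensor and per visitor.

    # Hedged sketch: turning raw biosensor samples (GSR, heart rate, EEG attention)
    # into a rough qualitative "mood" reading. All thresholds are placeholders.
    from collections import deque
    from statistics import mean

    class MoodEstimator:
        def __init__(self, window=30):
            self.samples = {"gsr": deque(maxlen=window),
                            "heart_rate": deque(maxlen=window),
                            "eeg_attention": deque(maxlen=window)}

        def feed(self, channel, value):
            self.samples[channel].append(value)

        def mood(self):
            """Very rough qualitative reading of the quantitative feeds."""
            hr = mean(self.samples["heart_rate"]) if self.samples["heart_rate"] else 70
            gsr = mean(self.samples["gsr"]) if self.samples["gsr"] else 0.5
            if hr > 100 or gsr > 0.8:
                return "agitated"
            if hr < 65 and gsr < 0.3:
                return "calm"
            return "engaged"

    # Feedback loop: the augment responds, the audience sees the response and
    # (ideally) learns to steer its own bio influence on the next pass.
    def feedback_step(estimator, augment):
        state = estimator.mood()
        augment.update(mood=state)   # hypothetical augment API
        return state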

Augmentation as Visual Response– Up until the participation of FACT and Manifest.AR in the ARtSENSE initiative, instantiation and interaction of augmentation was considered primarily in the form of informational exchange. The FACT-ManifestAR project, as a primarily artistic visual exploration of augmentation interaction, brings the important step of investigating a visual response to audience input. The importance here is that in some ARtSENSE end-user systems it will be interesting and important to provide a visual demonstration or extrapolation of what an object’s meaning or history involves. In the case of the creation of artworks for the FACT-Manifest.AR exhibition this visual response will be based on a range of artistic decisions, from more explicit representations to aesthetic or conceptual associations of content. In the ARtSENSE end-user context, as was expressed by CNAM for example, the visual animation of augmented objects to dramatically show use, function and context was an important goal of this particular ARtSENSE Consortium participant. This visual response to the augmented interaction is a primary dimension explored in various ways in all of the projects for the FACT-ManifestAR exhibition.

Inside/Outside the institution– A number of works (expl-) encourage the creation or modification of personalized augmentation not just within the building of FACT, but, given the GPS dimension of this technology, above, around and in the entire area of Liverpool (not to mention some possibilities of intercontinental placement and/or exchange). These works expand the notion of the augmented institution to an augmented institutional area, with a new set of possibilities for audience interaction and expanded content context. Tamiko Thiel has suggested that this be visualized or accessed as live “cARtography” (see-). Thus a number of the works (expl-) relate the artwork directly to the map as “canvas”. Works which aggregate many augments through audience participation can be understood as a record of personal affect dimensions: choice of placement, movement, bio-level mood, memory mapping, gameplay incentives, social media communication, and community history.

Bio Sensing Calibration and Information for Visitors– At the Paris ARtSENSE conference in January we discussed the importance of calibrating bio sensing devices to new incoming users. In an institutional setting, no two users are likely to have the same levels of response, and the equipment needs to gauge this in advance. We might call this process individual user calibration. In addition there might be some basic instructions for wearing or operating the devices. The FACT-ManifestAR exhibition would create imaginative metaphors for carrying out this process such that it becomes part of the artist project and exhibition rather than a difficult set of instructions or delays before being able to encounter the artwork. A number of the ManifestAR projects employ this process to also generate AR content, so its administration would occur as a performative element of the artwork. These projects would also test the feasibility of carrying out the simplified introductory calibrations with minimal museum staff when the artists are not actually at the exhibition. All of these aspects of the process would provide important experience and precedent for any robust pursuit of ARtSENSE end-user goals in the context of a large public institution.
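
A minimal sketch of what such an individual user calibration might look like in code follows. The resting period, sampling rate and normalization are assumptions for discussion, not a tested procedure.

    # Sketch of per-visitor calibration: record a short resting baseline, then
    # express later readings relative to that baseline, so no two visitors need
    # the same absolute response levels. All numbers are placeholders.
    import time

    def calibrate(read_sensor, seconds=20):
        """Collect a resting baseline; read_sensor() returns one raw sample."""
        samples = []
        end = time.time() + seconds
        while time.time() < end:
            samples.append(read_sensor())
            time.sleep(0.1)
        baseline = sum(samples) / len(samples)
        spread = max(samples) - min(samples) or 1.0
        return baseline, spread

    def normalized(raw, baseline, spread):
        """Map a raw reading onto roughly -1..1 around the visitor's own baseline."""
        return max(-1.0, min(1.0, (raw - baseline) / spread))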

Specialized Mobile Device Applications– Parts of the FACT-Manifest.AR exhibition call for the development of specialized mobile device applications that incorporate the Layar API into a format that supports the particular functionality of different artist projects. These applications might create, modify and place augmentation and/or exchange information relative to the directives of specific artworks. This would address another expressed ARtSENSE interest in experimenting with modified (Layar) applications more suited to ARtSENSE goals in overlaying more interactive information. Not all these mobile device applications would necessarily need to have the Layar API embedded, since the relevant Layar layer could be launched just after user input to view the results. However, an integrated Layar API would no doubt be preferable. ManifestAR would ultimately like to develop its own application, which, based on its experience with this exhibition, would be tailored towards the smoothest, most versatile interaction with augmented artwork.
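
For discussion, here is a sketch of the kind of JSON response a Layar-style getPOIs endpoint returns for a layer of artist augments. The field names follow our recollection of the 2012-era Layar developer API and would need to be checked against the current documentation before any development.

    # Hedged sketch of a Layar-style getPOIs response built from our own augment
    # records; field names are from memory and must be verified against Layar docs.
    import json

    def get_pois(layer_name, augments):
        hotspots = []
        for a in augments:
            hotspots.append({
                "id": a["id"],
                "anchor": {"geolocation": {"lat": a["lat"], "lon": a["lon"]}},
                "text": {"title": a["title"],
                         "description": a["description"],
                         "footnote": a["artist"]},
                "imageURL": a["screenshot_url"],
            })
        return json.dumps({
            "layer": layer_name,
            "hotspots": hotspots,
            "errorCode": 0,
            "errorString": "ok",
        })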

Marker-based Augmentation– A couple of the projects proposed would explore the capabilities of marker-based augmentation in conjunction with Layar or Junaio. This allows the mobile device camera to recognize a geometric shape or image and then generate a variety of augmented media. This technology has received wide publicity through its use in magazines, such as Esquire. (Tamiko Thiel demonstrated the basics of this technology at the Paris ARtSENSE conference in January.) It would be useful as a possible enhancement for the iSTAR glasses and the ways they were planned to be used with the Valencia Kitchen at MNAD. In a couple of the works possible for the FACT-ManifestAR exhibition, marker-based recognition for AR would be employed with contemporary artworks as the base images. Again, while in this case the content generation might be more associative than informational, it still gives the opportunity to test the functionality of this type of image recognition.
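
Layar Vision and Junaio would handle the actual recognition in the works described above. Purely to illustrate the general principle of keying media to a detected marker, the sketch below uses OpenCV’s ArUco module (a different, fiducial-marker technique) and a hypothetical marker_to_media lookup; it is not the Layar or Junaio pipeline.

    # Illustration only: detect a geometric marker in a camera frame and return
    # the media keyed to it. Requires OpenCV 4.7+ with the aruco module
    # (opencv-contrib-python); not the recognition used by Layar Vision/Junaio.
    import cv2

    detector = cv2.aruco.ArucoDetector(
        cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50))

    def augment_for_frame(frame, marker_to_media):
        """Return the media keyed to the first marker found in this frame, if any."""
        corners, ids, _ = detector.detectMarkers(frame)
        if ids is None:
            return None
        return marker_to_media.get(int(ids[0][0]))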

Display and Documentation – The FACT-Manifest.AR exhibition will benefit the ARtSENSE initiative maximally if the visual display and documentation of works is professionally carried out. Display refers to the proper equipment for viewing augmented reality projects in process: a range of displays, from LCD screens to projectors to iPads. Documentation refers to ample video documentation of the projects in process. Good video footage of works in progress is hard to obtain for participatory projects in which the artist is preoccupied with audience interaction. Therefore attention to capturing a record of events will be important for the artists, FACT and ARtSENSE, for subsequent evaluation and promotion.

—–more currently in development -WP—-

• exhibition activities description:

—–to be developed—-


-exhibition design


-public engagement

-long-term maintenance

• space needs:

—–to be developed—-

galleries, in-between spaces, walls, surrounding outdoor space

floor plans

• specific equipment needs:

—–to be developed—-

• staffing needs:

—–to be developed—-

Specific Project Ideas:


EEG AR: Things We Have Lost, John Craig Freeman and Scott Kildall

The goal of “EEG AR: Things We Have Lost” is to develop a user interface that will allow participants to conjure up augments, using biometric sensor technology, simply by thinking of them existing at a specific location. A database of augments will be generated based around the broad theme of “Things We Have Lost” – things such as pensions, empires or dodo birds. Participants, or test subjects, will be outfitted with EEG brainwave sensors and asked to think of a specific object or idea from the database. Once a measurable and consistent pattern is detected, a database call will be issued which will instantiate an augmentation just in front of the participant’s current GPS location. The person will then be taken out into the city to see if it is possible to create and place augments just by thinking them into existence. These augments will remain at the location where they were produced and be visible on any iPhone or Android device.
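
One possible shape for the interaction loop is sketched below, with hypothetical stand-ins (read_attention, current_gps, place_augment) for the NeuroSky feed, the phone’s GPS and the “Things We Have Lost” database call; the threshold and hold time are guesses to be calibrated per participant.

    # Hedged sketch of the "think it into existence" loop described above.
    import time

    def detect_consistent_pattern(read_attention, threshold=70, hold_seconds=5):
        """Return True once the attention value stays above threshold long enough."""
        start = None
        while True:
            level = read_attention()          # e.g. a 0-100 attention value
            if level >= threshold:
                start = start or time.time()
                if time.time() - start >= hold_seconds:
                    return True
            else:
                start = None
            time.sleep(0.2)

    def think_object_into_existence(read_attention, current_gps, place_augment, object_id):
        if detect_consistent_pattern(read_attention):
            lat, lon = current_gps()
            # Instantiate the augment just in front of the participant's location.
            place_augment(object_id, lat, lon)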


Mike’s Sky Museum (or FACT Sky Museum) Will Pappenheimer

Mike Stubbs, the director of FACT, has claimed a large portion of the sky above the FACT building and Liverpool for a new FACT Sky Museum. Participants are invited to come to the museum to make and contribute works.

Bio-Expressionist Art– The participant uses the two-finger skin conductance bio-sensing system to draw pictures: the signal is recorded as graph fluctuations and simultaneously rotated in process to form complex line drawings, while the participant listens through headphones to a narrator guiding them through a series of emotional states. As the levels go up and down, LED RGB lights in the room change through different associated colors. Visitors learn to affect their emotional bio output, which creates a bio-Expressionist line drawing.
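
One way the mechanism could work, sketched with placeholder scaling and colour choices: the GSR trace is treated as a radius that rotates steadily, so fluctuations become a spiralling line drawing, and the current level drives the room’s LED colour.

    # Hedged sketch: GSR trace rotated into a line drawing, plus a level-to-LED
    # colour ramp. Angles, scale and the colour mapping are placeholder choices.
    import math

    def gsr_to_drawing(gsr_samples, degrees_per_sample=3.0, scale=100.0):
        """Return a list of (x, y) points forming the rotated line drawing."""
        points = []
        for i, level in enumerate(gsr_samples):         # level assumed in 0..1
            angle = math.radians(i * degrees_per_sample)
            r = scale * (0.2 + level)
            points.append((r * math.cos(angle), r * math.sin(angle)))
        return points

    def level_to_rgb(level):
        """Map a 0..1 GSR level to a calm-blue-to-excited-red LED colour."""
        level = max(0.0, min(1.0, level))
        return int(255 * level), 0, int(255 * (1 - level))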

Drawings in the sky– This line drawing will appear as a trail of airplane exhaust in front of the participant, as the composition is drawn or updated directly into an augmented drawing in the sky. Sky drawings appear helter-skelter all over the sky above FACT. The time spent by the participant making the drawing is recorded and assigned as the emotional investment value of each work. Thus the participant is paid for this contribution to the FACT Sky Museum.

Curators take control– The sky fills up with participant drawings. Periodically, the director Mike Stubbs and the curator Aneta Krzemien look at the drawings in the sky and decide which ones are good and which ones should be excluded. They are easily able to retire these rejected artworks to the back trash area of the current FACT Museum. Rejected participants are notified or informed through website statistics. They then have the opportunity to come back and try again to create and insert a new work.

FACT TOWERS Thanks You For Your Time  Will Pappenheimer


When visitors come to FACT they will see a wall of large text, or talk to the entrance attendant, about donating their time for an additional AR floor on the FACT building. Using a computer screen or the Layar app, they enter their choice of dedicated floor name, and the augment adds a floor stacked on top of the AR FACT building. Anyone can then see this tower from outside the building or from miles away in the city of Liverpool.
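
A minimal sketch of the stacking mechanic follows; the floor height, coordinates and the place_augment() call are placeholders, not an existing API.

    # Each donated floor name stacks a new AR floor above the previous one.
    FLOOR_HEIGHT_M = 4.0            # assumed storey height
    FACT_LAT, FACT_LON = 0.0, 0.0   # placeholder: fill in FACT building coordinates
    FACT_ROOF_ALT_M = 20.0          # placeholder altitude of the existing roofline

    def add_floor(tower, floor_name, place_augment):
        """tower: list of existing dedicated floor names, in stacking order."""
        altitude = FACT_ROOF_ALT_M + FLOOR_HEIGHT_M * len(tower)
        tower.append(floor_name)
        # Place the new floor augment directly above the AR FACT building.
        place_augment(name=floor_name, lat=FACT_LAT, lon=FACT_LON, alt=altitude)
        return altitude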


‘I Must Be Seeing THINGS’ by John Cleater, Feb 2012

‘I Must Be Seeing THINGS’ is an Augmented Reality experience that provides guidance on what to (not) look for in Abstract Art (for beginner, intermediate, advanced, and expert observers). What you (don’t) see will depend on many factors such as mood, weather, health, wealth, religion, or the color of your eyes.

Resources: The Rorschach Test, The Psychedelic Experience (LSD), and Max Ernst’s illustrated book ‘The Hundred Headless Woman’ (La Femme 100 têtes).

It’s not about what you think you see as much as it is about what you are looking at.
It’s less of a puzzle if you disregard the variations in pressure and precision of the lines.
Keep in mind that the shape of the THING may not be as obvious as it appears.
Each THING serves its own purpose, however, your function is as good as mine.
The familiar elements of the THING are put into question by calling attention to the unfamiliar obstacles.
The discipline applied to the method of revealing enough familiarity to allow for a balanced misconception is gone astray after counting to 3.
What you can’t see is behind a known reference. What you can see is not as interesting as it is interested.
Squinting may reveal deeper meaning in the THING but will disturb its current state.
The unraveling may be intended but the buckle appears to be latched either one notch too tight or one notch too loose.
The THING is at once animal, vegetable, and mineral.
If you follow the signs you will never find your way to the same place from any given point of departure.
“Public garbage dump, or: all the pauses are equally worthwhile” Max Ernst
“Germinal of the invisible eyes, the moon, and Loplop describe ovals with their heads so as to call up the seventh age following the ninth birth.” Max Ernst
“All doors look alike” Max Ernst


mARp Liverpool: “Crowds of Stories”
Bio-sensing for a participatory storytelling mARp. By Tamiko Thiel

[Images: two episodes from Bear's story mARp]

[pdf with extended concept for mARping Liverpool]

The extended proposal linked above suggests several ways to create participatory story mARps with different communities in Liverpool. Just one is listed here as an example.

“Crowds of Stories” – (working title)
Visitors would be asked to donate objects, toys or images to be used to generate participatory, crowd-sourced stories set in Liverpool. The staff creates simple 2D cutout .pngs of the objects to be used as augments. Then visitors use the ARtSense system to generate stories, as in the following example:

  • A visitor looks at a plush Liver Bird, a chipped piggy bank and a teddy bear.
  • The ARtSense system senses the visitor is not a football fan and is more interested in the teddy bear. It starts telling stories of the bear, showing a photo of the cut-out teddy bear augment in front of a serving of fish and chips at Chris’s in Rose Lane.
  • At some point ARtSense senses the visitor is very agitated, and suggests adding a new story. The visitor adds a story about how the bear is actually the real Banksy, and places its augment at the site of a piece of graffiti.
  • The augment is added automatically to the mARp. Other contributors receive an automatic message: “please go there and take a picture for us!”
  • Contributors see the new augment on the mARp, go to the site and can upload screenshots of the new augment, also adding their own commentary to the story mARp.


“mARp Me” – AR as a communication and messaging medium.
By Tamiko Thiel

[Images: example AR notes, Lisbon & Istanbul]

AR “messages” as social media: augments as a way of communicating with friends and contacts worldwide. If you will pardon the analogy, think of dogs sniffing the street poles where other dogs have left their messages, and then adding their own “notes” to the pole.

I published a concept sketch for this idea at Ars Electronica 2011.

I will be working with a group of young adults in Munich in summer 2012 to implement such a project, and potentially have similar projects in other locations as well. If the groups are still running in March 2013 we could set up a similar group in Liverpool and try an international communication experiment.

How to use the mARp for communication (a minimal visibility-filter sketch follows this list):

  • The mARp would be added to a social network structure, probably Google+.
  • Augments could therefore be designated private, shared to a group or public, and appear on the corresponding mARp.
  • I can drop an augment on a friend as a greeting card, and it appears wherever they are, superimposed on their current environment.
  • I can leave an augment for a group of friends at a specific place, augmenting that location with a special visual note.
  • I can “leave my mark” for all to see at a specific location, and it appears on a public mARp.
  • Private and group augments would not be generally viewable, but over time, as augments accumulate on a public Liverpool mARp, bio-sensors at the AR Lab at FACT could be used as the interface for visitors to peruse the Liverpool mARp, with the system showing them locations or content based on their interest levels.
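
The visibility rules in the list above could be as simple as the following sketch. The data shapes (owner, group, recipient fields) are hypothetical, and the real system would defer to the chosen social network’s own sharing model.

    # Sketch only: filter augment records down to what a given viewer may see.
    def visible_augments(augments, viewer, viewer_groups):
        """augments: list of dicts with hypothetical visibility/owner/group fields."""
        seen = []
        for a in augments:
            if a["visibility"] == "public":
                seen.append(a)
            elif a["visibility"] == "group" and a["group"] in viewer_groups:
                seen.append(a)
            elif a["visibility"] == "private" and viewer in (a["owner"], a.get("recipient")):
                seen.append(a)
        return seen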


English landscape/ rune painting in city spaces – Mark Skwarek

Full Project Description Page:

This project would be a work which generates nature in city spaces. The project will dovetail with my show in the 2012 Beijing Art Biennial, in which I am transforming the city space of Beijing, China. It creates an AR Shan Shui idealized landscape that will complement the stunning architecture of Beijing. The goal is to create a harmony between nature and city space: a vision of the future which allows the city public to experience the city and nature as one. I wish to do the same thing in Liverpool using traditional English landscape/rune painting. The project will generate traditional stylized AR motifs based on the user’s position, brain waves, and skin and finger sensors.
[Images: Crossing the Brook by J. M. W. Turner; Thomas Gainsborough]
Augmented Reality wearable- Mark Skwarek


The wearable device will do two things. First, it will change the user’s appearance according to thought and physical condition. Second, it will update the user’s surrounding environment [this taps into the technology of the English landscape/rune painting in city spaces project]. The project will be based on subversive wearable augmentation and research with AR and wearables. Some examples of my work as a researcher in residence at NYU Polytech are at http://www.youtube.com/watch?v=ube9z40ad-E and I wish to build on my previous work at the Wall Street protest and the Island of Hope project.

I would like to develop a system which streams content according to will/thought plus skin and finger sensors. The wearable would be reactive to brainwave, skin, heart and finger sensors. The project would stream different appearances in real time to the user according to the input devices. The user could change appearance at will. A subversive activist approach would be explored.


Granting People’s Wishes- Mark Skwarek

[Image: granting a wish at the Dumbo Arts Festival]

The core idea of this project would be to create what people are wishing for.

This project will combine the brainwave, skin, heart, and finger sensor technologies into an AR performance in Liverpool where I grant people’s wishes while using the AR wearable. The ritual will draw from Liverpool and from my Island of Hope performances, in which I led an AR parade dedicated to the idea of hope. It will also draw on the work I did in Venice, as part of the Venice Biennial, where I created wishes from the global community’s Twitter hashtag #hope to try to find what people were hoping for.


“Invisibly experiencing invisible things”   Sander Veenhof

In some contexts or on some occasions, it will not be possible or appropriate to wave your mobile around in the air to interact with the local virtual environment around you. Think of sitting in the audience of a conference, or waiting at a crowded bus stop. People at the conference might take it as a signal that you’re not interested; people at the bus stop might think you’re chasing things that aren’t there. They are right – and wrong, at the same time. But instead of explaining to people about the new reality, which is partly intangible and for some (or most) of them still invisible, it is better to interact with it unobtrusively. At the conference, you might be unobtrusively playing an arcade game in the virtual multi-user space above your heads with other people in the audience, but you might as well be taking notes in a futuristic augmented reality way, drawing sketches based on gathered inspiration, or checking out the program in the room next door. In each of these cases, it would be helpful to do this while acting ‘normal’ in a physical sense.

Outdoor: “Unobtrusive Arcade Action”
Conference: “Unobtrusive conference efficiency tools”

Step 1: invisibly controlling the controls

In a first research step, it is important to design a mechanism of control, powered by the mind, using various kinds of body sensors. Visual feedback on the interaction is viewable through AR goggles. The interfaces should be flexible or switchable, depending on the requirements of the context.

Step 2: interactions in the multi-user environment

In most cases, the virtual escapade will be a collective experience in the multi-user virtual space. Arcade games are the most obvious option, and a telling way to accompany the launch of the concept, but developing mechanisms that benefit a user in a professional context will be a more challenging part of the concept development.


Bio(s) Feedback – Geoffrey Alan Rhodes

BIO(S) FEEDBACK revisits and reminisces about psychedelic experimentation in bio-feedback machines, sensory deprivation hallucination, group gestalt, and LSD. The goal is to literally create and manifest a “collective mood” of the participants. In a set space, participants wearing heart rate monitors or EEGs will contribute to a collective hallucination: a virtual object pulsing in the center of the space. Its changing shape, colors, and luminosity will be direct analogues of the continuous stream of output of all those within a set radius, reacting to their proximity, the count of individuals, and affects both psychic and physical (heart and brain feedback). When surrounded, the animated object becomes a collective mood made manifest, hovering between the participants, capable of reflecting synchronization and resonant attributes in heart rate and brainwave, or inversely of expressing dissonance in the affects of those present. The feedback object presents itself as both a game object to be manipulated and a reflection of those present.
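
A rough sketch of how the collective feeds might drive the hovering object: mean levels set the pulse and luminosity, while the spread across participants stands in for synchrony versus dissonance. The mapping constants are placeholders, not a proposal for the final mapping.

    # Hedged sketch: aggregate per-participant feeds into object parameters.
    from statistics import mean, pstdev

    def collective_mood(heart_rates, attention_levels):
        """heart_rates in BPM, attention_levels in 0..1, one entry per participant."""
        if not heart_rates:
            return {"pulse_hz": 1.0, "luminosity": 0.2, "jitter": 0.0}
        hr, att = mean(heart_rates), mean(attention_levels)
        synchrony = 1.0 / (1.0 + pstdev(heart_rates))   # 1 = perfectly in sync
        return {
            "pulse_hz": hr / 60.0,                      # object pulses at mean heart rate
            "luminosity": min(1.0, att),                # brighter with collective attention
            "jitter": 1.0 - synchrony,                  # dissonance distorts the shape
        }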


Biomer Skelters  Will Pappenheimer and Tamiko Thiel

Biomer Skelters (“biome” + “helter-skelter”) is a personal wild-growth forest-to-rainforest AR ecosystem propagator designed initially for the Liverpool area. The system is designed to speed up the forestation of the County of Merseyside. Participants become “Biomer Skelters” as they start to spread vegetation. The propagation starts with trees native to the County, such as beeches and English oaks, but, anticipating warmer climates, begins to generate plant species more native to a tropical rainforest. The idea is that if we don’t do something about global warming, this is the kind of transformation of zones that might happen. Increased propagation causes increased helter-skelter and dense plant growth. The goal of the project is to completely cover the Liverpool area with rainforest.

This project could be combined with Mark Skwarek’s “English landscape” project to become a battle of “native plants” versus “invasive species,” in which different teams try to take over the city space of Liverpool. Hmm, or is this politically too inflammatory?!

Propagation is personalized through two methods (a sketch of the propagation rule follows the two methods):
– Within FACT, “Biomer Skelters” are equipped with a heart-rate-sensitive device and stand in front of a large monitor showing either the mARp or a Google Earth map of Liverpool with the existing vegetation of previous participants. After starting the program, they can click on an area of the map that will become their starting point. A tracking device above, or a FACT GPS system, tracks the lines of their movements in the building, scaled onto the Google Map of Liverpool, and as they move they leave a trail of vegetation. The speed of their heartbeat, measuring excitation, determines how much vegetation is propagated.

– Outside of FACT in the city, participants can carry their smartphone and wear a chestband to track their movement through GPS and leave a trail of vegetation in their path. This path will be visible on the mARp or a Google Earth map of Liverpool back at FACT, but off-site participants will also be able to launch Layar and see the results of their and others’ propagation as augmented vegetation objects.
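
As a sketch only, the propagation rule might look like the following, where each GPS fix drops a number of plants proportional to heart-rate excitation and the species mix drifts from native trees toward rainforest as the total grows. Species lists, thresholds and the resting heart rate are illustrative placeholders.

    # Hedged sketch of the Biomer Skelters propagation rule described above.
    import random

    NATIVE = ["English oak", "beech"]
    TROPICAL = ["kapok", "strangler fig", "banana palm"]

    def propagate(trail, heart_rates, resting_hr=65):
        """trail: list of (lat, lon) fixes; heart_rates: BPM at each fix."""
        plantings, total = [], 0
        for (lat, lon), hr in zip(trail, heart_rates):
            count = max(1, int((hr - resting_hr) / 10) + 1)  # more excitement, more plants
            for _ in range(count):
                pool = TROPICAL if total > 500 else NATIVE   # warming-climate drift
                plantings.append({"lat": lat, "lon": lon,
                                  "species": random.choice(pool)})
                total += 1
        return plantings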


Incomplete Paintings Will Pappenheimer

This project would be composed of one or more paintings I would make with the idea that they are markers or images for marker-based AR. The primary image shape, or multiple shapes, would trigger 3D augments (perhaps with accompanying sound) that would function as an overlay. What would appear as an augmented juxtaposition would relate to or “complete” the painting, as if it were an alternate other half.

There would be a number of ideas for content and content triggering. One possibility would be to access the “lost objects” database generated by John Craig Freeman and Scott Kildall’s project “Things We Have Lost.” This reflects the idea of paintings as conjuring a viewer’s memory; in this case the painting finds what viewers have collectively lost. An alternate method of triggering would be to use iSTAR glasses with eye tracking to trigger AR objects in various parts of the painted composition.
