Our first group of artists and hackers has completed the inaugural season of Art-A-Hack. We’ve had some exciting twists and turns, and the teams will demonstrate their work at the Livestream Public building (formerly 3rd Ward) on Saturday.
We’ve posted a full one-page write-up of all the teams’ activities, or you can skim the summaries below and click through to the individual team pages.
Where does Big Data meet fortune-telling? This team used public social media profiles to stitch together information about unsuspecting participants. With internet-enabled wearables like Google Glass, this kind of information can be accessed in real time by wearers. What does this mean for the level of trust we have in society?
There is an intimate connection between constructed virtual reality and the reality that depth sensors reconstruct from the world. This team combined a Microsoft Kinect with an Oculus Rift to find the limitations and the sweet spots, constructing their own kind of ‘extended reality’.
This team explored full-body interaction with virtual spaces: walking from a sunny virtual beach to a city at night while, all the time, a shadowy silhouette follows along. You can try to approach the mystery figure, but she or he is always one step ahead.
What innovations can be made on immersive Virtual Reality experiences like the Oculus Rift? How can the experience become more physically interactive? This team investigated the nuances of gesture, focusing on the hands, how the body orients in space, and the use of torque and body weight.
How do we communicate in public space? This team came to be captivated by the Whispering Gallery at Grand Central Terminal. Their concept is that these ‘hidden conversations’ need not be lost to time: they can be captured and replayed with a small, portable rig of a microphone, a speaker, and a battery-powered computing device.
Eva Lee applied to Art-A-Hack proposing visualization of electroencephalographic (EEG) brain data drawn from neuroscientist Dr. Jose Raul Muradas’ research on prayers for oneself compared with prayers for others. The results were process-oriented prototypes that included sonic and animated sketches.
This project was a collaboration investigating how we encode, decode, and simulate time and space. The team developed their ideas in two parallel tracks: Ast created a physical installation that subverted new display hardware called ‘Octolively’, while Levenbach delved into flocking algorithms, viewing process as an alternative metaphor for time.
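For readers unfamiliar with flocking algorithms: the classic formulation (Reynolds’ “boids”) drives each agent with three local rules — cohesion, alignment, and separation. The sketch below is not the team’s code, just a minimal, hypothetical illustration of those three rules; all names and weights are invented for the example.

```python
import math
import random

class Boid:
    """A single agent with a position and velocity in 2D."""
    def __init__(self, x, y):
        self.x, self.y = x, y
        self.vx = random.uniform(-1, 1)
        self.vy = random.uniform(-1, 1)

def step(boids, radius=10.0, sep_w=0.05, ali_w=0.05, coh_w=0.005):
    """Apply one update of the three classic flocking rules."""
    for b in boids:
        neighbors = [o for o in boids
                     if o is not b and math.hypot(o.x - b.x, o.y - b.y) < radius]
        if not neighbors:
            continue
        n = len(neighbors)
        # Cohesion: steer toward the neighbors' center of mass.
        cx = sum(o.x for o in neighbors) / n
        cy = sum(o.y for o in neighbors) / n
        b.vx += (cx - b.x) * coh_w
        b.vy += (cy - b.y) * coh_w
        # Alignment: nudge velocity toward the neighbors' average velocity.
        b.vx += (sum(o.vx for o in neighbors) / n - b.vx) * ali_w
        b.vy += (sum(o.vy for o in neighbors) / n - b.vy) * ali_w
        # Separation: push away from neighbors that are too close.
        for o in neighbors:
            d = math.hypot(o.x - b.x, o.y - b.y)
            if 0 < d < radius / 2:
                b.vx += (b.x - o.x) / d * sep_w
                b.vy += (b.y - o.y) / d * sep_w
    # Move every boid by its (updated) velocity.
    for b in boids:
        b.x += b.vx
        b.y += b.vy
```

Run `step` in a loop and the population drifts from random motion toward coordinated, flock-like movement, which is what makes the process itself readable as a metaphor for time.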
How can the ubiquity of smartphones change the way we think about the environments around us? This team proposed a geocaching adventure, similar to a scavenger hunt, in which smartphone users leave and discover treasures at marked physical locations for other users to interact with.
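The core mechanic of geocaching is a proximity check: compare the phone’s GPS fix against stored cache coordinates and reveal a cache only when the user is within range. The sketch below is not the team’s implementation, just a minimal illustration using the standard haversine formula; the function names and the 50-meter radius are assumptions for the example.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS coordinates."""
    R = 6_371_000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(a))

def nearby_caches(user_lat, user_lon, caches, radius_m=50):
    """Return the caches within radius_m of the user's position."""
    return [c for c in caches
            if haversine_m(user_lat, user_lon, c["lat"], c["lon"]) <= radius_m]
```

A client would call `nearby_caches` on each location update and surface any hits as discoverable treasures.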
In what ways can the haptic possibilities of smartphones be channeled into dance? How can custom-designed wearables enhance or change the nature of a performance? This team set out to answer these questions, and the result was a chest-mounted performance wearable that allows a dancer’s body to control sound.