

14. 03. 2007  14:46 | 14_Reblogs_Publications_Expos

Re-blog of Rolling microfunctions

wired_stirling_m1.jpg
-
The Rolling microfunctions project (an "autonomous dot matrix printer for functions") was re-blogged and briefly commented on two weeks ago by Bruce Sterling on his Beyond the Beyond blog (WIRED Magazine).
The Rolling microfunctions project was a result of Variable_environment's Workshop#4 with fabric | ch and the SWIS-EPFL laboratory.

Posted by patrick keller at 14:46

14. 03. 2007  14:03 | 09_EPFL

User-driven distributed control of collective assembly using mobile, networked miniature robots

Following the "Rolling microfunctions" workshop on swarm networked robotics between ECAL, fabric | ch and the SWIS-EPFL laboratory, Dr. Julien Nembrini (in charge of the project for the Swiss Federal Institute of Technology) published the following post on the SWIS laboratory's research webpages:
------------------------
swis-post1.jpg swis-post2.jpg
-
"User-driven distributed control of collective assembly using mobile, networked miniature robots
If swarm systems are to be used in human environments, a protocol for human-swarm interaction has to be defined to enable direct and intuitive control by users. Up to now research has concentrated on the difficult problem of controlling the swarm, whereas the interaction problem has been largely overlooked.
-
In collaboration with interaction design researchers, the project developed a demonstrator consisting of a fleet of mobile "lighting" robots moving on a large table, such that the swarm of robots forms a "distributed table light". In the presence of human users, the group of robots quickly aggregates to form a lamp whose shape and function depend on the users' positions and behaviors.
-
The whole set-up consists of a collection of small robots moving and interacting on a table, a camera-based position tracker and a human-computer interface. The swarm-user interaction is designed as follows: the target configuration of the aggregate is controlled through crude position and attitude tracking of the users around the table. User tracking and robot tracking are integrated in software, and the configuration information as well as the positioning information is then sent to the robots.
-
Considering the specific task of ordered aggregation as a benchmark, the project studies a simple algorithm able to control the geometry of an aggregate consisting of embedded, real-time, self-locomoted robotic units endowed with limited computational and communication capabilities. The robots use only infrared proximity sensors and wireless communication.
-
In this case, the aggregation problem being human-centered, time-to-completion becomes critical: users cannot be expected to wait too long for the robots to aggregate, and the system has to react quickly to changes in the users' attitudes. This is the reason for defining a hybrid algorithm in which global positional information is sent to the robots, which then choose their actions accordingly in an autonomous manner.

The project results from a collaboration with designers from Ecole Cantonale d'Art de Lausanne"
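To make this hybrid scheme concrete, here is a minimal sketch of the idea as described in the post: a central tracker derives a target configuration from the users and broadcasts it, and each robot then steers toward its assigned slot autonomously, letting its local infrared proximity sensing override the global guidance. All names, numbers and the greedy assignment are invented for illustration and assume one target slot per robot; this is not the actual SWIS-EPFL code.

```python
import math

def assign_targets(robot_positions, target_slots):
    """Greedy assignment of each robot to its nearest free target slot
    (assumes as many slots as robots)."""
    free = list(target_slots)
    assignment = {}
    for rid, pos in robot_positions.items():
        slot = min(free, key=lambda s: math.dist(pos, s))
        assignment[rid] = slot
        free.remove(slot)
    return assignment

def robot_step(pos, target, ir_obstacle, speed=0.02):
    """One autonomous control step on a robot: steer toward the broadcast
    target, but let the local infrared sensor take priority (here: stop;
    a real controller would turn away from the obstacle)."""
    if ir_obstacle:
        return pos
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    dist = math.hypot(dx, dy)
    if dist < 1e-3:
        return target
    return (pos[0] + speed * dx / dist, pos[1] + speed * dy / dist)
```

The split is what keeps the system reactive: only the assignment uses the global (camera) view, while each motion step is decided on board from local sensing.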

Posted by patrick keller at 14:03

14. 03. 2007  12:46 | 01_Mobility_&_Mashup_Situations , 02_Project_Links_&_Ressources_4 , 12_Curated_posts

Will we "learn" again from Las Vegas (in Google Earth)?

gearth_4.jpg
-
gearth_6.jpg
-
The title of this post is of course a bit of a joke, but it is above all a nod to the famous book by Robert Venturi & Denise Scott Brown, Learning from Las Vegas, published in 1972.
While following some GE blogs from a distance, I noticed a few weeks ago quite a number of posts announcing new, realistically rendered "buildings" in "Las Vegas". This sounds like a comment on the real city!
As Las Vegas is already mostly the transposition of existing buildings or monuments into an entertainment city, an "island" in the middle of the "desert", a "faked reality made real", a conditioned space, it already has a lot in common with a "2nd world": the duplication of an existing situation, with all the problems we can see in Las Vegas. It hardly generates more than an amusement park for grown-ups...
By extension, we could say that Google Earth is becoming a kind of global Las Vegas (even if it is not yet an entertainment park at the time we write these lines), but that Las Vegas inside Google Earth has the hidden potential to become something much more conceptual, intriguing and complicated: a Las Vegas located both as an island in the desert and inside a "Global Las Vegas"... An "überpop" city.

gearth_2.jpg
-
gearth_1.jpg
-
gearth_3.jpg
-
It now looks pretty clear to everyone that Google is building a "geo-mapped, search everything from anywhere" system (even locations in books will be mapped), with references potentially working in all directions: from digital to real, real to digital, fiction to real, digital to fiction, real to real, digital to digital, etc.
With SketchUp as a free modeling tool, and therefore the possibility for everyone to create new Google Earth layers or objects (as well as any geo web content), Google just needs to turn GE into a multi-user, 2nd Life-like universe to get a kind of ultimate 2nd-world project and connect everything: get geo-referenced data on your future cellphone, hybridize digital and real life, play with your avatars on the GE layer you want or have just created, fake your real home, etc.
These GE images give a good idea of the complex, mediated and layered world we could be living in over the next decades. This could become the "second learning from Las Vegas": a hybrid fake, a highly layered and mediated space that, if combined with powerful visual AR mobile devices/software such as Nokia's MARA, could produce an instant real-soft city/earth.
All that is missing now is the concrete construction of some Google Earth icons in the real Las Vegas, of a casino-hotel based on a successful first-person shooter, and the mish-mash mapping of everything into everything, to get the look of this potential future city. A city where you won't be able to tell what came first, second or third, what is physical or digital, real or fake, etc.
We can now discuss whether this is good or bad (I mean, do you really like eating a deep-dish pizza on the Piazza San Marco in Las Vegas!!?), but the process looks to be on its way...

Posted by patrick keller at 12:46

14. 03. 2007  10:38 | 02_Project_Links_&_Ressources_4 , 12_Curated_posts

Technology Review 10: Augmented Reality

nokiaAR.jpg
-
Nokia Research is still working on its MARA (Marker-less Augmented Reality) project, mostly for location-based services.
The MIT magazine Technology Review cites it as one of the "significant emerging technologies" in its annual 10 Emerging Technologies report. Read the article about it here.

Posted by patrick keller at 10:38

03. 03. 2007  15:26 | 04_Workshop_4 , 10_Partners , 12_Curated_posts

fabric | ch

fabric | ch, a Swiss-based "architecture & research" studio whose works link worldwide networks and local space, materiality and immateriality, ... (and which is therefore specialized in the creation of experimental spaces and architectures closely connected to emerging technologies, information and communication technologies in particular), has been part of the project since its start back in June 2005 and has worked on it up to now.
Their work has mainly been to develop spatial propositions and to produce spatial tracking software for Workshop_4; they will also work on extensions to the AR Toolkit software, linking it with some of their open-source spatial software (Rhizoreality.mu) as well as with the development done at the EPFL.
-
fabric | ch is composed of two EPFL architects (Christophe Guignard & Patrick Keller), a telecommunication engineer (Stéphane Carion) and a computer scientist (Dr. Christian Babski).
Their works have been presented or exhibited mainly in Europe and the Americas (Siggraph, ISEA, File Rio & Sao Paulo, Centre Culturel Suisse - Paris, PixelAche, ART | Basel, DIS Boston, Architectural Association - London, Festival Lyon Lumières, MAMCO, EPFL, ICA - London, etc.)
-
fabric_s.jpg
-
rhizoreality.jpg

Posted by patrick keller at 15:26

03. 03. 2007  15:10 | 04_Workshop_4 , 09_EPFL , 10_Partners , 12_Curated_posts

EPFL - SWIS - Swarm-Intelligent Systems Group

The Swarm-Intelligent Systems Group of the EPFL (Swiss Federal Institute of Technology), with Prof. Alcherio Martinoli and postdoctoral assistant Julien Nembrini, has joined the Variable_environment project and worked on Workshop#4 between June and December 2006.
Prof. Martinoli's laboratory belongs to the School of Computer and Communication Sciences, Institute of Communication Systems of the EPFL.
-
i&C_swarm_s.jpg

The SWIS laboratory is a member of Mobile Information & Communication Systems (MICS), a National Center of Competence in Research.
-
epfl_mics_s.jpg

The laboratory's area of interest is swarm-intelligent collective robotics.
Within the context of this collaboration, we will work with the E-Puck platform (see image below). The goal is to develop a human / swarm-bot collaboration, with a minimal spatial & lighting function for the bots.
-
epuck.jpg

Posted by patrick keller at 15:10

03. 03. 2007  14:25 | 04_Workshop_4 , 12_Curated_posts

Rolling microfunctions / A scenario

ROLLING MICRO-FUNCTIONS (for SOHOs)
Variable environment's Workshop by fabric | ch and SWIS-EPFL.
Design brief & AD by fabric | ch
Object design by Laurent Soldini & Julien Ayer
-
What would happen if you were living, entertaining your friend(s) and working in one single room? In a space that would therefore almost naturally evolve between very private functions and public ones, where the shape of the space wouldn't change but where functions would slowly migrate from one into another without the "user(s)" really even noticing (the status of the space would be movement).
"Rolling micro-functions" is an attempt to illustrate and develop working propositions around this prospective theme at a micro-space scale (a long table): our scenario for the workshop is one room equipped with a long table and several chairs (ideally something like the Bouroullecs' Joyn table for Vitra, or even a hybrid bed-table --see link 1 or link 2--, had it been further developed) where those evolving functions would occur (working, eating, relaxing and even sleeping).
In this room, a tracking system that gives information about the user(s)' activities/configurations will be necessary, as well as a set of robotic micro-functions that can reconfigure themselves according to that captured information, and maybe according to other information or invisible layers as well (networked information, digital world, stock quotes, dynamic data, electromagnetic fields, live data from air and biological tracking or from weather stations, news, etc.)
While most of our architectural spaces today are structured aggregations of mono-functional, usually partitioned rooms (a room for sleeping, a room for cooking, a room for watching TV or eating, a room for bathing, etc.), a functional approach inherited from the modern period (one that consumes a lot of space and, by the way, also contributes to energy consumption problems), this project tries to suggest a different, speculative approach: the densification (urban room?), multiplication and variation of functions within one space, and therefore its evolutionary and continuous nature over time (from private to public and back, etc.).
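For the technically minded, the "slow migration" of functions can be pictured as a room state that drifts between functional configurations rather than switching abruptly. The sketch below is purely illustrative: the activity labels, function names and rate are invented, not part of the workshop software.

```python
# Hypothetical mapping from a tracked activity to the micro-function
# the room should tend toward (names invented for illustration).
FUNCTION_FOR_ACTIVITY = {
    "two_users_facing": "candlestick",
    "single_user_working": "desk_light",
    "user_resting": "night_light",
}

def migrate(current_mix, activity, rate=0.05):
    """Drift the blend of active micro-functions one small step toward
    the configuration suggested by the observed activity, so the space
    is always in movement rather than flipping between fixed states."""
    target = FUNCTION_FOR_ACTIVITY.get(activity)
    return {fn: w + rate * ((1.0 if fn == target else 0.0) - w)
            for fn, w in current_mix.items()}
```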
-
Note1: this workshop took place over several months, between July and November 2006. As it involved software development as well as design proposals, it needed more time than some of the other workshops (with this project we wanted to reach "working demo" level as well as "fiction demo" level), even if it was not full-time work.
As part of the necessary technology pre-existed the workshop (E-Puck robots for education) and came with clear constraints (size, shape and topology, movement, computing capacities, type of sensors, etc.), the attitude here is rather to illustrate, at a small scale, certain principles about our understanding of contemporary space (continuous, layered and variable rather than binary, partitioned and fixed) and to propose speculative artifacts for it. We also limited the information sent to the robots to local, user- and camera-based tracking.
We therefore work here with micro-spaces and micro-functions that are not very convincing as potential future products (who would pay $600 for a rolling ashtray?). But that was not really the purpose in this context; we wanted rather to experiment with (micro-)architectural agents/behaviors, and the results could/should be extrapolated to bigger scales.
-
Note2: the project implies the development of software ("tracking of spatial configurations") to pass information about spatial usage to the "E-Puck" robots. The development of this architectural software (webcam-based) has been undertaken by fabric | ch.
The robots themselves are also tracked, by another camera and software under development at the EPFL.
Finally, a set of rules (behaviors) for positioning the robots according to the user(s)' configurations is also necessary. It acts as a kind of grouping language for the robots. The overall system therefore resembles something in between a "dot-matrix printer for micro-functions" and an "autonomous system" (swarm-intelligent). The idea is not that the robots "bring you an ashtray when you need one", which would be uninteresting, but rather that they illustrate, through functional propositions and configurations, their understanding of what is going on in the room.
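To give an idea of what such a "grouping language" could look like, here is a minimal sketch: each recognized user configuration maps to a grid-based target pattern (dot-matrix style) that is then sent to the robots. The patterns, names and grid pitch are invented for illustration; the actual rule set used in the workshop differs.

```python
GRID = 0.10  # hypothetical grid pitch on the table, in meters

RULES = {
    # recognized user configuration -> target cells (col, row) on the grid
    "two_users_facing": [(4, 4), (4, 5), (5, 4), (5, 5)],  # candlestick cluster
    "one_user_reading": [(2, 3), (3, 3), (4, 3)],          # reading-light line
    "table_empty":      [(0, 0), (9, 0), (0, 9), (9, 9)],  # park in the corners
}

def target_positions(user_configuration):
    """Translate a recognized user configuration into metric target
    positions for the swarm, the way a dot-matrix printer addresses
    dots on its grid."""
    cells = RULES.get(user_configuration, RULES["table_empty"])
    return [(col * GRID, row * GRID) for col, row in cells]
```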
-
00_NewGammeNB_m1.jpg

Posted by patrick keller at 14:25

03. 03. 2007  14:23 | 04_Workshop_4 , 12_Curated_posts

E-Puck technology

The E-Puck is an existing technology developed at the EPFL for educational purposes. It has limited built-in sensing, displacement and communication capacities, but it can be extended with additional hardware layers, like a sandwich... In our case, for our working demo and algorithm development, we will build a "lighting" robot only.
As you can see in the image below, lots of E-Pucks have already been built, so as to be able to implement "intelligent" & swarm robotic behaviors.
-
Link to E-Puck resources online.


epuck_01.jpg
-
epuck_03_s.jpg epuck_02_s.jpg

Posted by patrick keller at 14:23

03. 03. 2007  14:18 | 04_Workshop_4 , 12_Curated_posts

Sources...

This set of images (quickly googled, flickred, ...) might look a little bit unfocused to act as a "source of inspiration" for this particular Rolling microfunctions project. But I can give short explanations...
-
The first image is of course a historic one, a reference that we already mentioned in the context of Variable_environment/: the Walking City utopia of the English experimental architecture studio Archigram. It is in fact more a small tribute or a nod to their work than a reference, as we are working at a totally different scale... (but not without reason). "Sushi on the go" was also an idea we had in mind while working on this project: that way of eating where all the sushi pass in front of your eyes and you pick one when you need or like one. We had a similar idea for the functions: functions like sushi...
Then there are 4 images of dot-matrix fonts (one on paper --FF Dot Matrix Regular by Cornel Windlin & Stephan Muller, 1995--, the other on an LED screen) and of old dot-matrix printers: our small rolling robots could act as simple dot-matrix functional patterns and aggregate themselves under certain rules, just as dot-matrix fonts and printers are based on a grid system.
Finally, ants, curling and snooker have something to do with the behavioral and kinetic aspects of the robots: ants for the redundancy of the system in achieving a goal (though this has more to do with the scientific research of Prof. Alcherio Martinoli's SWIS-EPFL laboratory), and curling or snooker for their movements, in particular their relation to dot-matrix-like systems and the way one configuration can quickly change into another.

walking_city.jpg sushi_on_the_go.jpg
-
FF_Dot_Matrix_TwoRegular.gif
dotmatrix_printer.gif dot_matrix_printer2.jpg
dotmatrix.jpg
-
ant_party.jpg curling_2.jpg
curling_1.jpg snook2.jpg
snook1.jpg

Posted by patrick keller at 14:18

03. 03. 2007  14:15 | 04_Workshop_4

Other related works & sketches

RIMG_0001_s.jpg RIMG_0002_s.jpg
-
We first had in mind working with flying flocks of robots (see the sketches above, from spring 2006), still with the same kind of functional/spatial variability in mind. This was partly because Prof. Alcherio Martinoli and Dr. Julien Nembrini had already worked on a similar project.
It would probably have been more convincing at an architectural scale, but it turned out to be far too expensive and outside our time frame. That's why we switched to the E-Puck solution. The two sketches shown here were linked to that first approach.
At the time of the project, the Mascarillons project and the Flying Flock (compare them to Archigram's Instant City project, again!) were works we had seen. Since then, Ruairi Glynn and the Bartlett School of Architecture have done research in that direction, as has the Art Center Pasadena. We were also interested in the LIS-EPFL blimp because it uses vision to fly and therefore needs a fairly stable visual environment...

Posted by patrick keller at 14:15

03. 03. 2007  14:14 | 04_Workshop_4

Sketches 1

It started with this sketch by fabric | ch of an "automatic candlestick lighting configuration" that would appear when two persons sit facing each other (that's why we chose the "light" function for the demo robots, btw); other configurations and functions then naturally came into the loop. We then worked on the design of the robots themselves as a multi-functional, modular and mobile system.

RIMG_0008_s.jpg RIMG_0007_s.jpg
RIMG_0004_s.jpg RIMG_0012_s.jpg
-
RIMG_0017_s.jpg
RIMG_0015_m.jpg
RIMG_0014_m.jpg
-
RIMG_0032_s.jpg RIMG_0031_s.jpg
RIMG_0030_m.JPG

Posted by patrick keller at 14:14

03. 03. 2007  14:10 | 04_Workshop_4

Behaviors, rules and grid based basic patterns

We finally used a 160cm x 160cm table for our tests. Much smaller than we initially wanted... but this was due both to the size of the room we had for the tests and to development comfort.
We produced a webcam tracking configuration for this particular table (8 cameras, 2 Mac minis for the user tracking) and a set of organizational rules for the robots. The robots did not have to fully reach a specific configuration when a given users' configuration was activated, but rather to tend toward its achievement.
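"Tending toward" a configuration can be made precise with a simple acceptance test: the swarm is considered good enough once most target slots are roughly covered, so that changing user configurations can redirect it at any time. The tolerance and threshold below are invented for illustration, not values from the workshop (Python 3.8+ for math.dist).

```python
import math

def coverage(robot_positions, targets, tol=0.05):
    """Fraction of target slots that have some robot within `tol` meters."""
    hit = sum(1 for t in targets
              if any(math.dist(p, t) <= tol for p in robot_positions))
    return hit / len(targets)

def good_enough(robot_positions, targets, threshold=0.75):
    """Accept partial achievement: the configuration only needs to be
    suggested, not completed, before the users reconfigure again."""
    return coverage(robot_positions, targets) >= threshold
```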

RIMG_0020_m.jpg
-
RIMG_0021_m.jpg
-
RIMG_0023_m.jpg
-
RIMG_0022_s.jpg RIMG_0023_s.jpg
RIMG_0025_m.jpg RIMG_0026_s.jpg

Posted by patrick keller at 14:10

03. 03. 2007  14:07 | 04_Workshop_4 , 12_Curated_posts

Workshop#4 result: VTSC (Visual Tracking of Spatial Configuration)

The Visual Tracking of Spatial Configuration software was developed by Christian Babski of fabric | ch for Workshop#4. It is server/client-based tracking software to which you can add any number of tracking cameras and computers (4 cameras per computer).
The principle is that you can draw any number of "zones" in any camera view; these zones can then be occupied (by a user, an object, etc.) or not (on/off status). Combining 2 cameras that look at the same part of space from different points of view (top and side, for example) lets you know not only whether that part of space is occupied, but also, for example, whether a person is standing or sitting in the zone. The server centralizes the on/off status of the different cameras' zones, matches it against a configuration file (i.e. this zone + this zone "on" equals "a user is sitting on this chair") and passes this information on to other applications (in our case the robots, thanks to Bluetooth communication).
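Here is a minimal sketch of that zone logic as described above: per-camera clients report named zones as on/off, and the server matches the combined state against configuration rules, preferring the most specific match (so "standing" wins over "sitting" when the upper-body zone is also on). Zone names, rules and the example state are invented for illustration, not taken from the VTSC code.

```python
# Occupancy reported by the per-camera clients: {(camera, zone): bool}
zone_state = {
    ("top_cam", "chair_1"): True,
    ("side_cam", "chair_1_upper"): False,  # upper-body zone is empty
}

# Configuration file: all listed zones must be "on" for the label to hold.
CONFIGURATIONS = {
    "user_sitting_chair_1": [("top_cam", "chair_1")],
    "user_standing_chair_1": [("top_cam", "chair_1"),
                              ("side_cam", "chair_1_upper")],
}

def match_configuration(state):
    """Return the most specific configuration whose zones are all on;
    the result would then be forwarded to other applications (to the
    robots, over Bluetooth, in the workshop set-up)."""
    matches = [name for name, zones in CONFIGURATIONS.items()
               if all(state.get(z, False) for z in zones)]
    return max(matches, key=lambda n: len(CONFIGURATIONS[n]), default=None)

print(match_configuration(zone_state))  # -> user_sitting_chair_1
```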
-
You can follow the development of the architectural software in this blog:
_ VTSC - Tech. Review
_ Exemple d'utilisation du Tracking Vidéo (VTSC)
_ Video Tracking System of Spatial Configurations
_ VTSC System - Testing
_ VTSC in use


Below, a couple of VTSC screenshots. The camera views are from the four ceiling cameras. Zones in red = "off", in green = "on"...

vtsc_01_m.jpg
vtsc_02_m.jpg
vtsc_03_m.jpg
vtsc_04_m.jpg

Posted by patrick keller at 14:07

03. 03. 2007  13:30 | 04_Workshop_4 , 12_Curated_posts

Workshop#4 result: Robot behaviors with lighting function

A couple of images of the working demo "in action", with LED lighting robots. A video will be added here as soon as possible...

RFIMG_0220_m.jpg


The developers' team on the "crash tests" set: Christian Babski supervising the robots, Julien Nembrini, Babski again, everyone together, and then Clément Hongler (student assistant). Finally, the lighting E-Pucks waiting to go on set.

RFIMG_0224_m.jpg
-
RFIMG_0213_s.jpg RFIMG_0214_s.jpg
RFIMG_0215_s.jpg RFIMG_0212_s.jpg
RFIMG_0234_m.jpg


Rolling session under the cameras' eyes...

RFIMG_0219_s.jpg RFIMG_0221_s.jpg
RFIMG_0211_s.jpg RFIMG_0230_s.jpg
RFIMG_0231_m.jpg
-
RFIMG_0233_m.jpg

Posted by patrick keller at 13:30

03. 03. 2007  12:35 | 04_Workshop_4 , 12_Curated_posts

Workshop#4 results: Microfunctions robots

How all the evolving micro-functions for the table would finally look on the E-Pucks (no material indication yet). We developed this rolling & modular approach, under our AD, with two object designers, ECAL Ra&D assistants Laurent Soldini and Julien Ayer.
The colors show the different modular parts of the "Rolling microfunctions" and all the related object possibilities. Then come four different functional aggregations among the many possible ones (a candlestick, a plate, an "eating sushi" and a "reading" configuration).

00_NewGammeNB_m1.jpg
01_NewGammeCouleur_m1.jpg
-
02_EPuckRendu_det_m1.jpg
-
03_EPuckRendu_Chandelier_s.jpg 04_EPuckRendu_Plateau_s.jpg
05_EPuckRendu_Bureau_s.jpg 06_EPuckRendu_Sushis_s.jpg

Posted by patrick keller at 12:35

01. 03. 2007  12:09 | 07_Diverse

The "AR" Boss!

A private post for those who know him: even our boss (ECAL's director Pierre Keller) likes to live in our annoying room!
Until now there was no AR visible to the naked eye in these pictures, but we could perhaps say "yes, there is!" now that Pierre Keller stands in the middle of that space.

PiK_0002_m.jpg
-
PiK_0005_m.jpg

Posted by patrick keller at 12:09

01. 03. 2007  11:53 | 03_Sketches_&_Projects_4 , 12_Curated_posts

Variable_environment: prototypes photo shoot process

1_DSCN4202_m.jpg
-
We have produced a set of prototypes of the artifacts we have been working on for some time now. Things like the "AR-ready patterns & objects", the "Webcamera - mirrors (& light)", the "Rolling micro-functions (for SOHOs)", etc. Objects that can use or interact with software like XjARToolkit or VTSC (and others --Skype, etc.--).
-
We are now in the process of shooting pictures of them. The overall set of pictures will describe a sort of (slightly visually) annoying space, made of artifacts that look traditional at first sight (wallpapers, mirrors, a table, etc.) but have a discreet second or third function. A marker-based room whose functions can potentially evolve over time, be (inter-)/re/active and/or endlessly customizable, and evolve from a private atmosphere to a more public one. A space where most of the content remains invisible to the human eye but can be seen through a (cellphone, handheld or fixed) camera. It could be "one room somewhere --including a film set-- where somebody lives and works".
-
Here are some "making of" pictures from the photo shoot to give a first impression. We expect to finalize the Variable_environment research project (phase 1) by the end of the month. Hopefully...

2_DSCN4211_s.jpg 3_DSCN4210_s.jpg
-
4_MMFC4196_s.jpg 11_DSCN4213_s.jpg
-
5_MMFC4192_s.jpg 6_MMFC4190_s.jpg
-
7_MMFC4184_s.jpg 8_MMFC4186_s.jpg
-
9_MMFC4166_m.jpg
-
10_MMFC4188_s.jpg

Posted by patrick keller at 11:53