Is the answer to our feeble human minds grappling with ever-increasing quantities of big data to stand in a purpose-built room, immersed in complex data visualizations, while wearing an array of sensors that track our physiological reactions? A group of European Commission-backed scientists believes so.
They’re attempting to quantify — and, they claim, enhance — cognition by building a sensor-based data visualization system that dynamically changes the complexity level of the data on display in response to human triggers, such as gestures, eye movements and heart rate.
The basic concept underpinning this research, which has attracted €6.5 million in European Union funding under the Future and Emerging Technologies scheme, is that a data display system can be more effective if it is sensitive to the human interacting with it. By tracking and reacting to signifiers of stress, the system can modify what’s on display accordingly.
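As a very rough illustration of that feedback idea, here is a minimal Python sketch. The function name, signals and thresholds are all invented for illustration; the article does not describe the actual sensors or logic the CEEDs system uses:

```python
# Hypothetical sketch: lower the visualization's complexity when stress
# signals rise, and reveal more detail when the viewer seems calm and
# focused. Thresholds and signal names are assumptions, not CEEDs specs.

def adjust_complexity(current_level: int, heart_rate: float,
                      gaze_fixation_s: float) -> int:
    """Return a new detail level (1 = simplest, 5 = most complex)."""
    STRESS_HR = 100.0        # assumed beats-per-minute stress threshold
    ENGAGED_FIXATION = 2.0   # assumed seconds of steady gaze

    if heart_rate > STRESS_HR:
        # Viewer appears stressed: simplify the display.
        return max(1, current_level - 1)
    if gaze_fixation_s >= ENGAGED_FIXATION:
        # Viewer appears calm and attentive: show more detail.
        return min(5, current_level + 1)
    return current_level

print(adjust_complexity(3, 110.0, 0.5))  # stressed viewer → 2
print(adjust_complexity(3, 70.0, 3.0))   # engaged viewer → 4
```

The point of the sketch is only the shape of the loop: physiological inputs in, a display parameter out, re-evaluated continuously.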
The project is called CEEDs, aka Collective Experience of Empathetic Data Systems, and involves a consortium of 16 research partners across nine European countries: Finland, France, Germany, Greece, Hungary, Italy, Spain, the Netherlands and the UK. The “immersive multi-modal environment” where the data sets are displayed, called an eXperience Induction Machine (XIM), is located at Pompeu Fabra University in Barcelona.
On the cognition enhancement side, the system can apparently respond to the viewer’s engagement and stress levels, guiding them toward areas of data likely to interest them, based on their tracked physiological signals, and signposting particular parts of a data set to click through to.
Again, the core concept driving the research is that as data sets become more complex, new tools are required to help us navigate and pin down the bits and bytes we do want to lock on to. Potential use cases envisaged for the XIM technology include helping students study more efficiently.
Early interest in the tech is coming from museums, with the XIM concept offering a way to give museum visitors a personalized, interactive learning environment. Indeed, CEEDs’ tech has been used at the Bergen-Belsen memorial site in Germany for two years. The team now says discussions are ongoing with museums in the Netherlands, the UK and the US ahead of the 2015 commemorations of the end of World War II.
It also says it’s in talks with public, charity and commercial organizations to further customize “a range of CEEDs systems” to their needs — with potential applications including a virtual retail store environment in an international airport and the visualization of soil quality and climate in Africa to help local farmers optimize crop yields.
The concept of an information system watching and reacting to the person absorbing information from it is interesting (plus somewhat creepy, given it is necessarily invasive), though it would be more so if the system did not have to be room-sized, and did not require the wearing of an entire uniform of sensors, to function.
It’s conceivable that a lighter-weight version of this concept could track what a mobile user is looking at via the cameras on the front of their device, monitor additional physiological reactions by syncing with any connected wearables on their person, and combine those inputs to judge how engaged they are with particular content.
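One way such a lighter-weight version might blend those inputs is sketched below. The 0–1 gaze fraction, the heart-rate scaling and the 70/30 weighting are all assumptions made up for illustration, not any real device’s method:

```python
# Hypothetical engagement estimate from a front camera plus a wearable.
# All parameters here are invented for illustration.

def engagement_score(gaze_on_content: float, hr_delta: float) -> float:
    """Blend gaze and heart-rate inputs into a rough 0-1 engagement score.

    gaze_on_content: fraction of time (0-1) the front camera judges the
        user to be looking at the content.
    hr_delta: heart-rate change vs. the user's resting baseline, in bpm,
        as reported by a synced wearable.
    """
    # Cap the arousal contribution so a single spike can't dominate.
    arousal = min(abs(hr_delta) / 20.0, 1.0)
    # Weighted blend: where the eyes are matters most (weights assumed).
    return round(0.7 * gaze_on_content + 0.3 * arousal, 3)

print(engagement_score(1.0, 20.0))  # fully watching, elevated HR → 1.0
print(engagement_score(0.5, 0.0))   # half watching, resting HR → 0.35
```

A score like this is exactly the kind of signal that could drive which content a device surfaces next, which is where the advertising concern below comes in.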
Whether that sort of tech will be used to generally aid human understanding remains to be seen. It seems more likely it will be leveraged by advertisers in an attempt to make their content more sticky.
Indeed, Amazon has already released a phone that has four cameras on the front — ostensibly to power a head-tracking 3D effect on the interface of its Fire Phone but well positioned to watch the reactions of the person using the device as they look at things they might be thinking of buying.
So, as we devise and introduce more systems that are designed to watch and monitor us and our reactions, it’s worth remembering that any complex system with eyes is only as impartial as the algorithmic entity powering it.
Article originally published on TechCrunch