Sunday, February 14, 2010

Invasive... non invasive... and all the politics in between

Right, where were we? Yes, invasive and non-invasive BCI. What are they and what's all the fuss about? In my last blog I briefly described the ideas behind brain computer interfaces and promised to give an overview of a quite important distinction in the BCI research world. Now let me be slightly more specific (and pedantic) about what I mean by important. In research, importance is something determined quite a few years after the actual research has taken place and most possible outcomes have come to bear. During the period of actual research no scientist worth his/her salt would be presumptuous enough to declare a line of research more important than another one. Of course it goes without saying that the above rule does not hold when one talks to potential funding bodies. There the idea that any research path can lead to unforeseeable results doesn't go down very well. The managers of these bodies usually can't see why, when they seem to predict the future so well, scientists are quite reluctant to do so. That means they will fund the researchers who predict success at 110% (can someone pinpoint for me the time of death of 100% as the maximum possible percentage?). So when faced with a funding body asking about the importance of your work in comparison to others, well, then you lie. Well, not exactly lie. You sort of extend the truth... a bit. But amongst scientists that extension of the truth does not hold water. If you have two or three research avenues open to you as a scientific community then you try your best to follow all of them, for a very simple reason... One NEVER knows (well, except of course for the majority of managers, bankers, journalists, politicians, etc.). Why have I taken this detour, though?
Well, because I want to make a point here: the distinction between invasive and non-invasive brain computer interface research is significantly less important amongst the people who actually do the research than amongst the people who view it from the outside. That said, let's move on, shall we?

The names invasive and non-invasive should (hopefully) be rather self-explanatory to most readers of this blog. Crudely put, invasive is any process that requires the delivery of foreign objects or substances inside the body of the subject whose brain is meant to be interfaced, be that subject a human or an animal. These can range from large electrode setups to chemical molecules delivered through injection (an injected drug does count as an invasive process). On the other hand, non-invasive procedures do not require any kind of implantation, and the subject gets to interface with the machine through wearable items, the haute couture of the BCI world if you like. Of course the implantation process of an invasive procedure can take any form between a pinprick and an 8-hour brain surgery, and unfortunately at the current level of research most invasive BCI methods lean towards the latter. Because of this it is also understandable that the majority of the subjects taking part in this type of research are animals. Two species are predominantly relevant in this research, the rat and the monkey, while a few others like the fly are not as widely used but still not uncommon.

As expected, both research pathways present their own merits and problems (otherwise people wouldn't bother, would they?). The non-invasive methods are easy to implement and can be directly tested on the actual target group (humans) without the need to extrapolate results obtained in other species. They are currently also a driving force behind a research effort into how a computer can pick out important signals lost in an ocean of noise and irrelevancies, a field known as machine learning (whose results seem to find their way into stock market prediction as soon as they appear on paper... but that is a story for another blog). This last positive effect, though, is the direct consequence of the non-invasive methods' most serious drawback, that of trying to detect tiny electrical signals generated by billions of cells through bone and skin. Imagine trying to appreciate the intricacies of an Escher painting seen through a couple of centimetres of frosted glass. Good luck. On the other hand, invasive methods present a much more accurate and clean signal for scientists to analyse, but do so at the huge expense of difficulty in implementation and a long and tedious road to potential human use (mention the words 'necessary evil' to an invasive BCI scientist and see if the acronym FDA doesn't immediately spring to their mind).
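To get a feel for the frosted-glass problem, here is a minimal sketch of the standard trick non-invasive researchers lean on: record the same brain response many times and average the recordings, so the random noise cancels while the signal survives. Everything below is made up for illustration (the "evoked response" is just a smooth bump, the noise level is arbitrary); real EEG processing is far more involved, but the principle is the same:

```python
import math
import random

random.seed(0)  # reproducible noise for the demo

def noisy_trial(signal, noise_sd):
    # one recording: the true signal buried in sensor noise
    return [s + random.gauss(0, noise_sd) for s in signal]

def average_trials(trials):
    # average the same time point across all recordings
    n = len(trials)
    return [sum(t[i] for t in trials) / n for i in range(len(trials[0]))]

def rms_error(estimate, signal):
    # how far an estimate is from the clean signal
    return math.sqrt(sum((e - s) ** 2 for e, s in zip(estimate, signal)) / len(signal))

# a tiny made-up "evoked response": one smooth bump over 100 time points
signal = [math.exp(-((t - 50) ** 2) / 100.0) for t in range(100)]

single = noisy_trial(signal, noise_sd=1.0)
averaged = average_trials([noisy_trial(signal, noise_sd=1.0) for _ in range(100)])

# averaging 100 trials shrinks the noise by a factor of about 10
print(rms_error(single, signal), rms_error(averaged, signal))
```

The catch, of course, is that averaging a hundred trials takes a hundred times longer than recording one, which is part of why non-invasive BCIs tend to be slow.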

But the reason I wanted to explain what these two techniques are all about is not only to give a scientific overview of BCI but also to use this distinction to do a bit of sociological analysis. In 2008 a book was published by Springer called 'Brain-Computer Interfaces: An International Assessment of Research and Development Trends'. It was co-authored by a group of scientists working for an American organization called WTEC (World Technology Evaluation Center). The 'World' in the above title doesn't mean that the group is international; it means that the Center evaluates technologies from around the globe. I must admit I found the book quite an interesting read (now is not a good time to dwell upon what that says about my level of geekiness). What I want to share with you here are a couple of the conclusions drawn in it. The first thing that made me stand up and take notice was the following quote:

“the WTEC panel found that the focus of BCI research throughout the world was decidedly uneven, with invasive BCIs almost exclusively centered in North America, noninvasive BCI systems evolving primarily from European and Asian efforts, and the integration of BCIs and robotics systems championed by Asian research programs.”

which was then followed by this quote:

“Virtually all BCI research in Europe is noninvasive, attributable in large part to constraints and intimidations imposed by animal rights organizations. BCI research in China appears to be almost exclusively noninvasive, though this reflects the relatively early stage of development of BCI research in that country.”

And here is where the important part of the above distinction comes into play. You see, invasive BCI is a medical technology, and like most medical technologies it requires animal research in order to be developed and eventually safely used by humans. But Europe seems to be focusing only on the non-invasive part of BCI, and not because we are still developing the know-how to extend into invasive procedures (like China, according to the WTEC) but because social pressures are strong enough to stop funding bodies from funding invasive research. Now, if you are reading this expecting me to throw a tantrum about animal activists, well, know that I have come close, but will not. What I will try to do is explain parts of my personal position, not only as a scientist but as a human being too.

I am currently a researcher in BCI working in one of the very few labs in Europe doing invasive work (there are thousands of labs in Europe doing science using invasive techniques, but in very few of them is that science BCI related). The reasons I am working on the invasive part of the technology are a combination of my own interests and the chances presented to me during the times I was looking for my next placement. I am trying hard to keep on top of both fields of research. I know full well that in a field so young nothing has been decided as to which technology will eventually become the norm, and that each line of research informs the other in very important ways. But as time goes by and my experience in the invasive field becomes significantly greater than my knowledge of non-invasive techniques, I seem to be presented with three options. One: I spend my career in Europe trying to do invasive research. This means I will have to convince funding bodies of its merits over the political pain it can cause them. At the same time I will have to hide my home and lab addresses in fear of humans who are friends to animals but will happily turn criminal against me. Nice. Two: I change fields into non-invasive research, where for quite a few years I will have to go through yet another steep learning curve to get the experience required. And of course my currently accumulating experience will become almost irrelevant. Fantastic. Three: I can always say goodbye to Europe and move to America. That seems like the best of the above options, doesn't it, considering the accepted view that science and technology in the US is more often than not a notch better than the European equivalent. Well, yes, until you consider another quote from the above mentioned book:

“Consistent with the large, multidisciplinary BCI teams found in Europe, the scale of European BCI research funding is substantial. Only NSF Engineering Research Centers (e.g., Biomimetic Microelectronic Systems Center at USC) and the largest DARPA programs (e.g., Revolutionizing Prosthetics) compete with EU programs.”

The book talks a lot about how the US-led BCI approaches are mainly DARPA-funded (Defense Advanced Research Projects Agency; it wasn't me who missed a red squiggly line, they really do spell Defence with an s) and how European BCI is funded in a more mature and integrated way, not as an army engineering project but as part of larger multidisciplinary scientific efforts. And the question remains. Do I stay in Europe and do science under the significant shadow of animal activists, or go to the US and do engineering for Uncle Sam's army (building brain-controlled bombers and tanks that will assure 0 losses for the Americans and hundreds of thousands for whoever doesn't play ball anymore)? And what happened, in all this mess of politics and conflicting interests, to the simple idea of using one's abilities (in my case curiosity, willingness to keep on studying, and being quite decent at putting stuff together) for the good of your fellow humans? All of them, not just a small elite.

If any of you out there reading this are either animal activists yourselves or know some, would you be so kind as to provide me with a couple of your views? I intend to spend a couple more blogs trying to portray my point of view on the subject of using animals to help and empower humans while minimizing future animal research. Before I do so I would like a few opinions on the matter, so that I have a springboard from which to develop my ideas. Lacking that, I will just write it as I see it.

Thanks for reading

Sunday, January 31, 2010

Intro to the ideas behind Brain Machine Interfaces ... and oh... Hello There

Hello there, and welcome to my blog. This is the beginning of what I hope will be a series of texts on ideas revolving around my line of work, making brain computer interface devices, and oh, maybe of a beautiful friendship too. These texts are addressed to people with no, little, or a totally different scientific background. You see, I strongly feel that technologies that read or manipulate the human mind (which is another description of brain computer interfaces) are going to become part of life sooner or later, and that the ethical issues around them need to be tackled towards the sooner side and by as many people as possible. So if you are wondering what a brain computer interface is, why it can eventually be both a wonderful and a terrible technology, and why the forms it is going to take in the future shouldn't be left to the whims of managers of large corporations, then keep reading and, more importantly, enjoy.

The problem of interfacing the human brain with a computer (Brain Computer Interface or Brain Machine Interface) is a scientific idea that, you'll be happy or dismayed to know, is currently trying to transform itself into a technology. Most people – those at least not steeped in sci-fi literature – have probably never heard the term. If you are one of them I would love to know the search terms you used to unearth this text. The ideas behind the name, though, have been trickling into the public cultural sphere for some decades now, mainly through – you guessed it – the sparkling world of Hollywood.

If my limited capacity for movie trivia isn't letting me down (imdb isn't a bookmark of mine) there have been films touching the subject since the mid 70s ('The Brain Machine' – 1977 comes to mind). A couple of slightly higher-calibre efforts took place just before the turn of the century with 'The Lawnmower Man' ('92) and 'Johnny Mnemonic' ('95). In the latter, a brain-augmented Keanu Reeves manages to save the world from BCI/social manipulation gone horribly bad. All the above efforts had two things in common. They were addressed to hard-core sci-fi enthusiasts, so if you've never heard of them it's probably ok, and they depicted BCI as the area of expertise of the mad scientist bound to fail. A failure, as expected, accompanied by spectacular (for the era) visual effects. If you don't get annoyed by previous-century special effects these films are worth seeing (unless you are a romantic comedy fan, in which case maybe not).

BCI's cultural breakthrough, though, came with the Matrix trilogy (yes, there were three of them, but if you missed that you really needn't worry). This series brought along a couple of changes as far as BCI is concerned, even if the term was never mentioned in the films. I will agree that the blue pill/red pill nomenclature is easier on the tongue. The first thing they did was to set Mr. Reeves up for nomination for having the first truly two-way BCI machine named after him (not too long now, Keanu). I mean, 4 movies in less than 20 years out of a total of, I don't know, maybe 10 on the subject is surely worth the appreciation of the scientific community. More importantly though, these movies managed to pretty much take the idea that human brains can directly connect to machines and slam-dunk it into the western population's collective cultural consciousness. The effect was almost as impressive as our introduction by sweet old 'I keep coming back' Arnie to the concepts of AI and autonomous robotics (these are the scientific terms for that evil Skynet's game – don't you just love to hate AI?).

As I am writing this, Mr. James Cameron has just released a marvel of cinematic technology which seems to show us how the world of Pocahontas would look once humanity achieves space flight and brain computer interfaces. Now, I am the last person to talk about space flight or the inherent disrespect of western civilization(s) towards any other. Brain computer interfaces, on the other hand, are what I have chosen to do for a living, so this is the beginning of a series of texts transforming my take on the subject into a mirror in which you may choose to view your own thoughts on this emerging technology.

For the sake of those readers who have joined us without realising that what they have seen in these movies has a name (and my apologies to the rest of you, but keep reading, it may not be as boring as you fear) allow me a brief explanation and maybe the introduction of a few examples. Currently, brain machine interfacing is a rather active branch of one of the busiest scientific questions of our time, the brain–mind connection, or to put it crudely, the making of the theory of how humans (and other animals) think. BCI can be thought of as the technological fruit of the scientific theories of neuroscience, much like the computer and the cell phone are the products of electromagnetism and information theory. But what is it that we are trying to do exactly, and how close are we to it at the moment? Before I move on, a small note of caution. The following ideas are mine and mine alone, meaning that scientists in the field will most probably have different opinions on the subjects I am touching, and even the subjects they find important will come in a range of flavours. You have been warned.

BCI is trying to connect the human and animal brain directly to a machine (usually but not necessarily a computer). This connection, like most interesting information exchanges, e.g. a phone conversation, can have two paths. One connects the brain to the machine and the other the machine to the brain. In the first instance we are trying to make machines that read the brain and do something with this information. This is very much like the way your hand muscles read signals from your brain and move to produce, let's say, obscene gesturing during rush hour, or your tongue and throat muscles move to produce speech (during the same hot episode, for example). Devices that use this brain-to-machine connection include electric wheelchairs that can be driven directly by the brain, communication devices where words can be formed without any physical movement on behalf of the user (usually a fully paralyzed, locked-in patient), and computer games where, on top of the joystick, the gamer can control parts of the game directly with his or her brain.
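The signal-to-command step at the heart of such devices can be caricatured in a few lines of Python. Everything here is invented for illustration: the two numbers stand in for processed brain-signal features (say, rhythm power over the two motor cortices), and the threshold is arbitrary. Real decoders are statistical models trained per user, but the shape of the problem, numbers in, commands out, is the same:

```python
def decode_command(left_power, right_power, threshold=1.0):
    """Turn two hypothetical brain-signal features into a wheelchair command.

    The feature names, mapping, and threshold are all made up for this sketch.
    """
    if left_power < threshold and right_power < threshold:
        return "stop"  # no strong intent detected: stay put
    # whichever feature is stronger wins the steering
    return "turn_left" if left_power > right_power else "turn_right"

print(decode_command(2.3, 0.4))  # a strong "left" feature wins
print(decode_command(0.2, 0.3))  # nothing above threshold
```

The hard part, of course, is not this last step but producing features clean enough that a rule this simple (or its trained statistical cousin) gets the answer right more often than chance.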

The second information pathway, from machine to brain, focuses on making machines that send signals to our brains in such a way that we can understand and use them. Here the technology has advanced enough to offer an example that is already a medical technology in use today, the cochlear implant. This is a small implantable electronic device that, like a microphone, turns sound into electric signals. Yet, unlike anything else existing today, it transmits these signals straight to the cochlear nerve. That nerve in humans is responsible for carrying the electric signals formed in the inner ear, as it is set vibrating by sound waves, to the right part of the brain where the sense of sound is formed. In people whose ears are damaged but whose nerves and brain centres are intact, the cochlear implant takes the place of the damaged ear and connects straight to the nerve as a functional ear would do. Thousands of people all around the world who would otherwise be considered deaf are currently able to have conversations, even over the phone. Other examples still under development are artificial sight, where the above idea will be applied to the eye, and artificial senses of touch and pressure that will allow artificial limbs to be controlled like biological ones. You will find more info on the last technology if you search for research on 'haptics'.
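The signal processing idea behind the cochlear implant can be sketched roughly as follows: split incoming sound into a handful of frequency bands and turn each band's energy into the stimulation level of one electrode along the nerve. Real implants are far more sophisticated, and the channel frequencies below are arbitrary; this is only the skeleton of the idea:

```python
import math

def band_energy(samples, freq, rate):
    # correlate the waveform with a sine and cosine at one frequency
    # (a single-bin Fourier transform)
    re = sum(s * math.cos(2 * math.pi * freq * i / rate) for i, s in enumerate(samples))
    im = sum(s * math.sin(2 * math.pi * freq * i / rate) for i, s in enumerate(samples))
    return (re * re + im * im) / len(samples)

def stimulation_levels(samples, rate, channels=(250, 500, 1000, 2000)):
    # one "electrode" per frequency band; louder bands stimulate harder
    return [band_energy(samples, f, rate) for f in channels]

rate = 8000  # samples per second
# a pure 500 Hz tone lasting 0.1 seconds
tone = [math.sin(2 * math.pi * 500 * i / rate) for i in range(800)]

levels = stimulation_levels(tone, rate)
print(levels.index(max(levels)))  # prints 1: the 500 Hz channel fires hardest
```

The brain, remarkably, learns to hear these few crude channels as sound, which is a large part of why the technology works at all.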

In my next entry I am planning to have a go at a second and very important distinction in the world of BCI, the one between invasive and non-invasive technologies. Here the distinction also reveals another very interesting aspect, this time based not on engineering but on sociology. Countries currently involved in BCI research seem to be concentrating almost exclusively on one or the other of the two technologies. When you also start taking into consideration the discrepancies as far as funding is concerned (yes, some of these posts will deal with the follow-the-money issue) this is just food for thought. More to come.


Thanks for reading