Neuroelectrics Blog – Latest news about EEG & Brain Stimulation

Consumer brain-computer interfaces, are we there yet?


In the last few weeks I’ve attended two very different events related to consumer BCIs: the “Wearables” session at MWC in Barcelona and the “BCI for the masses” panel at IBT in Tel Aviv.

The wearables session at MWC was, as you might expect, mainly about fitness trackers and watches. Pebble, Fitbit and Misfit/Swarovski presented their latest devices while SAP and Telefonica discussed enterprise applications.

The surprise of the afternoon, though, was Ariel Garten from Interaxon, who presented Muse. Muse is the first real bet on a consumer BCI, and it was a big moment for BCI to be part of this line-up. There are other consumer-priced BCIs out there, but in terms of concept and design this seems to be the front runner right now. The reaction from the audience was very interesting (real-time tweet walls tell you a lot). It was obvious that many had never heard of the technology before and jumped straight to “I never need to type again!”, which is where most people go just before “can they read my mind??”. In the BCI community we have been discussing public perception for a long time, and from the reaction of the crowd it seems that some of the typical misconceptions are still as likely as ever. More importantly, though, it was very surprising to me that the concept of BCI is so far from mainstream that an MWC audience (a group of early adopters if ever there was one) was not fully aware of it. The application that Muse has chosen is (wisely) simple and very far from either mind reading or a user interface device that will replace your keyboard/mouse. It’s essentially a brain fitness/neurofeedback app that will appeal to many quantified-self/mindfulness enthusiasts.


 

This is an interesting start, but is it a killer app? Obviously it’s hard to say, but there is certainly growing interest in brain health and mindfulness.

A couple of weeks later I found myself on the “BCI for the masses” panel at IBT with Ariel from Interaxon, Conor Russomanno from OpenBCI and Hamutal Meridor from Brainihack. As the name of the panel suggests, we really got into the issue of consumer BCI, and it was surprising how similar our visions are. For our part, through Neuroelectrics and Neurokai, we are very focussed on medical devices, clinical applications and professional services. OpenBCI is working with hackers and makers and building an amazing educational platform, and as we know Muse is really aiming at the consumer. Between us we cover the whole spectrum of commercial BCI with very different business models, yet during the panel and afterwards it was clear that we are coming from the same place in terms of what we think the technology can do.

Following the conference we all took part in Brainihack and watched as 14 teams competed for this year’s title. The winners included a video chat app that displays your true emotions as you chat and a system that detects whether the wearer recognises a person or not (think lie detector). Both were really well done and very interesting, but raised, let’s be honest, a few ethical question marks. Assuming you could get them to work operationally (not trivial), it’s quite hard to imagine large numbers of consumers wanting this type of application.

Now, developing consumer apps is not the point of a hackathon, but it highlights the fact that we are reaching the point where the technology is good enough for consumers, but we have yet to discover what the killer application might be.

We still have many years of research and development ahead of us in the medical and clinical domains, but we will be watching closely to see what happens next in BCI for the masses.



Visual Perception


The most important component of a digital camera is the image sensor. Nowadays CCD and CMOS sensors monopolise the digital market. These sensors capture a continuous signal—the light—which is converted through a filtering process into discrete signals. Within this process, simplified here in one line, the element that controls the white balance level is key. The same scene under the same conditions but captured with different cameras, even with the same settings, may yield different images (luminance, colour, etc.), due in part to the white balance corrector. This problem is well known in different fields (computer vision, photography, painting). For instance, in object recognition, when a classifier has been trained on a set of images from one specific camera and we then use it to evaluate images captured with a different camera, its performance usually drops. Basically, the images used during training and the ones used during testing are no longer the same, even if the content of both sets (background and foreground) is practically identical. In the literature, this problem can be addressed with techniques such as active learning, where the classifier is retrained using a few new images coming from the new camera, as in the sketch below. Moreover, this issue does not only arise from using different cameras, but also from different scenarios: if we trained a pedestrian classifier on people captured in winter and tested it on images acquired during the summer, we would probably face the same problem.
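To make the camera-shift problem concrete, here is a minimal sketch on synthetic colour features (not real photographs; all numbers are illustrative): a linear SVM trained on one “camera” degrades on another whose white balance adds a colour cast, and recovers after retraining with a few labelled samples from the new camera.

```python
# Toy illustration of the camera/domain-shift problem described above.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def make_set(n, colour_cast):
    """Synthetic 'images': the label depends on scene content, while the
    camera adds a constant colour cast (a crude white-balance difference)."""
    content = rng.normal(size=(n, 3))
    y = (content[:, 0] > 0).astype(int)
    return content + colour_cast, y

X_a, y_a = make_set(500, colour_cast=0.0)   # training camera
X_b, y_b = make_set(500, colour_cast=1.5)   # different camera, same scenes

clf = SVC(kernel="linear").fit(X_a, y_a)
print("same camera: ", clf.score(X_a, y_a))   # high
print("other camera:", clf.score(X_b, y_b))   # drops noticeably

# Active-learning-style fix: label a few images from the new camera,
# retrain, and evaluate on the remaining unseen camera-B images.
clf2 = SVC(kernel="linear").fit(X_b[:100], y_b[:100])
print("after retraining:", clf2.score(X_b[100:], y_b[100:]))
```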

 


For humans, our eyes are our image sensors, although our brains are far more complex than the software and hardware used by cameras. Like cameras, we also face the white balancing problem. However, unlike cameras, our brains can interpret an image using previous knowledge. Hence, we may perceive differently the very same image we saw just a few minutes ago, simply because our brain analysed it in a different way. This fascinating fact, combined with the madness of the Internet, went viral not long ago.

The contest was known as “White and gold or blue and black dress?”. I am sure you remember it. Some people saw the dress in blue and black while others saw it in white and gold. Some people on the Internet analysed the image by extracting the pixel colour of each part of the dress; others kept arguing about the fact that it could be perceived in different colours. The phenomenon went viral in a matter of a few hours, and although some posts explained the reasons behind these different perceptions, people continued arguing.

 

Is this the only example we can find where perception can play with our mind? Of course not! There are many examples; just do a quick search on the Internet and you will find plenty of them.

In conclusion, there are many factors—visual experience, the neural system, information coming from other sense organs—that lead us as humans to interpret a simple picture differently. Thus, there is no need for arguments, but for fascination. As for robots, unless there is a processing stage involved that tries to emulate the human brain, a pixel colour or intensity will always remain the same.


Neuroelectrics Asian Tour


During the last 3 weeks, I had the chance to travel to 3 amazing cities: Singapore, Hong Kong and Tokyo.

The main objective was to attend the “1st International Brain Stimulation Conference” (http://www.brainstimconference.com/) in Singapore and then the “2015 International Workshop on Clinical Brain Neural-Machine Interface Systems” (http://plaza.umin.ac.jp/cbmi2015/) in Tokyo.

As there were some days between the two conferences, I stopped in Hong Kong to meet up with very interesting partners and organize a workshop. Overall, this trip was a great opportunity to see with my own eyes all the great research, clinical applications and strong neuroscience teams working in these Asian countries.

 

Brain Stimulation Conference, Singapore

With over 550 participants, including the leading scientists in the brain stimulation research area, it is no wonder this conference was really interesting. Among the plenary speakers was Josep Valls-Solé (http://www.brainstimconference.com/bio-valls-sole.html), one of the first Starstim users. Among the session speakers was also Michael Nitsche (http://www.brainstimconference.com/bio-nitsche.html), a member of the Neuroelectrics Scientific Advisory Board.

Besides setting up a Neuroelectrics booth together with my colleague Uri Fligil, I also had the chance to present a poster about the first brain-to-brain communication experiment (http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0105225). It was great fun to meet Michel Berg at the conference. He is the one who decoded the brain-to-brain message; I was the one who sent it. It was the first time ever we were together, after communicating through e-mail and through … brain-to-brain.


 

Starstim Workshop, Hong Kong

In Hong Kong I met the wonderful team of Healthlink Holdings (http://www.healthlinkholdings.com/), our partner in Hong Kong and Macao.

It has been one of the most prolific trips I have ever made. We visited 5 different hospitals that are very advanced and active in rehabilitation therapies. We also organized a two-hour workshop with more than 60 attendees. Quite a lot of interest indeed.

Next time I go to Hong Kong, I will save some time to visit the beautiful city and its surroundings!


 

Clinical Brain Machine Interfaces, Tokyo

Although this conference was a little smaller, the quality of the talks was impressive. One of the keynote presenters was Niels Birbaumer (http://www.cin.uni-tuebingen.de/research/cin-members-detail.php?tx_pxemployee[employee]=85), also a member of the Neuroelectrics Scientific Advisory Board.

Besides also setting up a booth at the conference, I had the chance to make a lot of visits to very interesting research centers and hospitals, together with Nihon Binary (http://www.nihonbinary.co.jp/), our Japanese partner.

Among others, I met up with the RIKEN Brain Science Institute (BSI, http://www.riken.jp/en/research/labs/bsi/) in Tokyo, the Center for Information and Neural Networks (CiNet, http://cinet.jp/english/), a division of NICT in Osaka, the Human Brain Research Center at Kyoto University, the Department of Neurology at Osaka University and many more…


As a last word, I would like to mention that I am very happy to see so much neuroscience going on in Asia. It has been very interesting to meet so many good researchers and clinicians who are very active in the EEG, BCI, neurofeedback and neurostimulation fields. I am sure this is only the starting point and that every day more and more people will get into this research and clinical field, both in Asia and abroad. For my part, I am already looking forward to visiting Asia again very soon.

See you next time!


10 best data sources for neuroinformatics research


- Neuro… what? What on earth is neuroinformatics… you mean, bioinformatics?

Indeed, neuroinformatics takes inspiration from bioinformatics, which refers to the combination of omics data (genomic, transcriptomic, proteomic…) and its understanding with machine learning tools (well, and other computer science disciplines). Bioinformatics has brought enormous advances, for instance in cancer research, which was itself one of the driving forces of its development. Bioinformatics probably crystallized as a discipline during the Human Genome Project. Genetic-related data sets are large. In several cases they are also public, an important point for what we are discussing today.

But let us start with the size. Data size made several techniques and computer science technologies obsolete, so a new research field was born. Analogously, the combination of neuroimaging data, including EEG, which is large as well, with informatics has been denoted neuroinformatics. I first came into contact with the neuroinformatics community through the conference of the International Neuroinformatics Coordinating Facility (INCF) in Leiden last August. I had the impression that the community is building around the concept of computer science for neuroimaging Big Data.


I attended there a very interesting talk by Michael Milham, Director of the Center for the Developing Brain at the Child Mind Institute. It dealt with the limited translational value of machine learning methods as described in the literature when trying to bring them into the clinical domain. He was mostly referring to the study and discovery of biomarkers based on fMRI data, but I think this applies to any neuroimaging modality. For instance, Milham has authored some papers on how to transform machine learning performance measures, which we have discussed in older posts, into useful measures for clinicians, e.g. by incorporating disease prevalence into sensitivity and specificity, as the sketch below illustrates. In particular, he underscored the importance of the effect size in any conclusion derived from a data analysis approach. As you may know, effect size is related to sample size, i.e. to the number of data records you include in your study.
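As a toy illustration of that point (the numbers are made up), here is how a fixed sensitivity and specificity translate into very different positive predictive values once disease prevalence is taken into account:

```python
# The same sensitivity/specificity can be clinically useful or nearly
# useless depending on prevalence. Illustrative numbers only.
def ppv(sensitivity, specificity, prevalence):
    """Probability of disease given a positive test (Bayes' rule)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

for prev in [0.5, 0.1, 0.01]:
    print(f"prevalence={prev:.2f} -> PPV={ppv(0.90, 0.90, prev):.2f}")
# A biomarker that looks great in a balanced study (PPV 0.90) drops to a
# PPV of about 0.08 when the disease affects 1% of the population.
```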

Once you have the data… Jim Bezdek defined pattern recognition as the discovery of structure in data. Pattern recognition was the precursor of data mining and therefore of Big Data. Let your data talk! This is the old mantra of Computational Intelligence practitioners, who let their models be driven by data. This data-driven approach is the opposite of the model-driven approach, where you start with an a priori model and try to validate its predictions with the data. Working with Computational Intelligence therefore requires you to get data to drive your models, but where do you find it? Data is fundamental for this type of approach. The neuroinformatics community (as well as others, not only in research but also in innovation) has recognized the huge value of data, and there is an ongoing effort on neuroimaging data sharing.

Already in 2006, the US government launched through the NIH the Neuroimaging Informatics Tools and Resources Clearinghouse (NITRC), which includes several data sets. INCF, though a private organization, is building such a repository as well. Other health-related databases are included in Open Data initiatives all over the world: in the US, Europe, and the United Kingdom. In my opinion, open data initiatives are important for epidemiological studies. More related data originates in public health agencies, which are committing more and more to an Open Data approach, like the US Department of Health and Human Services, the Health and Social Care Information Centre in the UK, and the World Health Organization. Even private companies are targeting data sharing platforms, like Quandl, or visualization platforms, like GapMinder, which includes some interesting plots of health data as well.

But you might be wondering: are these data repositories really Big Data? Let us leave that discussion for another post, or, if you like, you can start commenting on this issue below…

 


Music from the brain: a new way of communication for patients with severe speech disabilities.


In this post I would like to introduce the concept of BrainPolyphony, a CRG Awards-funded project in which the Centre de Regulació Genòmica (CRG), Universitat de Barcelona, Mobilitylab, ASDI and Starlab will explore new ways of communication for patients suffering from severe speech disabilities. Patients with neurological disorders such as cerebral palsy often experience important communication problems. At present, the tailor-made devices and protocols that facilitate communication are expensive, and neither accurate nor accessible to all patients.

The main goal of BrainPolyphony is to use the sonification of brain signals in real time to develop accessible tools for communication. This alternative communication system will help overcome the communication difficulties of cerebral palsy patients, support neurorehabilitation, and provide a tool for neurofeedback. We truly believe this new communication paradigm will make a huge difference for patients with severe communication problems, allowing them new ways of interacting with other people and their environment.

Enobio, a non-invasive, CE-marked medical, wireless electrophysiology sensor, will be in charge of measuring the electrical brain activity (EEG) from the surface of the patient’s scalp, and the heart activity (ECG) from the wrist, for subsequent sonification. Sonification is the process of acoustically presenting these EEG and ECG time-series, as well as their characteristic patterns, in real time. The electrophysiological patterns we aim to extract will reflect the patient’s current emotional state.


An EEG/ECG-based system for tracking emotions in real time will be built. The application will be developed using Enobio’s API and based on the valence-arousal two-dimensional emotional mapping, a representation in which the arousal dimension measures how dynamic the emotional state is, and the valence is a global measure of the positive or negative feeling associated with the state. The system will also continuously monitor the heart rate and heartbeats.
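As an illustration of what such a mapping could look like, here is a minimal sketch using two common heuristics from the affective-EEG literature: frontal alpha asymmetry as a valence proxy and a beta/alpha power ratio as an arousal proxy. This is a hypothetical example, not necessarily the feature set BrainPolyphony will implement, and the 500 Hz sampling rate is assumed.

```python
import numpy as np
from scipy.signal import welch

FS = 500  # assumed sampling rate in Hz

def band_power(x, lo, hi):
    f, pxx = welch(x, fs=FS, nperseg=FS)
    return pxx[(f >= lo) & (f < hi)].mean()

def valence_arousal(eeg_left, eeg_right):
    """eeg_left/eeg_right: 1-D arrays from left/right frontal electrodes."""
    alpha_l = band_power(eeg_left, 8, 12)
    alpha_r = band_power(eeg_right, 8, 12)
    # Relatively more left alpha -> less left activity -> lower valence
    valence = np.log(alpha_r) - np.log(alpha_l)
    arousal = band_power(eeg_right, 13, 30) / alpha_r  # beta/alpha ratio
    return valence, arousal

# Quick check with synthetic signals
t = np.arange(0, 4, 1 / FS)
left = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
right = 0.8 * np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
print(valence_arousal(left, right))
```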

The online electrophysiological data analysis platform will stream the EEG/ECG time-series and the calculated emotional levels in real time to a sonification software in charge of transforming them into music. This software will consist of Pure Data based algorithms that will either create music out of the incoming information or modify the parameters of currently played tracks. Just imagine that each time your heart beats a drum is played, the pitch of your favourite song increases as your heart rate goes up, and when the system detects that you are feeling good and relaxed a nice cheerful tune is played. The artistic possibilities for BrainPolyphony composers are huge.

Once the entire system is up and running, it will be tested on 8 adult participants diagnosed with different cognitive or motor disabilities. Although the severity of the comorbidities varies across patients, all selected participants will present restricted speech or communication capabilities and motor impairments, but no hearing problems. The patients’ acceptance and interaction possibilities will show whether the project is successful or not.

The BrainPolyphony team’s motivation is therefore to develop an easy-to-use, affordable, and widely accessible therapeutic and communication platform aimed at improving communication in cerebral palsy patients. The project may result in a new conceptual framework for understanding brain activity and new communication methods. It will also provide a new rehabilitation tool for patients with motor and cognitive impairments.

The project has just started; stay tuned if you are interested, and we will keep you posted with the project’s news and updates.

 



Get your brain fit

As I have explained in a previous post, we lose about 100,000 neurons per day. But this is not something to be really worried about, since what really matters is the quality of their connections, not their number. As we know, the brain is plastic, so we can create new connections and reinforce the existing ones.

In this post I will explain some brain exercises we can do to keep our brain fit.

 

Positive thinking

As explained in this nice article,

“When we complain, our brain releases stress hormones that harm neural connections”.

Try not to complain, try to think positively, try to stick to happy and positive people and avoid people who moan and groan. This will not only reduce stress hormones but will also increase your levels of serotonin. A good mood is the best exercise you can offer your brain.

 

Take a trip, explore and get out of your comfort zone

Every single new experience will “wake up” your dormant neurons. Go to places you have never been before, try new experiences, break up the routine. You do not need to go to the other side of the world; you can also visit new museums, go for a walk in new parks or just spend a day in the countryside. This will also make you breathe fresh air, see new landscapes and have a healthy walk.

Getting out of your comfort zone (i.e. less lying on the sofa and more exploring) will no doubt generate new challenges and new situations you have never had to face before. This will keep your brain awake and active, and that is indeed the best medicine your brain needs.

 

Do sport and have a healthy lifestyle

Exercise will not only improve your heart, muscles, bones and immune system, it will also oxygenate your brain and release serotonin, so your mood will appreciate it! It has also been found that regular exercise increases blood flow, promotes new cell formation in the brain and improves cognitive functions.

Besides exercise, it is also very good for your brain to sleep well. As explained in this previous post, we still do not know exactly why we need to sleep, but we do know for sure it is very important for your brain!

Finally, keep a healthy diet and avoid smoking and drinking, or at least be very moderate. Your brain will thank you.

 

Get a new hobby

Learning a new language, doing crosswords and sudokus, or doing a short mental-fitness session every day will really improve your mental skills! There are even a lot of websites with plenty of brainteasers.

Any new input for your brain, any new challenge you undertake, is very beneficial and engaging for your brain. Keep it fit! And you can even have lots of fun doing so!

 

Meditate

The effects of meditation on your health and brain are impressive. This has been known for thousands of years, but only recently has science started to study the benefits of meditation.

Not only will you manage to reduce stress (and thus avoid releasing harmful stress hormones), but you will also be happier and improve your memory and your sense of self. Ultimately you will physically alter your brain’s grey matter!

 

Neurofeedback

People go to the gym to get their muscles fit. We can also get our brain fit by performing Neurofeedback sessions.

The idea is to record EEG and display the results on a screen in real time. For instance, if we want to learn how to relax, we can modify a computer image (e.g. a beach) as a function of alpha power: the more alpha (i.e. the more relaxed you are), the cooler, calmer and more beautiful the beach becomes. If alpha is very low, the image turns into a dark beach with huge waves and high winds.

By doing this, and because our brain is plastic, we can actually learn how to relax. The possibilities of neurofeedback are really very promising. A minimal sketch of such a feedback loop is shown below.
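For the technically curious, here is a minimal, hypothetical sketch of that loop: estimate the relative alpha power on one-second windows of a single EEG channel and smooth it into a 0-1 “calmness” value that could drive the beach image. The acquisition function is just a placeholder for a real EEG stream.

```python
import numpy as np

FS = 500  # assumed sampling rate in Hz

def get_eeg_window(n_samples):
    """Placeholder: return the latest EEG samples from one channel."""
    return np.random.randn(n_samples)  # synthetic data for this sketch

def relative_alpha(x):
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(x.size, d=1 / FS)
    band = (freqs >= 8) & (freqs <= 12)
    return spectrum[band].mean() / spectrum.mean()

calm = 0.5  # smoothed feedback value in [0, 1]
for _ in range(10):  # one feedback update per second
    window = get_eeg_window(FS)
    calm = 0.9 * calm + 0.1 * min(1.0, relative_alpha(window))
    print(f"beach calmness: {calm:.2f}")  # would drive waves/wind on screen
```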

 


 

Brain Stimulation

Finally, new techniques of non-invasive brain stimulation are being studied, with very promising results such as cognitive enhancement.

Several companies are offering this type of technologies for different applications such as pain relief, post-stroke rehabilitation and now also for cognitive enhancement.

For instance, at the Cohen Kadosh Lab, they “conduct research on the psychological and biological factors that shape learning and cognitive achievement, numerical cognition, synaesthesia, and time perception”.

Also, in this paper we see how a tDCS session enabled participants to solve the 9-dot problem, which they could not solve before!

So brain stimulation can really improve your brain fitness, and the good thing about it is that you do not need to do anything at all, just wear a tDCS device for a little while, even while you are zapping through channels on your sofa. A perfect solution for lazy people.

Thanks for reading and see you next time!

 


Support Vector Machines: What they do and their best ‘trick’


In the past years machine learning has been used to address many different problems in neuroscience. When we talk about classification, several algorithms have been proposed for specific tasks. For instance, at Starlab we have used discriminative learning algorithms for EEG/ECG classification (e.g. of neurodegenerative diseases) or time-series classification. This type of algorithm models the dependence of an unobserved variable (the label) on an observed variable (the features). Among these learning algorithms, probably the most extensively used is the Support Vector Machine (SVM). Around two decades ago this algorithm started outperforming the state of the art, and since then it has been used in many different research fields. However, it is worth mentioning that deep learning algorithms have recently outperformed SVM-based ones in certain tasks.

Despite the fact that researchers are familiar with SVMs, in many cases this algorithm is used as a black box, especially when we talk about how the learning process is precisely carried out. The idea behind the algorithm consists in finding a hyperplane that optimally divides the training data you provide, splitting the feature space into two parts.


Regarding the training data used to feed an SVM classifier, each training example is represented by a set of features (e.g. height, age, country) together with a label that specifies its class. Then, an optimisation process is carried out to find the aforementioned hyperplane. Finally, depending on which side of this hyperplane a new element lies, we will classify it as one class or the other (binary classification).

When training a discriminative learning algorithm such as the SVM we can always apply a non-linear transformation to the training data. This approach permits us to perform the training in a different space while using the same learning algorithm. Usually, this transformation is carried out when a linear algorithm is not able to accurately separate the training data in the original space. Thus, if we map the training data into another space, normally a higher-dimensional one, we may have more chance of linearly separating the data, and therefore of obtaining a better model. However, the main drawback of this technique is that, computationally, it tends to be very expensive. But what if I told you that certain algorithms can overcome this problem?

Kernel methods have the capability of working in a high-dimensional space without performing any explicit transformation of the data. Basically, given a higher-dimensional space, where we will have more chance of linearly splitting the data, the objective consists in finding the expression of an inner product that only operates on the original training data. This ‘modified’ inner product is then used to define the kernel or similarity function used by the kernel method. Any kind of linear approach can be converted into a non-linear version by applying the kernel trick.
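Here is a minimal sketch of the kernel trick in action: two concentric circles that no straight line can separate become almost perfectly separable when the SVM’s linear kernel is swapped for an RBF kernel, with no explicit feature mapping on our side.

```python
from sklearn.datasets import make_circles
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Two concentric circles: not linearly separable in the original space
X, y = make_circles(n_samples=400, factor=0.3, noise=0.1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for kernel in ["linear", "rbf"]:
    clf = SVC(kernel=kernel).fit(X_tr, y_tr)
    print(kernel, "test accuracy:", clf.score(X_te, y_te))
# The linear kernel stays near chance (~0.5), while the RBF kernel
# separates the circles almost perfectly.
```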

One of the most interesting things about SVMs is their capability of working with kernels. This way, if we identify that the kernel we are currently using (e.g. linear) is not able to properly divide the training data, we can always apply the kernel trick. In this link https://www.youtube.com/watch?v=3liCbRZPrZA you will find a nice visual example. But of course, I remind the reader that before using it, we will need to figure out whether the features we are currently using are discriminative enough, or whether the optimisation parameters selected in the training stage are the appropriate ones. This task, known as feature/parameter selection, has also been addressed for many years in machine learning. For instance, there are some learning algorithms that internally select the best features. In the next post we will talk about these algorithms.

If you have any question regarding this post, or you want to learn more about learning algorithms, please do not hesitate to send us a message.


BCI Evolution: Exploring Opportunities


Brain-computer interface (BCI) based technology is thriving and has the potential to spread into society by addressing the needs of various user groups under different application scenarios. Wolpaw and Wolpaw (2012) identified the five principal BCI scenarios or application types that have evolved so far, namely: replace, restore, improve, supplement and enhance.

 


Communication is perhaps the most fulfilling immediate use of BCI systems for a patient, her family and caregivers when no intelligible interaction can otherwise take place (Birbaumer et al., 1999). In this case, the BCI output clearly replaces the patient’s natural communication function lost as a result of injury or disease. Even simple interactions to make needs known, answer questions with a simple yes or no, and select among a small matrix of choices may reintegrate the isolated patient with others. Similarly, a person may wish to replace lost limb function using BCIs as wheelchair controllers in real (Philips et al., 2007) or virtual environments (Leeb et al., 2007), or as appliance adjusters, altering body position in an electric bed for comfort as well as to decrease the chance of developing bed sores. Additionally, BCIs can be used to operate prosthetic or functional electrical stimulation devices in invasive (Hochberg et al., 2006) or non-invasive (Pfurtscheller et al., 2003) restoration of lost natural outputs, such as motor or bladder function in paralyzed humans.

 

Interestingly, BCIs can now be considered neurorehabilitation tools to improve muscular activation and limb movements in impaired post-stroke patients in clinical settings, for example. Generally, BCIs could help stimulate cortical plasticity, leading to the recovery of some lost functions (Carabalona et al., 2009). Following this approach, BCI-based cognitive rehabilitation may be among the most outstanding applications: it could benefit a large number of patients, ranging from completely locked-in patients (Kübler and Birbaumer, 2008) to patients with cognitive impairment, by progressively improving their cognitive deficits. Evidence to support the use of computer-based cognitive rehabilitation programs has been growing for the last decade, with examples extending to memory (Tam and Man, 2004); working memory (Johansson and Tornmalm, 2012); attention (Zickefoose et al., 2013); and visual perception (Kang et al., 2009). A more futuristic scenario might be using a BCI to supplement a natural neuromuscular output with an additional, artificial (i.e., robotic) output (Wolpaw and Wolpaw, 2012). Last but not least, another area of increasing research interest is the recognition of the user’s mental states (e.g., stress or attention levels, “cognitive load” or “mental fatigue”) and cognitive processes (e.g., learning or awareness of errors), which will facilitate interaction and stimulate the user’s interest. In these cases, by preventing stress or attentional lapses, the BCI enhances the natural output. Basically, we can think of neurofeedback or gaming applications that enhance the user’s performance.

For further information refer to the original manuscript (Otal et al., 2014), Towards BCI Cognitive Stimulation: From Bottlenecks to Opportunities. There, we explored potential new opportunities and research breakthroughs for BCI-based cognitive stimulation applications to enhance people’s overall performance in clinical and non-clinical settings and maintain general wellbeing and quality of life.

Part of this work was based on the BNCI Horizon 2020 project. Enjoy reading the recently published BNCI Horizon 2020 roadmap and imagine… Enobio may soon enhance your cognitive performance and improve your neurorehab training!

 



References:

  • Birbaumer, N., Ghanayim, N., Hinterberger, T., Iversen, I., & Kotchoubey, B. (1999). A spelling device for the paralysed. Nature, 398, 297-298.
  • Carabalona, R., Castiglioni, P., & Gramatica, F. (2009). Brain–computer interfaces and neurorehabilitation. Stud. Health Technol. Inform 145: 160-176.
  • Johansson, B., & Tornmalm, M. (2012). Working memory training for patients with acquired brain injury: effects in daily life. Scandinavian Journal of Occupational Therapy, 19, 176-183.
  • Kang, A.S.H., Kim, D.K., Kyung, M.S., Choi, K.N., Yoo, J.Y., Sung, S.Y., & Park, H.J. (2009). A computerized visual perception rehabilitation programme with interactive computer interface using motion tracking technology − a randomized controlled, single-blinded, pilot clinical trial study. Clinical Rehabilitation, 23, 434-446.
  • Kübler, A., & Birbaumer, N. (2008). Brain-computer interfaces and communication in paralysis: extinction of goal directed thinking in completely paralysed patients? Clin Neurophysiol., 119(11):2658-66
  • Leeb, R., Friedman, D., Mueller-Putz, G.R., Scherer, R., Slater M., & Pfurtscheller, G. (2007). Self-paced (asynchronous) BCI control of a wheelchair in virtual environments: A case study with a tetraplegic, Computational Intelligence and Neuroscience, 1-8.
  • Otal, B., Vargiu, E., & Miralles, F. (2014). Towards BCI Cognitive Stimulation: From Bottlenecks to Opportunities. Proceedings of the 6th International Brain-Computer Interface Conference 2014, Graz, Austria.
  • Philips, J., del R. Millán, J., Vanacker, G., Lew, E., Galán, F., Ferrez, P.W., Van Brussel, H., & Nuttin, M. (2007). Adaptive shared control of a brain-actuated simulated wheelchair, Proceedings of the 2007 IEEE 10th International Conference on Rehabilitation Robotics, Noordwijk, The Netherlands, 408-414.
  • Pfurtscheller, G., Mueller-Putz, G.R., Pfurtscheller, J., Gerner, H.J., & Rupp, R. (2003). ‘Thought’-control of functional electrical stimulation to restore hand grasp in a patient with tetraplegia. Neuroscience Letters, 351, 33-36.
  • Tam, S.F., & Man, W.K. (2004). Evaluating computer-assisted memory retraining programmes for people with post-head injury amnesia. Brain Injury, 18, 461–470.
  • Wolpaw, J., & Wolpaw, E. W. (2012). Brain computer interfaces: Something new under the sun. In J. Wolpaw & E. W. Wolpaw (Eds.), Brain-Computer Interfaces: Principles and Practice (pp. 3-14). Oxford: Oxford Univ Press.
  • Zickefoose, S., Hux, K., Brown, J., & Wulf, K. (2013). Let the games begin: A preliminary study using Attention Process Training-3 and LumosityTM brain games to remediate attention deficits following traumatic brain injury. Brain Injury, 27, 707-716.

 


Moving Neuroscience out of the lab


Recently at Starlab we conducted some recordings out of the lab. We equipped some volunteers with a set of EEG, ECG and GSR sensors along with a GoPro camera, so we could capture electrophysiological signals as well as the volunteers’ point of view during the whole experiment.

The idea was to record people while visiting an art exhibition in a museum, to capture their affective state throughout the visit.

During the experiment we faced some challenges and had to sort out some problems arising from this completely new environment for us. I’d like to share that out-of-the-lab experience with you here.

 


 

Hurry up: you’ve got two hours before the place opens to the public

Being out of the lab makes things more fun but a bit more complicated. In our case the art exhibitor gave us a two-hour window before the opening to have the volunteers wandering around the exhibition alone. Taking into account that a normal visit could last around 45-50 minutes, this presented our first challenge: how to set up all the sensors in less than 15 minutes.

Our main concern was the EEG sensor, since we meant to record up to twenty channels and we know how important a good set-up is to obtain meaningful signals.

With such a constraint, using dry electrodes might have been an option. You can place this type of electrode directly in the cap, so you just need to put it on the volunteer and place the reference. This would certainly have sped up the set-up, but at a cost we weren’t sure we could afford: motion artefacts. Dry electrodes would be more susceptible to volunteers’ movement, and in this particular experiment, where they would be freely visiting the exhibition, that could have been a big problem.

We therefore preferred to go for classic wet electrodes. We replicated the set-up at our premises several times before doing the actual recordings in the art exhibition, because we wanted to be sure that everything could be done in the proper amount of time. If you go out of the lab, make sure that at least everything works _in_ the lab.

 

All right, you put gel in my hair, now I need a shower

In this experiment we explored a new electrode technology for half of the recordings. We wanted to deal with the problem of putting gel in the volunteers’ hair: obviously they wanted to wash their hair before heading to work after the experiment, but this wasn’t possible at the art gallery.

Our colleagues at Neuroelectrics provided us with new solid-gel electrodes. They made as good a contact as regular gel, but since they were solid (imagine a piece of gelatine-like material), they barely left any trace in the hair. They were a perfect solution to the “shower” problem.

 

Triggering the experiment

In a free environment like the one we had, triggering the experiment so that the different conditions can be analysed later might be difficult. In our case it was necessary to track when the volunteers observed each of the stimuli (the pieces of art in the exhibition).

If everything had been in our hands and there had been no budget limitation, we would have gone for a dedicated solution where every piece of art would have an NFC or RFID emitter. That way, by including a receiver in our set of sensors, we could have matched the electrophysiological signals with the observation of every artwork.

However, the real world has its limitations, and we could not invade the room and the artwork with extra equipment that would have to be removed before opening to the public. Our compromise solution was to print some QR codes and place them strategically all over the room. The GoPro then recorded them, so we could later process the images and determine when each volunteer observed each picture in the room (a minimal sketch of this idea is shown below). Out of the lab you need to be imaginative to find solutions that meet the constraints of not owning the place.
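As a rough sketch of how this could work (the file name is hypothetical, and this is not necessarily the exact pipeline we used), OpenCV’s QR detector can be run over the GoPro frames to timestamp when each artwork’s code is in view:

```python
import cv2

detector = cv2.QRCodeDetector()
cap = cv2.VideoCapture("volunteer01_gopro.mp4")  # hypothetical file name
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if metadata is missing

events = []  # (time in seconds, artwork id) pairs
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    data, points, _ = detector.detectAndDecode(frame)
    if data:  # a QR code placed next to an artwork was decoded
        events.append((frame_idx / fps, data))
    frame_idx += 1
cap.release()

# 'events' can now be aligned with the EEG/ECG/GSR time-series to
# segment the signals per artwork.
print(events[:5])
```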

 

Presenting the results

Eventually we managed to record all the signals and analyse them to obtain some nice results (though, being a pilot with few subjects, the goal was not statistical significance but to demonstrate the viability of the technology for carrying out this type of experiment). The next challenge to sort out was how to present the results so they could be consumed by a non-technical audience. We opted to put all the results in a web service with dynamic tables, so the client could sort the obtained results by volunteer, stimulus or grand average. Of course we also delivered a face-to-face presentation where we explained the obtained results.

All in all it was a great experience that we are delighted to repeat in the future. What would be the next scenario?


SONAR – BRAINPOLYPHONY


Since its first edition in 1994, the Sonar Festival has definitely been one of the electronic music festivals you cannot miss. For its sixth consecutive year in Barcelona, Sonar+D organized its famous hackathon along with the Music Technology Group of the Universitat Pompeu Fabra. Sonar Music Hack Day (MHD) is a hacking session in which participants conceptualize, create and present their projects. Music + software + mobile + hardware + art + the web: anything goes as long as it’s music related. The challenge is to develop disruptive technologies useful for music composition and performance based on non-conventional interfaces. This year the hacking was focused on wearable technologies. 100 hackers were selected to participate, and during the first two days of Sonar+D they implemented new software and hardware prototypes for over 24 hours. Companies participating in this hackathon provided APIs, SDKs and their sensors and devices to hackers willing to use them for their sonification projects. For the third year, Starlab and Neuroelectrics sponsored Sónar MHD, providing hackers with Enobio electrophysiological sensors and the necessary tools and support for EEG/ECG/EMG feature calculation and streaming.

 


 

This year two participants of the Brainpolyphony consortium, Jordi Sala from CRG/Mobility Lab and myself from Starlab, took part and developed a hack on body sonification inspired by the Brainpolyphony project. Our hack was named ‘One Man Band’. We aimed to extract electrophysiological and movement measures from one person wearing a wireless sensor and convert them into music. We wanted each measure to be mapped onto an independent instrument or tune, so one of us could act as a DJ mixing them, creating a novel musical interface between a DJ and the body signals converted into sound.

 

The hack used Enobio in its 8-channel version to measure brain activity (EEG) via 6 electrodes placed on the surface of the scalp, heart activity via an electrode placed on the left wrist, muscular activity via an electrode placed on the biceps, and head movements using Enobio’s integrated accelerometer. All this information was streamed via Bluetooth to a piece of software built on top of the Enobio API that calculated electrophysiological features in real time. From the brain’s electrical activity we computed valence and arousal emotional features. Arousal is a measure of how strong the emotion is, while valence tells us whether this emotion is positive or negative. Based on valence and arousal measures, complex emotions (fear, happiness…) can be mapped, as we can see here. From the heart we detect the heart rate (how many times the heart beats per minute), the heart rate variability (the variation in the time interval between heartbeats), and when each heartbeat occurs. Heart rate and heart rate variability are also arousal measures. The electrode placed on the arm measures muscular activity, so we can detect when a muscle is active and when it is not. Finally, from the accelerometer it is possible to measure head movements and, by defining a threshold, to detect when a large head movement occurs.

 

All this information is sent via TCP/IP to a PD (Pure Data) patch that is in charge of the sonification. When the heart beats, a drum is played; depending on the level of the heart rate and heart rate variability, the pitch of a tune changes, making the tune faster or slower; when the system detects that the muscle has been activated (for example when raising the arm), a guitar solo is played; valence and arousal levels are mapped onto complex sounds; the accelerometer is mapped onto a tone whose frequency changes; and finally, when a large movement is detected, the bongos are played. If the user wearing Enobio is able to self-regulate his emotions, he will be able to change the music and sounds coming out of his body. The user can also interact by moving his head or raising his arm. In the following video you can have a look at the presentation of our hack: ‘One Man Band’, mixing music coming out of your body!
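To give a flavour of that last hop, here is a minimal, hypothetical sketch of how a detected heartbeat could be forwarded from Python to a Pure Data patch over TCP/IP using PD’s FUDI message format. The port and message names are illustrative and would have to match the patch’s [netreceive] object.

```python
import socket
import time

pd = socket.create_connection(("127.0.0.1", 3000))  # PD's netreceive port

def send_to_pd(message: str):
    # FUDI messages are terminated with a semicolon
    pd.sendall((message + ";\n").encode())

# Hypothetical beat times (seconds) coming from an ECG R-peak detector
beat_times = [0.0, 0.82, 1.65, 2.43]
start = time.time()
for t in beat_times:
    time.sleep(max(0.0, t - (time.time() - start)))
    send_to_pd("beat 1")          # triggers the drum in the patch
    send_to_pd("heartrate 73")    # illustrative bpm value for pitch mapping
pd.close()
```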

EL PAÍS: http://elpais.com/elpais/2015/07/08/ciencia/1436365941_292625.html?utm_content=17767446&utm_medium=social&utm_source=twitter

EL MUNDO: http://www.elmundo.es/economia/2015/07/08/559d0a1822601dd5208b45a0.html

EL ECONOMISTA:  http://www.eleconomista.es/salud/noticias/6855062/07/15/Desarrollan-el-primer-prototipo-que-transforma-las-emociones-en-sonidos.html

 

 

 


Can cortical networks be used to analyse EEG signals?


Broadly speaking, when we talk about analysing EEG signals, we are talking about how to use those time-varying signals to classify clinical conditions or behavioural events, or to predict responses (is the person awake or not? is this person going to develop Parkinson’s?). As such, learning temporal dependencies in those time series, and predicting or classifying events with them, is an essential step in the study of EEG signals.

But wait, isn’t that one of the main functions of our brain? Our neurons, embedded in networks, continuously process time-varying sensory information and, by integrating and learning temporal dependencies in those signals, they are able to perform classification and prediction, ensuring our own survival (Friston and Kiebel 2009). So, what can we learn from our own cortical networks for the analysis of EEG signals?

Neurons in the brain are wired with an abundance of recurrent and feedback connections. These structures, which continuously process sensory information, are able to integrate, remember and manipulate it to produce an output function or behaviour. How can that occur? The presence of recurrent and feedback structures, together with the slow plasticity mechanisms intrinsic to those neurons, allows the cortical network to remember past stimuli. Large populations of neurons that perform a non-linear transform of their stimulus (the generation of an action potential) act as a reservoir to integrate and manipulate stimulus information. With this organization in mind, the learning of temporal dependencies for prediction and classification can be achieved by training a simple linear readout that observes the network activity (see Figure).

 


 

This approach is known as the reservoir-computing framework (Buonomano and Maass 2009). In essence, the theory states that systems of recurrently connected non-linear dynamical units (with decaying memory traces) are able to perform history-dependent computations on time-varying stimuli, which are decoded from the activity itself by readout units. Interestingly, the generality of this framework is exemplified by recent advances where the cortical recurrent network has been replaced by a single delayed non-linear differential equation (or a laser, see Appeltant et al., 2011), a bacterial cell culture (Jones et al. 2007) or a soft-silicone body (Nakajima et al. 2015). A minimal sketch of the idea follows.
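To make the framework concrete, here is a minimal echo-state-network sketch on toy data (this is not the method of the cited studies): a fixed random recurrent reservoir transforms an input time-series, and only a linear readout is trained.

```python
import numpy as np

rng = np.random.default_rng(0)
n_res, n_in = 200, 1

# Fixed random reservoir, scaled to spectral radius < 1 (decaying memory)
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.normal(size=(n_res, n_in))

def run_reservoir(u):
    """Drive the reservoir with input u (T x n_in); return all states."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in @ u_t)  # non-linear recurrent update
        states.append(x.copy())
    return np.array(states)

# Toy task: classify whether a noisy sine has low or high frequency
def make_trial(freq):
    t = np.linspace(0, 1, 100)
    return (np.sin(2 * np.pi * freq * t) + 0.1 * rng.normal(size=100))[:, None]

X = np.array([run_reservoir(make_trial(f)).mean(axis=0)
              for f in [2] * 50 + [8] * 50])
y = np.array([0] * 50 + [1] * 50)

# Train only the linear readout (plain least squares here)
w, *_ = np.linalg.lstsq(X, y - 0.5, rcond=None)
pred = (X @ w > 0).astype(int)
print("training accuracy:", (pred == y).mean())
```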

And now, imagine we set up our EEG experiments (properly designed, see our post on ‘5 Basic Guidelines to Carry Out a Proper EEG Recording Experiment’) and collect a set of time-series associated with diverse clinical or behavioural conditions. Could we have a cortical network model processing those signals and performing the prediction or classification of the conditions we are interested in? Yes. And there are, actually, a few studies that have applied this framework to the analysis of EEG and the classification/prediction of clinical or behavioural conditions. For instance, Buteneers et al. 2013 successfully predicted epileptic seizures from intracranial EEG, with an average detection delay of 1 sec, outperforming other state-of-the-art algorithms used in clinical settings, with a sensitivity and specificity above 95%. Similarly, Schliebs et al. 2013 achieved a classification accuracy of 82% when analysing EEG recorded during a relaxed state and a memory task. So far, promising!

But one of the most interesting things about the reservoir-computing framework is that, as the reservoir is a dynamical system, the analysis of its dynamics can be performed in real time too. In other words, the classification/prediction performed by the readouts can be evaluated in continuous time as the EEG data arrives at the system (see Lukoševičius and Jaeger 2009 for a review of readout algorithms). This could have tremendous implications for brain-computer interface (BCI) approaches (and brain-to-brain (BtB) approaches!), which essentially rely on the efficiency of the algorithms performing the closed-loop information transfer between computers and brains.

Another relevant aspect is that, with this approach, there is no need to explicitly indicate which features of the EEG we may be interested in (is there a power modulation at a particular frequency? coherence?). The prediction/classification of stimuli can be performed on the network responses arising from the EEG signals, which are automatically transformed to a higher-dimensional space by the non-linear nodes of the reservoir. However, this can also be seen as a drawback from the point of view of pure neuroscientific curiosity: if there is no explicit knowledge of which particular features of your signals allow such prediction or classification, how can we understand the mechanism? So far, the algorithm can only be used as a black box that internally creates and selects the best features, especially when some learning process is added to the network.

By introducing learning, we bring in another critical aspect of this approach: the number of parameters involved in the algorithm is rather high, and careful parametrization is necessary to ensure the stability of the system (Verstraeten and Schrauwen, 2009). While such discussion is reminiscent of the artificial neural network community, recent studies propose the use of biologically based synaptic plasticity mechanisms as self-organizing rules that ensure the stability of the network (Lazar et al., 2009; Castellano 2014).

Putting together all we have seen, the usage of biologically based cortical networks as algorithms for the analysis of EEG data seems promising, especially where the implementation allows for the real-time analysis of data. Could we learn from it and improve clinical prediction or classification algorithms? Keep posted! If you have any question regarding this post, or you want to learn more about learning algorithms, please do not hesitate to send us a message (also, very interesting presentations can be found here and here).

 


Appeltant, L., Soriano, M. C., Van der Sande, G., Danckaert, J., Massar, S., Dambre, J., … Fischer, I. (2011). Information processing using a single dynamical node as complex system. Nature Communications, 2, 468.

Buonomano, D. V, & Maass, W. (2009). State-dependent computations: spatiotemporal processing in cortical networks. Nature Reviews Neuroscience, 10, 113–125.

Castellano, M. (2014). Computational Principles of Neural Processing : modulating neural systems through temporally structured stimuli (PhD thesis, chapter 2).

Friston, K., & Kiebel, S. (2009). Predictive coding under the free-energy principle. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 364(1521), 1211–1221.

Jones, B., Stekel, D., Rowe, J., & Fernando, C. (2007). Is there a liquid state machine in the bacterium Escherichia coli? IEEE Symposium on Artificial Life.

Lazar, A., Pipa, G., & Triesch, J. (2009). SORN: a self-organizing recurrent neural network. Frontiers in Computational Neuroscience, 3(October), 23.

Lukoševičius, M., & Jaeger, H. (2009). Reservoir computing approaches to recurrent neural network training. Computer Science Review, 3(3), 127–149.

Nakajima, K., Hauser, H., Li, T., & Pfeifer, R. (2015). Information processing via physical soft body. Scientific Reports, 5, 10487. http://doi.org/10.1038/srep10487

Schliebs, S., Capecci, E., & Kasabov, N. (2013). Spiking neural network for on-line cognitive activity classification based on EEG data. In Neural Information Processing. Springer Berlin Heidelberg.

Verstraeten, D., & Schrauwen, B. (2009). On the Quantification of Dynamics in Reservoir Computing. Lecture Notes in Computer Science, 985–994.

Out-of-Body Experiences and Neuroscience


“The easiest path is not always the best” Lobsang Rampa, The Third Eye 

 

Ever since I read Lobsang Rampa’s The Third Eye as a teenager, I have been fascinated with astral travelling or, as modern science calls it, Out-of-Body Experiences (OBEs).

In recent years, OBEs have been studied from a neuroscientific point of view, and I would like to share in this post some of the research that has been done in this area.

But first let’s start with some definitions and some interesting information. An OBE is an experience that

“… typically involves a sensation of floating outside one’s body and, in some cases, perceiving one’s physical body from a place outside one’s body … OBE can be induced by brain traumas, sensory deprivation, near-death experiences, dissociative and psychedelic drugs, dehydration, sleep and electrical stimulation of the brain, among others. It can also be deliberately induced in some. One in ten people have an OBE once, or more commonly, several times in their life.”

In this paper a single case study is presented. The participant was a 24-year-old woman who reported being able to produce OBEs wilfully. The researchers used fMRI to monitor functional brain changes and saw differences between the OBE and motor imagery.

“Activations were mainly left-sided and involved the left supplementary motor area and supramarginal and posterior superior temporal gyri, the last two overlapping with the temporal parietal junction that has been associated with OBE’s. The cerebellum also showed activation that is consistent with the participant’s report of the impression of movement during the OBE. There was also left middle and superior orbital frontal gyri activity, regions often associated with action monitoring. The results suggest that the OBE reported here represents an unusual type of kinesthetic imagery.”

Other papers study the induction of OBEs using different techniques. In this other paper, OBEs were induced repeatedly in a patient by invasive electrical stimulation of the right amygdala and the surrounding cortex. The patient reported:

“I see myself lying in bed, from above, but I only see my legs”… The remaining parts of the room, including the table next to the bed and the window, as well as three other people present, were also seen from the above visual perspective. An essential part of the experience was the feeling of being separated from her seen body. She said: “I am at the ceiling” and “I am looking down at my legs”. Two further stimulations induced an identical experience. She felt an instantaneous sensation of ‘floating’ near the ceiling and localized herself ∼2 m above the bed. During these trials, the patient was very intrigued and surprised by the induced responses.

Another technique used for OBE induction is virtual reality (VR), as explained in this paper. The idea is to use fully immersive VR and visuotactile stimulation. The subject sees a humanoid body from a different perspective, which triggers full-body illusions. This technique has been used by different teams to study OBEs and is a variation of the rubber hand illusion.

These experiments show that the multisensory integration mechanism contributes to self-body perception, and that it can be easily hacked.

Another interesting experiment that is still ongoing is the AWARE study. The idea is to place figures in hospitals on suspended boards facing the ceiling, not visible from the floor. Each time there is a near-death experience accompanied by an OBE, the patient is asked if he/she has seen these pictures. At the moment data is still being collected, but so far it seems no one has been able to describe the hidden pictures.

We are still pretty far from fully understanding how the brain works, so logically we are still far from understanding OBEs. In any case, this topic, often associated with spirituality, is starting to be studied from a neuroscientific approach. I think this is quite interesting and not necessarily incompatible with spiritual beliefs. I am quite sure that in the near future much more will be discovered in this research field, so stay tuned!

Thanks for reading and see you next time.


 

Photo credit: from Wikihow under Creative Commons License

On memory and why I (you) can’t remember what I (you) ate today


In psychology, memory is defined as the process by which the brain stores and remembers information from the past. Classical psychological models see memory as a sequential process, where information is first encoded into the brain, and only then can it be used or stored. Whatever information has been encoded is then represented in the brain and can be saved over prolonged periods of time. Once stored, a retrieval mechanism allows that information to be accessed or recalled (see upper-right Figure, adapted from Dudai 2004). The timespan over which those operations occur allows psychologists to differentiate between short-term memory (seconds to a minute) and long-term memory (potentially unlimited). When short-term memories are manipulated and applied to cognitive tasks, memory is identified with the term working memory.


When it comes to the biological substrates of memory, these stages are not so well defined and much less understood. Overall, all aspects related to memory seem to be distributed processes, involving a large number of brain areas and several encoding mechanisms. For instance, the hippocampus seems to be involved in the encoding of new information: when it is damaged, the ability to form or encode new memories is lost, although old memories are safe and can be recalled. This process seems to depend on the ability of the hippocampus to coordinate the activity of its neurons and produce oscillations at theta frequency (Buzsáki and Moser 2013). In parallel, frontal areas of our brain seem to be involved in the manipulation of memories that are already stored, as patients with damage to the frontal lobe show deficits in recall (Simons and Spiers 2003). In frontal areas, memories seem to be encoded in asynchronous activity across large populations of neurons (Baeg et al 2003). How are these distant brain areas and different neuronal codes coordinated?

On top of that complexity, the content of what is encoded as a memory can also be modulated by cognitive demands. Or your age. For instance, paying attention to a particular event increases the likelihood of its storage, while the presence of distractors, like an interesting conversation over lunch, or a sudden sound while you are learning a phone number, will reduce the likelihood of its storage (Cowan 1997). Attention, expectation, novelty, reward, or your age will influence how well your memory works. Similarly, dementia, Alzheimer's disease and other neurological diseases affect brain areas that are involved in several aspects of memory (see right side of the Figure). And, going back to the core question, how do all these areas communicate to give rise to the memory phenomena?

Emerging evidence from imaging, neurophysiology and computational modelling studies (Sauseng et al. 2005; Brincat and Miller 2015; Wimmer et al. 2014) indicates that precisely timed interactions between these (and several other) brain areas are crucial for memory function. But the particular mechanisms that support memory function through precise communication across brain areas are not sufficiently understood. Will that improve in the next few years?

 


References:

  • Baeg, E. H., Kim, Y. B., Huh, K., Mook-Jung, I., Kim, H. T., & Jung, M. W. (2003). Dynamics of population code for working memory in the prefrontal cortex. Neuron, 40(1), 177–188.
  • Brincat, S. L., & Miller, E. K. (2015). Frequency-specific hippocampal-prefrontal interactions during associative learning. Nature Neuroscience.
  • Buzsáki, G., & Moser, E. I. (2013). Memory, navigation and theta rhythm in the hippocampal-entorhinal system. Nature Neuroscience, 16(2), 130–138.
  • Cowan, N. (1997). Attention and Memory. Oxford University Press.
  • Dudai, Y. (2004). The neurobiology of consolidations, or, how stable is the engram? Annual Review of Psychology, 55, 51–86.
  • Fell, J., & Axmacher, N. (2011). The role of phase synchronization in memory processes. Nature Reviews Neuroscience, 12(2), 105–118.
  • Ranganath, C., & Ritchey, M. (2012). Two cortical systems for memory-guided behaviour. Nature Reviews Neuroscience, 13(10), 713–726.
  • Sauseng, P., Klimesch, W., Schabus, M., & Doppelmayr, M. (2005). Fronto-parietal EEG coherence in theta and upper alpha reflect central executive functions of working memory. International Journal of Psychophysiology, 57(2), 97–103.
  • Simons, J. S., & Spiers, H. J. (2003). Prefrontal and medial temporal lobe interactions in long-term memory. Nature Reviews Neuroscience, 4(8), 637–648.
  • Wimmer, K., Nykamp, D. Q., Constantinidis, C., & Compte, A. (2014). Bump attractor dynamics in prefrontal cortex explains behavioral precision in spatial working memory. Nature Neuroscience, 17(3), 431–439.

Solving BCI bottlenecks for medical applications

0
0

Medical BCI-based applications are still limited. Neuroelectrics is addressing the most critical issues so that they can be effectively adopted in clinical settings or in users’ homes.

Carabalona et al. (2009) discussed several points that need to be considered when designing, selecting and using a BCI system for neurorehabilitation purposes. In particular, they emphasized the importance of technology acceptance and usability, as well as issues related to the impact on the patient’s emotional and motivational states. Although some users and assistive technology experts may be quite satisfied with some BCI devices, others could not imagine using most of the devices in daily life without further improvements (Zickler et al., 2011). User-centered design is critical, and testing medical BCI applications only with healthy users may be inadequate.

Overall, medical and other emerging non-medical BCI applications may encounter similar bottlenecks. The main obstacles range from long preparation and setup times to the ergonomics of the electrode caps and the low speed and lack of reliability of the BCI system. The learning curve is also a major drawback: the user must learn completely new skills to operate a BCI system. Therefore, there is still an important need to overcome the following identified “key factors” (Allison, 2010) for better BCI adoption:

  1. Cost (financial, help, expertise, training, invasiveness, time, attention, fatigue)
  2. Throughput (accuracy, speed, latency, effective throughput)
  3. Utility (support, flexibility, reliability, illiteracy)
  4. Integration (functional, distraction quotient, hybrid/combined BCIs, usability)
  5. Appearance (cosmetics, style, media, advertising)

Following a user-centered approach and increasing engagement with the appropriate end users while designing, selecting and using a BCI system will offer the opportunity to increase technology acceptance and usability. The possibility of using dry or solid-gel electrodes for EEG acquisition, and the monitoring of psychological effects during BCI tasks, may give rise to a broader range of cognitive rehabilitation and stimulation programs (Otal et al., 2014).

Following such a view, Neuroelectrics keeps on working on non-conventional sensors for less obtrusive brain signal recording, and affective interfaces able to adapt the BCI according to emotional status changes in the patient.


Captura de pantalla 2015-08-13 a las 17.37.58

 

 

 


 


 

References

Allison, B. Z. (2010). Toward Ubiquitous BCIs. In B. Graimann, B. Z. Allison, & G. Pfurtscheller (Eds.), Brain-Computer Interfaces. Springer Berlin Heidelberg.

Carabalona, R., Castiglioni, P., & Gramatica, F. (2009). Brain–computer interfaces and neurorehabilitation. Stud. Health Technol. Inform 145: 160-176.

Otal, B., Vargiu, E., & Miralles, F. (2014). Towards BCI Cognitive Stimulation: From Bottlenecks to Opportunities. Proceedings of the 6th International Brain-Computer Interface Conference 2014, Graz, Austria.

Zickler, C., Riccio, A., Leotta, F., Hillian-Tress, S., Halder, S., Holz, E., Staiger-Sälzer, P., Hoogerwerf, E.J., Desideri, L., Mattia, D., & Kübler, A. (2011). A brain-computer interface as input channel for a standard assistive technology software. Journal of Clinical EEG & Neuroscience.

Part of this work was based on the original manuscript, Towards BCI Cognitive Stimulation: From Bottlenecks to Opportunities. There, we explored potential new opportunities and research breakthroughs for BCI-based cognitive stimulation applications that enhance people’s overall performance in clinical and non-clinical settings, to maintain general wellbeing and quality of life.

From the lab to the Market: The experience lab

0
0

©neuroelectrics49

The path from lab research to the market is a tough one. Nearly all research funding calls ask you to report on your exploitation plans, so you need to figure out how the outcomes of your research could reach the market. The truth is that it is not always possible to materialize your findings in an actual product or service.

 

Here at Starlab we try hard to walk down this path successfully as many times as possible. We always say that our main mission is to transform science into technologies with a real positive impact on society. Take as an example Enobio and StarStim, two products that are today helping both the health and research communities.

 

Inspired by this motto, we recently decided to collect all the outcomes that we obtained working on different EU-funded research projects related to user affective state. As a result, we happily brought to market The Experience Lab: a neurophysiological sensing platform and data analysis service for characterizing user affective state.

 

The starting point of this sensing platform can be found in the projects INTERSTRESS and BEAMING. In the former, we investigated whether stress markers could be found in EEG signals. The latter brought affective computing and immersive technologies together to set up a whole telepresence experience.

 

Those projects, especially BEAMING, brought the opportunity to work with physiological sensors and signals other than EEG which, as you might know, we have investigated for many years in different use cases and scenarios (such as sleep studies, brain-computer interfaces, response to brain stimulation, biometrics, or biomarkers for early detection of neurodegenerative diseases like Parkinson’s disease).

 

Thanks to the integration effort made in those projects, not only from the hardware perspective (synchronising different wireless sensors) but also in combining the analyses of the different physiological signals, we ended up with an easy-to-use neurophysiological sensing platform along with a software back-end that can characterize the user’s affective state.

 

©neuroelectrics49

 

The sensing platform is based on Enobio but integrates other physiological sensors. Up to now, we have successfully integrated sensors to read brain activity (of course, by using Enobio), heart activity (either using Enobio or a Shimmer sensor), galvanic skin response (through a Shimmer sensor), and breathing rhythm (by using a chest belt with an embedded piezoelectric sensor). The platform is also ready to record facial expressions with an HD webcam.

 

The whole system is fully customizable, since the sensing platform can be configured to use as many sensors as needed depending on the type of experiment and the needs of the client. For instance, just the EEG sensor was used in the context of a simulated shopping campaign by a company studying what consumers wanted and how they shopped. In another experiment, conducted in a museum, the sensing platform was equipped with sensors for reading brain and heart activity as well as the galvanic skin response. A GoPro camera was added to the sensing platform on that occasion; its purpose was not to record facial expressions but to record the user’s perspective, so as to synchronize the user’s affective state with the artwork in sight (more details of this experiment can be found here). The whole system, with all the sensors listed above fully integrated, is currently in action at a company whose products are evaluated by its professional testers. They are now assessing their products not only with the classical questionnaires but also taking into account the physiological analysis provided by The Experience Lab.
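To make this configurability concrete, here is a purely illustrative sketch of how such experiment-specific sensor configurations might be declared. The field names and structure are hypothetical, not an actual Neurokai/Starlab API:

```python
# Hypothetical, for illustration only: declaring which sensors an Experience
# Lab-style platform records in two of the experiments described above.
museum_config = {
    "eeg":   {"device": "Enobio"},    # brain activity
    "ecg":   {"device": "Enobio"},    # heart activity
    "gsr":   {"device": "Shimmer"},   # galvanic skin response
    "video": {"device": "GoPro",
              "purpose": "first-person view, for stimulus synchronization"},
}

shopping_config = {
    "eeg": {"device": "Enobio"},      # EEG-only campaign
}
```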

 

The analysis service provided by The Experience Lab is based on a back-end processing chain that takes the raw data recorded by the sensing platform and processes it to extract different features that are correlated with the user’s state. That processing chain, like the sensing platform, can be fully adapted according to the type of signals present at recording time and the type of analysis and reports the client is interested in.

 

The user state characterization provided by the analysis service is based on a valence-arousal approach. The back-end processing chain takes each of the signals recorded by the sensing platform individually and extracts different features that are correlated either with the user’s valence (positive/negative emotion domain) or arousal (high/low emotion intensity domain). The features extracted from the galvanic skin response, for instance, are correlated with the user’s arousal, and the ones extracted from the heart activity can also be correlated with emotion intensity. In the case of brain activity, we extract EEG features that correlate either with valence or with arousal. Detected facial expressions can be categorized into pre-defined emotions, which can be mapped to specific levels of arousal and valence. You can check the emotional granularity entry in Wikipedia to better understand this mapping of valence and arousal to different emotions and user states.
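As a rough illustration of this per-signal feature extraction, the sketch below computes two commonly used indices: frontal alpha asymmetry from a pair of EEG channels (often used as a valence correlate) and mean skin conductance level from GSR (a coarse arousal correlate). The sampling rates, channel pairing and band limits are assumptions for the example, not the actual Experience Lab processing chain:

```python
import numpy as np
from scipy.signal import welch

FS_EEG = 500   # assumed EEG sampling rate (Hz)
FS_GSR = 128   # assumed GSR sampling rate (Hz)

def alpha_power(signal, fs, band=(8.0, 12.0)):
    """Mean power spectral density in the alpha band (Welch estimate)."""
    freqs, psd = welch(signal, fs=fs, nperseg=2 * fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

def valence_feature(eeg_left, eeg_right, fs=FS_EEG):
    """Frontal alpha asymmetry (log right minus log left alpha power),
    a widely used EEG valence correlate. Using the F3/F4 pair is an
    assumption of this sketch."""
    return np.log(alpha_power(eeg_right, fs)) - np.log(alpha_power(eeg_left, fs))

def arousal_feature(gsr):
    """Mean skin conductance level: a coarse arousal correlate."""
    return float(np.mean(gsr))

# Usage on synthetic data (10 s of noise standing in for real recordings):
rng = np.random.default_rng(0)
f3 = rng.standard_normal(10 * FS_EEG)
f4 = rng.standard_normal(10 * FS_EEG)
gsr = 2.0 + 0.1 * rng.standard_normal(10 * FS_GSR)
print(valence_feature(f3, f4), arousal_feature(gsr))
```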

 

The extracted features mentioned above are combined using fusion algorithms to obtain the final user state representation that the client is interested in (after stimulus presentation, as an average over different periods of time, grand averages over subjects or stimuli subsets, etc.). The final stage of the back-end processing chain is the classification of the user state characterizations by applying machine learning techniques. These techniques can provide information, for instance, on how different stimuli can be classified according to the user’s physiological responses to them. Based on that, an automatic tool could be built to determine how well a user’s response to a new stimulus matches different predefined labels. Let me put this in different words through the following example.

 

Let’s consider an auditory sensory experiment where pleasant, unpleasant, relaxing and invigorating audio stimuli are presented. The machine learning algorithms are trained with the physiological emotional features captured while those stimuli are played. Then a new stimulus, different from the previous ones, is presented in order to be characterized. By analysing the users’ responses to this new stimulus, it is possible to provide its level of membership in each of the different categories (pleasant, unpleasant, relaxing, invigorating), so it can be checked whether the new audio stimulus provokes the effect it was meant for.
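Here is a minimal sketch of that last stage, under the assumption that each trial has already been reduced to a (valence, arousal) feature vector. The data are synthetic, and the classifier choice (logistic regression, whose per-class probabilities can serve as membership levels) is ours for illustration, not necessarily what the actual back-end uses:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
labels = ["pleasant", "unpleasant", "relaxing", "invigorating"]

# Fake training set: 20 trials per category, 2 features (valence, arousal).
centers = {"pleasant": (0.8, 0.6), "unpleasant": (-0.8, 0.7),
           "relaxing": (0.5, -0.7), "invigorating": (0.1, 0.9)}
X = np.vstack([rng.normal(centers[l], 0.2, size=(20, 2)) for l in labels])
y = np.repeat(labels, 20)

clf = LogisticRegression(max_iter=1000).fit(X, y)

# Response to a new, unlabelled stimulus: membership level per category.
new_response = np.array([[0.6, -0.5]])   # (valence, arousal)
for label, p in zip(clf.classes_, clf.predict_proba(new_response)[0]):
    print(f"{label}: {p:.2f}")
```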

 

As you can see, The Experience Lab benefits from all our previous experience with wireless sensors, third-party hardware integration, EEG and other physiological signal analysis, as well as data fusion and machine learning algorithms. Almost all of these concepts were first proven in the lab under research projects; now they are in the market through Neurokai, a division of Starlab Neuroscience created some months ago to release The Experience Lab.

 

Neurokai also offers other solutions based on user performance and biomarkers research to provide state-of-the-art neuroscience data services, but let’s leave those interesting applications for other posts.

 



Can Pollutants affect our Physiological Rhythms?

0
0

In all cities, the air we breathe contains emissions from motor vehicles, industry, heating and commercial sources, which have sometimes travelled long distances from sources located far afield. Ambient air pollution is the major environmental contributor to the global burden of disease, contributing to an estimated 4.7 million deaths per year worldwide. Air pollutants can be defined as substances in the air that can negatively affect humans and the ecosystem. These substances can be found in three states: solid particles, liquid droplets, or gases. Major primary pollutants produced by human activity include sulphur oxides (SOx), nitrogen oxides (NOx), carbon monoxide and CO2, among others. At Starlab and Neuroelectrics we are concerned about how pollutants alter our physiological rhythms.

 

©neuroelectrics10

Ambient particulate matter and nanoparticles have been identified as a cause of changes in brain activity and an influence on the central nervous system. Over the past decades, several studies have suggested that inhaled nanoparticles could reach the brain via the olfactory nerves. Passage to the brain is a particular concern since nanoparticles have been demonstrated to be potent inducers of oxidative stress. The reason oxidative stress is so worrying is that it has been related to the appearance of several neurodegenerative diseases such as Parkinson’s or Alzheimer’s disease. MRI evaluations of the brain have yielded two results: firstly, children who live in more polluted locations revealed greater prefrontal lesions; and secondly, highly exposed children and young adults showed upregulated inflammatory markers.

Diesel exhaust pollutants in particular have been identified as an important and harmful source of disease. We can take as an example this study [1], in which a garage environment was simulated through controlled diesel exhaust exposure and brain activity alterations could be measured in the EEG. Participants exposed to diesel exhaust for one hour presented a significant increase in fast-wave activity (β2) in the frontal cortex. But our brain’s response is not the only thing altered by exposure to pollutants. Environmental exposure to suspended particles is also widely associated with cardiovascular diseases and cardiac rhythm abnormalities. Recent studies suggest that pollutants are associated with reduced heart rate variability, which is in turn an independent risk factor for cardiovascular mortality.
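As a simple illustration of how such an EEG effect can be quantified, the sketch below estimates β2 band power with a Welch periodogram before and during exposure. The sampling rate and the 18-25 Hz band definition are our assumptions (the cited study [1] may define β2 differently), and the data are random placeholders:

```python
import numpy as np
from scipy.signal import welch

def band_power(eeg, fs, band):
    """Average power spectral density within `band` (Hz), Welch estimate."""
    freqs, psd = welch(eeg, fs=fs, nperseg=4 * fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

fs, beta2 = 500, (18.0, 25.0)          # assumed sampling rate and band
rng = np.random.default_rng(1)
baseline = rng.standard_normal(60 * fs)  # 1 min of pre-exposure EEG
exposure = rng.standard_normal(60 * fs)  # 1 min during/after exposure

ratio = band_power(exposure, fs, beta2) / band_power(baseline, fs, beta2)
print(f"beta2 power change: {10 * np.log10(ratio):.2f} dB")
```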

There is much more worrying evidence linking pollutants to adverse human health effects, but there are also many questions still to be addressed. At Starlab we want to go a step further, and during the next years we will try to answer some of these questions. Using Enobio as an electrophysiological sensor, we want to go beyond the state of the art in the study of electrophysiological biomarkers, which hopefully will reflect the effect of pollutants on human health. For this study we plan to use concurrent measures of electroencephalography (EEG), electrocardiography (ECG) and air pollution. We are particularly interested in exploring the short- and long-term impact of pollutants and its correlation with cardiovascular and neurophysiological health. But we are not just looking at the effects of pollution. Since it is well known that plants and green areas help to mitigate air pollution and ameliorate the climate, among many other benefits, we are also interested in studying how physiological changes are linked to exposure in polluted green areas compared with non-green polluted areas. Our intention with this part of the study is to measure the benefits of parks and trees in urban areas.

Stay tuned for further updates in this study!


[1] Crüts, B., van Etten, L., Törnqvist, H., Blomberg, A., Sandström, T., Mills, N. L., & Borm, P. J. (2008). Exposure to diesel exhaust induces changes in EEG in human volunteers. Particle and Fibre Toxicology, 5(4), 6.

 

 

The future of our species at the CCCB and Brain Polyphony

0
0

Captura de pantalla 2015-11-01 a las 16.28.00

New technologies constantly appear in our lives. Who could have imagined that the classical music ‘tape’ and its accompanying radio cassette player would end up integrated into our telephones? Or that human communication over long distances would depend on satellite swarms floating around the Earth? Communications, transport and health are clear examples of aspects of our lives that have been shaped by the innovations of the last 50 years.

 

Captura de pantalla 2015-11-01 a las 16.28.00

 

But Homo sapiens has not only been making major technological discoveries to modify its environment. Some of these new technologies are explicitly designed to enhance or improve its living conditions or even its cognition. For instance, recently developed robotic prosthetic limbs can adapt to the wearer’s gait while balancing their body motion, or directly communicate with the user’s brain to send sensations of touch. Analogously, cognitive enhancers (known as nootropics) are a class of drugs that increase the brain’s performance on particular mental functions, and have been used by between 4.5% and 35% of college students for diverse performance enhancements.

In this technology-driven era, more and more technologies get integrated into our daily lives, technologies that evolve from state-of-the-art research. The faster the scientific advancement, the faster the technological development and the stronger the impact on our daily lives. At this rate of change, what is the future of our species?

George Dyson

I view the universe as a phase-space of things that are possible, and we’re doing a random walk among them. Eventually we are going to fill the space of everything that is possible.

The exhibition Human+ at the CCCB explores the technologically enhanced future of our species, inviting the visitor to consider the potential trajectories of humanity and to explore the boundaries of what it means to be human.

 

Captura de pantalla 2015-11-01 a las 16.27.46

 

The exhibition draws together a range of installations that explore the acceptance of cloning in our society, the appearance of cyborgs (humans that physically merge technology and biology), the advancement of assisted reproductive technologies, and the integration of brain-computer interfaces into our daily lives.

From Starlab, we wanted to contribute to this exhibition by presenting our Brain Polyphony setup, a new communication concept in which different cognitive states can be decoded in real time from physiological signals recorded with an Enobio. This alternative communication system is the result of a CRG Awards-funded project involving the Centre de Regulació Genòmica (CRG), Universitat de Barcelona, Mobility Lab, ASDI and Starlab, in which this sonification process is being studied as a therapeutic and communication platform for cerebral palsy patients. Although what is being presented at the CCCB is a simplification of the overall project, we are working very hard to develop this new rehabilitation tool for patients with motor and cognitive impairments.

Our installation aims to bring neurobiological research to the everyday museum visitor with a modern and interactive setup, while presenting the newest technologies that allow brain signals to be interpreted into emotions, and emotions into music. For a couple of hours a day, any visitor to the CCCB can hear songs representative of their affective state during their time at the installation. The spectators will then be able to hear the switch between several cognitive states of that user: active, relaxed, concentrated. Can you willingly change between these? Visit the CCCB.

PS: bringing neurosciences out of the lab is not so simple! If you want to read a great discussion on the matter, do not miss this post from Javier Acedo!

Document 1: Sequences 1

Document 2: Sequences 2

 

Captura de pantalla 2015-10-22 a las 15.42.59

8 reasons why affective computing should be multimodal and include EEG

0
0


Recently my colleagues Javier Acedo and Marta Castellano posted about ExperienceLab, Neurokai’s platform for characterizing the emotional response of users, and its latest presentation in an art exhibition discussing the technology-driven evolution of humans. As you might already know, ExperienceLab gathers data from a user perceiving an external stimulus and associates arousal and valence values to this perception process. The platform is based on Enobio and other sensing devices for the simultaneous measurement of the electrical brain, heart, and skin responses, of respiration rhythms, and of facial expression. Since several emotional states can be defined in the arousal-valence two-dimensional space, the system can associate an emotional response to the stimuli. This finds great applications in user experience analysis, man-machine interfaces, and neuromarketing, among others. Today I will give 8 reasons why such an emotional characterization should be multimodal, i.e. should include data from diverse sensors.

The first reason is conceptual and lies in the so-called sensor gap, which we have already mentioned in older posts on data fusion. When sensing reality, each sensor is sensitive only to particular aspects of it, e.g. to particular visible wavelengths in imaging camera sensors, or to particular frequencies in audio microphones. The only way of getting a more complete representation of reality is to combine the sensitivities of the different sensor units. This conceptual reason translates into practical ones in the case of multimodal affective characterization.

The skin response is the most validated modality for affective characterization. Already in 1888 the neurologist Charles Féré demonstrated the relationship between skin response and emotional response in subjects, and Jung used electrodermal activity (EDA) monitoring in his psychoanalytic praxis. But EDA, nowadays known as Galvanic Skin Response (GSR), can be affected by external factors like humidity and temperature, so one needs to avoid external influences on the outcome.


Robot Kismet, one of the first affective computing applications. Photograph taken by Jared C. Benedict on 16 October 2005. © Jared C. Benedict.

Therefore the first affective multimodal system was born. The polygraph combined GSR with other physiological measures, namely heart rate, respiration and blood pressure. Invented in 1921, it aimed to remove the dependence on environmental factors as much as possible. Although the utility of polygraphs for lie detection has not been proven, all the modalities included have been extensively validated as arousal measures in the scientific literature. However, all the former sensing modalities share two common problems. They react too slowly to external stimuli, which is crucial when implementing real-time systems or for avoiding overly long data collection protocols. Moreover, they are exclusively capable of characterizing arousal, leaving valence out of scope.

The more rapid reaction of other modalities backs up the multimodal extension; this is our fourth reason. Both facial expression and the brain’s electrical activity seem to present a lower latency than the other modalities. The heart rate variability response time is estimated at 6-10 seconds, and the GSR response is even slower, based on what we could observe in different ExperienceLab applications. On the other hand, facial expressions can be detected with computer vision systems within one second.

Furthermore, differences in brain activity in response to emotional stimuli have already been detected in event-related potentials, which are defined in the millisecond domain.

Fifth, the EEG modality stands out not only for its reaction time but also for its capability of detecting subtle changes in mood. While facial expression recognition systems have mostly been trained with extreme facial expressions, EEG-based alternatives have been developed with validated and real-world stimuli. It is not easy to generate emotional responses in lab conditions, but scientific evidence supports the performance of EEG-based measures for characterizing arousal and valence. Next time you see someone monkeying around in front of his laptop camera, you know he is training the facial expression recognition module. A recent paper targets this problem concretely.

Sixth, EEG is also a crucial modality for characterizing valence; no other modality is able to characterize it. We have already commented that the target of polygraph-like modalities is arousal. Facial expression recognition, on the other hand, detects and classifies basic emotions, but does not measure valence, which is one of their components, in an isolated manner. This is one of the most important assets of EEG technology for affective characterization.

The seventh reason for choosing a multimodal setup for affective characterization relates to the facial expression modality, in spite of its outcome limitation. Facial expressions are the most natural way for humans to communicate affect non-verbally; they are natural evolution’s choice and therefore a reliable one. Moreover, they can serve for the practical and quick validation of the system: the videos used for facial expression recognition can be analyzed offline to assess the user’s reaction in a very natural and straightforward manner. Nevertheless, faces can be easily faked.

Faking electrophysiological measurements, however, is not easy, and this is the last motivation for our multimodal advocacy. Self-regulation of electrophysiological indices is doable, but only with extensive training. Except for heart rate, it is extremely difficult for the other modalities, especially for brain activity. Summarizing, we can state that the multimodal characterization of affective reactions offers a better-validated alternative, with quicker reaction times and a more complete and reliable emotional characterization than the unimodal alternatives on their own. This has motivated our choices for inclusion in ExperienceLab. Do you want to test it?

 

Captura de pantalla 2015-10-22 a las 15.42.04

A window into the brain networks: magnetoencephalography (MEG) and simultaneous Transcranial Current Stimulation (tCS).

0
0

fig1.

Based on the large body of already published evidence, non-invasive brain stimulation (NIBS) techniques like tDCS represent a very important approach for the improvement of abnormal brain function in various psychiatric and neurological conditions. NIBS can induce temporary changes in neural oscillations and in performance on various functional tasks. One of the key points in understanding the mechanism of NIBS is knowledge about the brain’s response to current stimulation and the underlying changes in brain network dynamics. Until recently, concurrent observation of the effect of NIBS on the interactions of multiple brain networks, and most importantly of how current stimulation modifies these networks, remained out of reach because of the difficulty of recording and stimulating simultaneously. Neuroelectrics has developed a wireless hybrid EEG/tCS 8-channel neurostimulator system that allows simultaneous EEG recording and current stimulation, and a relatively new imaging technique called magnetoencephalography (MEG) has emerged as a procedure that can bring new insight into brain dynamics. In this context, our group conducted a successful proof-of-concept test to ensure the feasibility of concurrent MEG recording and current stimulation using Starstim and a set of non-ferrous electrodes (Figure 1).

But first of all, what actually is MEG? Magnetoencephalography is a noninvasive method for recording the magnetic flux from the head surface. This magnetic flux is associated with intracranial electrical currents produced by neural activity (the neural currents are caused by a flow of ions through postsynaptic dendritic membranes). From Maxwell’s equations, magnetic fields are found wherever there is a current flow, whether in a wire or in a neuronal element. Hence, MEG detects the magnetic fields generated by spontaneous or evoked brain activity.

 

fig1.

Figure 1

 

Figure 2

Figure 2

The magnetic fields generated by neural activity are extremely small, about one billion times weaker than the ambient magnetic field of the Earth: on the order of femtotesla (10⁻¹⁵ T) to picotesla (10⁻¹² T). To measure these tiny fields, a magnetic detector (a conductive wire loop) that is sensitive to the magnetic flux passing through it is used. For such weak magnetic fields to induce a current, the loop must have practically no electrical resistance (it must be superconducting), which is achieved by cooling the wires to close to absolute zero. Hence, the magnetometer loops are housed in a thermally insulated reservoir filled with liquid helium (the coldest cryogenic liquid), which keeps them at a temperature of about 4.2 Kelvin (-269 °C). As amplifiers, MEG systems use superconducting quantum interference devices (SQUIDs; loops containing Josephson junctions), which convert the feeble induced currents into high-amplitude voltages. MEG systems are equipped with a head-shaped array of more than a hundred SQUID sensors (Figure 2).

As I mentioned already, brain fields are about one billion times weaker than the Earth’s ambient magnetic field. Cars moving past the building where the MEG system is located generate a larger magnetic field than the brain does, as does the nearby lift, and even people walking up a metal staircase in the building generate a measurable noise signal. Because the magnetic activity of the brain is substantially smaller than the ambient noise, MEG recordings are performed inside a magnetically shielded room (MSR) that isolates them from external magnetic fields. The MSR provides passive shielding of high-frequency noise through a layer of highly conductive metal (typically aluminium). For low frequencies (e.g., < 100 Hz), shielded rooms also have layers of high-magnetic-permeability material (Mu-metal) which, depending on the number of layers, can provide attenuation ranging from a modest factor of 30 at very low frequencies up to 100 or more at higher frequencies.

Recently, MEG has become an important tool in neurological signal processing and functional neuroimaging. During the last decade, an increasing number of studies of language, cognitive functions and brain connectivity have been carried out. The main applications of MEG are clinical investigations (epilepsy; language, somatosensory, auditory, motor and visual area mapping) and cognitive neuroscience research (you can find more information in Hamalainen et al. 1993, Papanicolaou 2009, or on the webpage http://megcommunity.org) (Figures 3 and 4).

 

Captura de pantalla 2015-12-02 a las 16.54.59

Figure 3

 

Figure 4

Figure 4

 

MEG’s high temporal resolution permits assessment on the scale of fractions of milliseconds, and its spatial resolution is excellent too: sources can be localized with millimetre precision. Because magnetic fields are not volume-conducted over the scalp as electrical potentials are (the influence of skull, scalp, cerebrospinal fluid and brain tissue on magnetic fields is very weak), detailed topographical patterns of the magnetic field overlying the scalp can be used to infer the location of activity, enabling an almost undistorted view of brain activity. Therefore, MEG source analysis, overlaid onto the subject’s individual MRI, yields a realistic image of the location of the tissue generating the neurophysiological activity. A major attraction of MEG is that its temporal resolution is limited only by the sampling rate of the electronics, which in MEG systems typically far exceeds the largest bandwidth of interest for brain signals. Using MEG we can measure both DC shifts related to slow polarization of the cortex and high-frequency oscillations and transient spikes. This is in contrast to functional brain imaging techniques such as PET or fMRI, which are based on metabolic/hemodynamic phenomena, have a temporal resolution on the order of minutes or seconds, and are thus incapable of imaging rapidly changing patterns of brain activity.

Back to our proof-of-concept test of simultaneous MEG-tCS recording. We used a whole-head magnetoencephalography system (148 magnetometer sensors; 4D Neuroimaging Magnes 2500WH, San Diego, CA) to record magnetic fields at a 678 Hz sampling rate using the phantom head included with the MEG system. The stimulation test (tDCS) was performed using MRI Sponstim electrodes (Neuroelectrics® sponge electrodes for MRI-compatible stimulation), placed at the phantom’s right-central (C4) and right-posterior (O2) positions. We quantified the noise induced by tDCS relative to sensor noise. As expected, our Starstim stimulator generated significant electromagnetic noise, resulting in FFT power changes of up to 5 dB (relative to sensor noise) on average across the MEG sensor array. Noise was higher at low frequencies (0-2 Hz) than at higher frequencies (2-200 Hz), and it was highest around the stimulating tDCS electrodes while remaining detectable, with varying amplitude, in areas remote from them.

We can compare the noise produced by tCS with the noise produced by metallic interference in clinical studies, which comes either from inside the head, such as implanted intracranial electrodes and dental ferromagnetic prostheses and brackets, or from outside, such as pacemakers and vagal stimulators. It has already been shown that algorithms based on signal space separation (SSS) and blind source separation (BSS) can remove metallic artifacts from MEG signals and can be successfully applied to any MEG dataset affected by such artifacts, allowing further analysis of otherwise unusable or poor-quality data. The goal of BSS algorithms, for instance, is to estimate the original source signals or components from the observed signals assuming a linear mixture model. As shown in our recent study (Migliorelli et al. 2015), this can be done because, although the original source signals and the mixing are unknown, a certain statistical independence between sources can be assumed. Both SSS and BSS algorithms can increase the SNR by approximately 100%. The noise values from our test with tCS are comparable to those produced by a vagal stimulator or a ferromagnetic prosthesis. We therefore believe that the same methods that effectively remove metallic interference from MEG signals can be successfully applied to the reduction or removal of unwanted tCS signal components (as already shown in Soekadar et al. 2013, Garcia-Cossio et al. 2015 and Marshall et al. 2015). Our next step will be an in-vivo study with real subjects to open a window into the brain networks with MEG and simultaneous transcranial current stimulation. This will help in the identification of affected networks and thus guide optimal NIBS stimulation approaches.
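To give a flavour of the BSS idea mentioned above (and only that; the cited SSS/BSS pipelines are considerably more sophisticated), the sketch below unmixes a multichannel segment with ICA, suppresses components whose power is concentrated in the 0-2 Hz range where we observed the strongest tCS-induced noise, and remixes the rest. The threshold and parameters are illustrative assumptions, not values from the cited studies:

```python
import numpy as np
from scipy.signal import welch
from sklearn.decomposition import FastICA

def remove_low_freq_components(data, fs, band=(0.0, 2.0), rel_thresh=0.8):
    """data: (n_samples, n_channels) segment. Returns a cleaned copy in
    which ICA components dominated by `band` power are zeroed out."""
    ica = FastICA(n_components=data.shape[1], random_state=0)
    sources = ica.fit_transform(data)        # (n_samples, n_components)
    for i in range(sources.shape[1]):
        freqs, psd = welch(sources[:, i], fs=fs, nperseg=int(4 * fs))
        low = psd[(freqs >= band[0]) & (freqs <= band[1])].sum()
        if low / psd.sum() > rel_thresh:     # component is mostly 0-2 Hz
            sources[:, i] = 0.0              # suppress artifact component
    return ica.inverse_transform(sources)    # remix remaining sources

# Example on synthetic data: 10 s at 678 Hz (as in our test), 20 channels
# rather than 148 to keep the ICA fast.
rng = np.random.default_rng(0)
fs = 678
data = rng.standard_normal((10 * fs, 20))
clean = remove_low_freq_components(data, fs)
```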

 

Captura de pantalla 2015-10-22 a las 15.42.59

 

 

1. Hamalainen, M., Hari, R., Ilmoniemi, R. J., Knuutila, J., & Lounasmaa, O. V. (1993). Magnetoencephalography: theory, instrumentation, and applications to noninvasive studies of the working human brain. Reviews of Modern Physics, 65, 413-497. doi:10.1103/RevModPhys.65.413

2. Papanicolaou, A. C. (Ed.) (2009). Clinical Magnetoencephalography and Magnetic Source Imaging. Cambridge University Press, 1st edition, 220 pages. ISBN-13: 978-0521873758.

3. Migliorelli, C., Alonso, J. F., Romero, S., Mañanas, M. A., Nowak, R., & Russi, A. (2015). Automatic BSS-based filtering of metallic interference in MEG recordings: definition and validation using simulated signals. Journal of Neural Engineering, 12(4), 046001.

4. Soekadar, S. R., Witkowski, M., García Cossio, E., Birbaumer, N., Robinson, S. E., & Cohen, L. G. (2013). In vivo assessment of human brain oscillations during application of transcranial electric currents. Nature Communications, 4.

5. Garcia-Cossio, E., Witkowski, M., Robinson, S. E., Cohen, L. G., Birbaumer, N., & Soekadar, S. R. (2015). Simultaneous transcranial direct current stimulation (tDCS) and whole-head magnetoencephalography (MEG): assessing the impact of tDCS on slow cortical magnetic fields. NeuroImage. doi:10.1016/j.neuroimage.2015.09.068 [in press]

6. Marshall, T., Esterer, S., Herring, J. D., Bergmann, T. O., & Jensen, O. (2015). On the relationship between cortical excitability and visual oscillatory responses – a concurrent tDCS-MEG study. NeuroImage. doi:10.1016/j.neuroimage.2015.09.069

 

Imagine… Suddenly… Not being able to speak anymore!

0
0


Every 2 seconds someone in the world will have a stroke for the first time. There were almost 17 million incidences of first-time stroke worldwide in 2010. Stroke is one of the leading causes of long-term adult disability.


Are you aware of the impact of post-stroke aphasia?

Post-stroke aphasia accounts for around 85% of all cases of aphasia, is present in 21-38% of post-stroke patients (Laska et al., 2001; Berthier, 2005), and poses a major challenge in neurorehabilitation.

While spontaneous post-stroke aphasia recovery occurs, this largely takes place in the first 2 to 3 months after a stroke with a slower rate and longer progress time compared with spontaneous motor recovery (Sarno and Levita, 1981; Wade et al., 1986). Further, 12% of post-stroke survivors are left with some degree of chronic communication deficit even after vigorous treatment (Wade et al., 1986; Lazar et al., 2010).

Patients with post-stroke aphasia experience longer lengths of stay, greater morbidity and greater mortality than those without aphasia, and therefore incur greater costs (Ellis et al., 2012). Additionally, people with aphasia tend to participate in fewer activities and report worse quality of life after stroke than those without aphasia (Hilari, 2011).

 

 Speeding post-stroke language recovery – the real unmet need!

The aphasic population is heterogeneous, with individual profiles of language impairment varying in terms of severity and degree of involvement across the modalities of language processing, including the expression and comprehension of speech, reading, writing and gesture (Parr et al., 1997; Code and Herrmann, 2003).


Speech and language therapy (SLT) is the most commonly employed treatment in aphasia. Generally, SLT is tailored to meet the individual needs of patients. Nevertheless, its therapeutic effects are quite variable and usually modest (Brady et al., 2012).

 

 The potential of neuromodulation in post-stroke language rehab

Neurorehabilitation with non-invasive brain stimulation techniques (NIBS) ‒ particularly repetitive transcranial magnetic stimulation (rTMS) or transcranial direct current stimulation (tDCS) ‒ may enhance the effects of SLT in selected patients. rTMS and tDCS have shown promise as potential approaches for enhancing aphasia treatment, given the evidence that they modulate neural reorganization after stroke.

Like rTMS, tDCS can alter cortical excitability in predictable ways. However, tDCS is characterized as neuromodulatory rather than neurostimulatory, since the currents delivered during tDCS are not sufficient to directly generate or inhibit action potentials. Instead, tDCS currents modulate neural resting membrane potentials: anodal tDCS (a-tDCS) increases cortical excitability and cathodal tDCS (c-tDCS) decreases it (Nitsche and Paulus, 2000). tDCS can easily be administered during behavioral treatment, and is less expensive and likely to be better accepted by patients than rTMS (Floel et al., 2011; Floel, 2014). Nevertheless, the implications for clinical practice should be ascertained in larger multicentre trials.

 

Neuroplasticity and the concept of interhemispheric inhibition

Language recovery after a stroke depends significantly on the degree of neuroplastic change, which is usually associated with reorganization and reconnection of the lesioned and perilesional dominant-hemisphere regions, acquisition or unmasking of the homologous language area in the non-dominant hemisphere, or activation of the non-dominant cortical region (Hamilton et al., 2011).

As already analyzed by Shah et al. (2013), many studies employing tDCS as a therapy for aphasia have adopted approaches that are broadly consistent with an interhemispheric inhibition model of aphasia recovery. That is, a-tDCS investigations are mainly centered on left-hemisphere language areas in order to increase the excitability of the perilesional and residual fronto-temporal areas (Baker et al., 2010; Fiori et al., 2011; Fridriksson et al., 2011; Marangolo et al., 2013), whereas c-tDCS is generally applied to the right homotopic areas to inhibit overactivation (due to transcallosal disinhibition) in the contralesional right homologs. In a recent Cochrane meta-analysis, Elsner et al. (2013) evaluated five sham-controlled tDCS interventional trials involving 54 post-stroke aphasic patients. Although the studies using tDCS (a-tDCS or c-tDCS) in combination with SLT favored the intervention in each of these five trials (Monti et al., 2008; Floel et al., 2011; Kang et al., 2011; Marangolo et al., 2011; You et al., 2011), the width of the confidence intervals did not allow the results to be generalized. Elsner et al. (2013) did, however, state that when considering only c-tDCS over the non-lesioned hemisphere versus sham tDCS, the effect on naming accuracy rises and the probability of error declines.

 

Further steps towards enhancing our quality-of-life….

Otal et al. (2015) identified and summarized RCTs and randomized controlled cross-over trials assessing the clinical efficacy of NIBS techniques in their inhibitory form (i.e., low-frequency rTMS or cathodal tDCS) over the unaffected, non-language-dominant hemisphere as an adjunct to SLT for post-stroke aphasia rehabilitation. When outcome measures were considered comparable, the authors combined them in an exploratory meta-analysis. This study made it possible to specifically examine the neuroplastic process underlying aphasia recovery in adults under the concept of reducing interhemispheric competition. The results suggest that low-frequency rTMS and c-tDCS over the unaffected, non-language-dominant hemisphere may be a promising approach compatible with the concept of interhemispheric inhibition (for specific meta-analysis details refer to the original article in Frontiers in Human Neuroscience).

 

Take home message

Neuromodulation is promising! Yet further multicenter RCTs with larger populations and homogeneous intervention protocols are required to confirm these findings and the longer-term effects of rTMS and tDCS in post-stroke aphasia rehabilitation.

 

Captura de pantalla 2015-10-22 a las 15.42.59

 

Baker JM, Rorden C, Fridriksson J. Using transcranial direct-current stimulation to treat stroke patients with aphasia. Stroke (2010) 41:1229-1236. doi: 10.1161/STROKEAHA.109.576785

Berthier ML. Poststroke aphasia – epidemiology, pathophysiology and treatment. Drugs Aging (2005) 22:163-182. doi:10.2165/00002512-200522020-00006

Brady MC, Kelly H, Godwin J, Enderby P. Speech and language therapy for aphasia following stroke. Cochrane Database Syst. Rev.(2012) 5:CD000425. doi: 10.1002/14651858.CD000425

Code C, Herrmann M. The relevance of emotional and psychosocial factors in aphasia to rehabilitation. Neuropsychol Rehabil (2003) 13(1-2):109-32. doi: 10.1080/09602010244000291

Ellis C, Simpson AN, Bonilha H, Mauldin PD, Simpson KN. The one-year attributable cost of poststroke aphasia. Stroke (2012) 43(5):1429-31. doi: 10.1161/STROKEAHA.111.647339

Elsner B, Kugler J, Pohl M, Mehrholz J. Transcranial direct current stimulation (tDCS) for improving aphasia in patients after stroke. Cochrane Database (2013). Syst Rev 6:CD009760. doi: 10.1002/14651858.CD009760

Fiori V, Coccia M, Marinelli CV, Vecchi V, Bonifazi S, Ceravolo MG, Provinciali L, Tomaiuolo F, Marangolo P. Transcranial direct current stimulation improves word retrieval in healthy and nonfluent aphasic subjects. J Cogn Neurosci (2011) 23(9):2309-23. doi: 10.1162/jocn.2010.21579.

Flöel A, Meinzer M, Kirstein R, Nijhof S, Deppe M, Knecht S, Breitenstein C. Short-term anomia training and electrical brain stimulation. Stroke (2011) 42(7): 2065-2067. doi: 10.1161/STROKEAHA.110.609032

Flöel A. tDCS-enhanced motor and cognitive function in neurological diseases. Neuroimage (2014) 85 Pt 3:934-47. doi: 10.1016/j.neuroimage.2013.05.098

Fridriksson J, Richardson JD, Baker JM, Rorden C. Transcranial direct current stimulation improves naming reaction time in fluent aphasia: A double-blind, sham-controlled study. Stroke (2011) 42(3):819-821. doi: 10.1161/STROKEAHA.110.600288

Hamilton RH, Chrysikou EG, Coslett B. Mechanisms of aphasia recovery after stroke and the role of noninvasive brain stimulation. Brain Lang (2011) 118:40-50. doi: 10.1016/j.bandl.2011.02.005

Hilari K. The impact of stroke: are people with aphasia different to those without? Disabil Rehabil (2011) 33(3):211-8. doi: 10.3109/09638288.2010.508829.

Kang EK, Kim YK, Sohn HM, Cohen LG, Paik NJ. Improved picture naming in aphasia patients treated with cathodal tDCS to inhibit the right Broca’s homologue area. Restor Neurol Neurosci (2011) 29(3):141-52. doi: 10.3233/RNN-2011-0587

Laska AC, Hellblom A, Murray V, Kahan T, von Arbin M. Aphasia in acute stroke and relation to outcome. J Intern Med (2001) 249(5):413-22.

Lazar RM, Minzer B, Antoniello D, Festa JR, Krakauer JW, Marshall RS. Improvement in aphasia scores after stroke is well predicted by initial severity. Stroke (2010) 41:1485-1488. doi: 10.1161/STROKEAHA

Marangolo P, Marinelli CV, Bonifazi S, Fiori V, Ceravolo MG, Provinciali L, Tomaiuolo F. Electrical stimulation over the left inferior frontal gyrus (IFG) determines long-term effects in the recovery of speech apraxia in three chronic aphasics. Behav Brain Res (2011) 225(2):498-504. doi: 10.1016/j.bbr.2011.08.008

Marangolo P, Fiori V, Calpagnano MA, Campana S, Razzano C, Caltagirone C, Marini A. tDCS over the left inferior frontal cortex improves speech production in aphasia. Front. Hum. Neurosci. (2013) 7:539. doi:10.3389/fnhum.2013.00539

Monti A, Cogiamanian F, Marceglia S, Ferrucci R, Mameli F, Mrakic-Sposta S, Vergari M, Zago S, Priori A. Improved naming after transcranial direct current stimulation in aphasia. J Neurol Neurosurg Psychiatry (2008)79(4):451-3.

Nitsche A, Paulus W. Excitability changes induced in the human motor cortex by weak transcranial direct current stimulation. J Physiol (2000) 527 Pt 3:633-9.

Otal B, Olma M, Flöel A, Wellwood I. “Inhibitory non-invasive brain stimulation to homologous language regions as an adjunct to speech and language therapy in post-stroke aphasia: a meta-analysis”, Frontiers in Human Neuroscience 04/2015; 9(236). doi:10.3389/fnhum.2015.00236

Parr S, Byng S, Gilpin S, Ireland C. Talking about Aphasia: Living with loss of language after stroke. Buckingham: OUP, (1997).

Sarno MT, Levita E. Some observations on the nature of recovery in global aphasia after stroke. Brain Lang (1981)13:1-12.

Shah PP, Szaflarski JP, Allendorfer J, Hamilton RH. Induction of neuroplasticity and recovery in post-stroke aphasia by non-invasive brain stimulation. Front Hum Neurosci (2013) 7:888. doi: 10.3389/fnhum.2013.00888

Stroke statistics, January 2015 – stroke.org.uk

Wade DT, Hewer RL, David RM, Enderby PM. Aphasia after stroke: natural history and associated deficits. J Neurol Neurosurg Psychiatry (1986) 49(1):11-16.

You DS, Kim DY, Chun MH, Jung SE, Park SJ. Cathodal transcranial direct current stimulation of the right Wernicke’s area improves comprehension in subacute stroke patients. Brain Lang (2011) 119(1):1-5. doi: 10.1016/j.bandl.2011.05.002

Part of this work is based on the original manuscript published by the same author on Frontiers in Human Neuroscience in April 2015, Inhibitory non-invasive brain stimulation to homologous language regions as an adjunct to speech and language therapy in post-stroke aphasia: a meta-analysis. The analysis was performed prior to joining the company (NE) without any conflict of interest. This text contains direct paragraphs from Otal et al. (2015) publication.
