
Neuroscience at Burning Man


Copyright© 2016 Toni Riera

“There will be dust”

This year I made one of my dreams come true: I attended Burning Man! For those who haven’t heard about it, Burning Man is a week-long festival where a community of 70,000 “burners” builds a city in a place called “La Playa”, in the middle of a desert about 90 miles from Reno, Nevada. After the festival, the city is dismantled and the desert remains just as it was before, following the “Leave no trace” principle.

What happens during this week is just amazing and hard to put into words. There is art and creativity everywhere. It is surely the place on earth with the highest concentration of interesting people, and where every single person you meet is your best friend.

“Participation” and “Radical Self-expression” are two other important principles of Burning Man. This basically means that there are no invited artists; instead, everyone is welcome to participate and create art, performances and shows. Based on that, I decided to prepare a really fancy dress, obviously including an EEG cap and other interesting components.

After a couple of months designing electronic circuits, including an Arduino, loudspeakers, portable batteries, a tablet, an Enobio and many, many, many LEDs, I was ready to put my fancy dress together. That included a lot of sewing and soldering!

My LED rampage was strong and led me to connect LEDs to my “Playa” boots with pressure sensors: with every step, my boots flashed blue, creating a really neat effect at night. Some more LEDs went on my hat. Some more went around the rims of my bike wheels, changing colour to create a fancy spiral effect. Even more LED strips went on the back of my jacket, controlled by the Arduino to create interesting patterns, and finally two powerful LEDs on the front of my jacket served as driving lights for my night rides.


Copyright© 2016 Toni Riera

In any case, my real performance was based on Enobio and on the concept of EEG sonification. My colleagues and I had already worked on that, but in this case I had some constraints to follow in order to make the performance practical at Burning Man. First of all, I needed everything to be portable and wearable. As Enobio is wireless and quite small, I only needed a tablet with enough power to receive the data and perform the required real-time analysis. I also needed to attach portable loudspeakers; with some velcro, I was able to mount them on my shoulders.

The sonification part was quite interesting too. The generated music was quite psychedelic and interactive: with just a few electrodes, I was able to control quite a lot of different virtual electronic instruments. A posterior electrode mapped alpha power to different chords (minor chords for high alpha, major chords for low alpha, i.e. relaxed/sad chords vs excited/happy chords). The tempo of the rhythm was controlled by my heartbeat, captured simply by placing an electrode on my left wrist. I had different drums, and a random generator switched between beat sounds, creating a quite danceable jungle-style rhythm. As Enobio includes an accelerometer, I was also able to map head movements to a sort of fuzzy, scratchy noise with a rather fancy heavy-metal style. Finally, with an EMG electrode placed on my forearm, I was able to perform “solos”, mapping notes from a musically tuned scale to the energy of my forearm muscles.
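For readers curious how such a mapping might look in code, here is a minimal Python sketch of the alpha-to-chord rule described above; the sampling rate and threshold are hypothetical values I chose for illustration, not those used in the actual performance:

```python
import numpy as np
from scipy.signal import welch

def choose_chord(eeg_window, fs=500, alpha_threshold=2.0):
    """Map posterior alpha power to a chord type (sketch of the rule above).

    eeg_window: 1-D array from a posterior electrode. fs and the
    threshold are assumptions and would need per-wearer calibration.
    """
    f, pxx = welch(eeg_window, fs=fs)
    alpha_power = pxx[(f >= 8) & (f <= 12)].mean()
    # High alpha -> relaxed -> minor/sad chords; low alpha -> excited -> major chords
    return "minor" if alpha_power > alpha_threshold else "major"
```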

This brain sonification, together with all the LEDs, made quite a funny and interesting show, as you can imagine, set against a beautiful desert landscape, amazing sunsets and also a few sunrises.


Burning Man – by Aaron Logan – CC BY 2.0

Burning Man is in itself a mind-blowing experience where your senses are overwhelmed with creativity and art. That alone is already a very interesting experience for the brain. If on top of that you add some artistic application based on EEG, you get quite a nice neuroscientific performance. After my first Burn, I have many new ideas and applications for EEG-based performances, to be implemented some day soon! For the moment I share some nice pictures with you. Thanks for reading and stay tuned!



Black Rock City – by Kyle Harmon – CC BY 2.0


Copyright© 2016 Toni Riera


Copyright© 2016 Toni Riera



Correcting Ocular Artifacts in EEG Signals


All of us who deal with EEG signals know how important artifacts are. EEG is one of the biological potentials with the lowest amplitude (typically a few microvolts), which makes it highly susceptible to contamination by undesired interferences, known as artifacts. Artifact sources can be biological (activity recorded from sources other than the brain, i.e. the heart, muscles or eye movements), electromagnetic (interference from other electronic equipment or mains noise), or derived from head movements and changes in the skin-electrode interface. In general we can manage artifacts in two ways: detecting their appearance and removing the entire contaminated EEG sequence, or cleaning the artifact interference from the recorded EEG signal; the latter technique is known as artifact correction.

One of the most relevant artifacts polluting EEG recordings comes from eye and eyelid movements. The closer to the eyes an electrode is placed, the more it is affected by these ocular artifacts. Among EOG artifacts, the most relevant ones are eye-blinks. These interferences can be on the order of hundreds of microvolts, while artifact-free EEG is on the order of tens of microvolts. Ocular artifacts contaminate the EEG in the band ranging from 0 to 15 Hz, interfering with three of the most commonly used EEG bands: Delta, Theta and Alpha.

There are many techniques to correct artifacts; in this post I introduce three of the most commonly used. None of them is perfect, however, and researchers are still working on mathematical methodologies to robustly subtract these interferences from EEG recordings. To this effect, EEG set-ups usually include electrodes to record EOG signals, typically placed above/below the eyes (vertical EOG artifacts) and next to them (horizontal EOG artifacts). The three techniques below are examples of currently accepted EOG correction approaches; there are many more, such as neural networks or wavelet analysis.


Linear Regression

Linear regression methodologies have been widely used for EOG correction in EEG time series. It is without doubt one of the simplest methods to remove ocular interference. Its main constraint is that it requires the measured ocular interference to be linearly dependent and normally distributed, which in general is not the case. Linear regression coefficients are calculated during a calibration process in which it is assumed that the signal from blinks and eye movements is orders of magnitude higher than the EEG signal and that these interferences can be measured at the EOG electrodes. For each EEG channel, the linear combination of the EOG channels that best explains its ocular interference is calculated; to clean the EEG, this linear combination of the EOG electrodes is then subtracted. One of the main advantages of this method is its online capability; its main disadvantages are that it is sometimes not accurate enough and that, if not well parameterized, it can contaminate the recording.
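As an illustration, a minimal regression-based correction can be written in a few lines of Python with NumPy. This is a sketch assuming the EOG channels capture the ocular activity well, not a production implementation:

```python
import numpy as np

def regress_out_eog(eeg, eog):
    """Subtract the least-squares estimate of ocular interference.

    eeg: array (n_eeg_channels, n_samples); eog: array (n_eog_channels, n_samples).
    In practice the coefficients would be fit on calibration data containing blinks.
    """
    # Propagation coefficients B minimizing ||eeg - B @ eog||^2
    B, *_ = np.linalg.lstsq(eog.T, eeg.T, rcond=None)
    # Remove the estimated ocular contribution from every EEG channel
    return eeg - B.T @ eog
```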

Independent Component Analysis

Blind source separation (BSS) addresses the significant problem of finding a suitable representation of multivariate data. Independent component analysis separates a multivariate signal into additive independent components. This is done by assuming that the components are non-Gaussian signals and that they are statistically independent from each other. ICA delivers, in the best case, as many components as EEG channels, together with the corresponding transformation matrix to obtain these components. When applying ICA to an EEG time sequence, ocular artifacts in general generate their own independent components. These components can be identified through their similarity with the EOG channels. If these components are correctly tagged and removed, we can reconstruct our EEG measurement free of EOG interference. ICA is a popular method for the removal of eye-movement artifacts, and it has received considerable attention due to the fact that it can account for multiple, independent artifact sources. In Kroupi 2011, subspace projection and adaptive filtering EOG correction methodologies are compared. The paper performs a comparative study of the performance of several methods using two measures, namely the mean square error (MSE) and the computational time of each algorithm. According to this study, ICA methods appear to be the most robust, but not the fastest.
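In practice this workflow is readily available in open libraries such as MNE-Python. A sketch follows; the file name and the EOG channel name are placeholders, not values from any particular recording:

```python
import mne
from mne.preprocessing import ICA

raw = mne.io.read_raw_fif("recording.fif", preload=True)  # placeholder file name
raw.filter(1., 40.)                       # ICA behaves better on high-pass filtered data
ica = ICA(n_components=20, random_state=42)
ica.fit(raw)
# Tag components that correlate with the EOG channel, then mark them for removal
eog_indices, eog_scores = ica.find_bads_eog(raw, ch_name="EOG1")  # placeholder channel
ica.exclude = eog_indices
raw_clean = ica.apply(raw.copy())         # reconstruct the EEG without EOG components
```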

Principal Component Analysis

Principal component analysis (PCA) methodologies use an orthogonal transformation to transform an EEG sequence into a set of linearly uncorrelated signals, known as principal components. The procedure is similar to that of ICA: we identify which components correspond to ocular artifacts and reconstruct the EEG with these components removed.
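A sketch of this procedure with scikit-learn, under the assumption that the ocular components dominate the variance (in practice they should be verified, e.g. by correlation with the EOG channels):

```python
import numpy as np
from sklearn.decomposition import PCA

def pca_correct(eeg, n_artifact_components=1):
    """eeg: array (n_samples, n_channels). Zero the highest-variance principal
    components, here assumed to carry the ocular artifact, then reconstruct."""
    pca = PCA()
    components = pca.fit_transform(eeg)
    components[:, :n_artifact_components] = 0.0  # drop the tagged components
    return pca.inverse_transform(components)     # back to channel space
```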


Enobio in outer space: a step closer

Enobio in outer space. This is a thing I personally would love to see happen. Our device has already been used in a lot of different scenarios, from the classical ones for this type of device, like research labs or hospitals, to more “exotic” ones, like parabolic flights, art exhibitions, festivals, or monitoring top athletes in action. Outer space might be the next one. At least, we are a step closer thanks to an ESA-funded project to be conducted in the following months.
In this project, basic neuroscience, applied neuroscience technology and spaceflight technology will be brought together by three highly specialised European partners: the UCL Institute of Cognitive Neuroscience from the UK, Starlab Barcelona from Spain and the DLR Institute of Aerospace Medicine from Germany.
The project title, Lost in Space, fits the phenomenon to be investigated quite well. Sensory and cognitive disturbances have regularly been reported in high-performance flights. Among them, the symptoms that astronauts and high-performance flight pilots may exhibit when exposed to gravitational force (g+) are of greatest concern. They include symptoms of depersonalisation and derealisation (DD). When this happens, pilots and astronauts perceive the surrounding environment as not real. This feeling of ‘unreal’ distancing from the external world can be profoundly disturbing. Imagine you cannot recognize your own body as yours, or your perception of what you are seeing or hearing is processed by your brain as something that does not match a real situation.

The origin of such profoundly unsettling feelings lies in a conflict between the gravity information reported by the vestibular system and what is received from the other sensory systems.
As a result of all these symptoms, the pilot’s ability to control the aircraft can be temporarily impaired, so lives can potentially be placed at risk. By addressing this issue, positive and conclusive results in this project will lead to an improvement in the safety conditions for this kind of flight. In a more general sense, it is expected that, by the end of the project, the current scientific knowledge about how gravity influences the relation between an organism and its surrounding environment will be extended.

To replicate this conflict between the vestibular and other sensory systems in an Earth-based research facility, a short-arm human centrifuge (SAHC) will be used to deliver artificial gravity to a group of healthy participants. They will sit in the SAHC, which will spin at 45 rpm to deliver 1 g of artificial gravity at head level, for around 15 minutes. During this time their EEG will be recorded by a 32-channel Enobio while they perform a vestibular spatial judgement task. Pairs of consecutive scenes, with slight perspective changes from one to the other, will be presented. Participants will have to judge whether the second scene is congruent with the predictions based on the change in perspective during the delay period between the scenes and the intervening motion they are sensing.

The answers provided in this task will produce a performance curve showing the ability to match the vestibular signals induced by the SAHC with the visual differences between the scenes. When the vestibular-visual matching is poor, a drop in task performance should appear. The processing of the EEG signals recorded by Enobio will hopefully lead to physiological markers of that vestibular-based uncertainty: the evoked potentials caused by the two consecutive scenes will be extracted and compared with those produced in baseline conditions (i.e. trials without artificial gravity). Special attention will be given to measuring differences in the P300 ERP component, as differences in the amplitude and latency of this component are associated with the brain’s ability to respond to mismatched stimuli.
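A sketch of how such a P300 comparison could be set up in MNE-Python; the event codes and time window are assumptions for illustration only, and a preloaded `raw` object plus an `events` array are assumed to exist:

```python
import mne

# Hypothetical event codes: 1 = baseline trials, 2 = artificial-gravity trials
epochs = mne.Epochs(raw, events, event_id={"baseline": 1, "gravity": 2},
                    tmin=-0.2, tmax=0.8, baseline=(None, 0), preload=True)

for label in ("baseline", "gravity"):
    evoked = epochs[label].average()
    # Peak amplitude and latency within a typical P300 window (assumed 250-500 ms)
    ch, lat, amp = evoked.get_peak(ch_type="eeg", tmin=0.25, tmax=0.5,
                                   return_amplitude=True)
    print(f"{label}: P300 peak at {ch}, {lat:.3f} s, {amp * 1e6:.1f} uV")
```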

A practical outcome of the project is that the information from these individual signatures might help in both personnel selection and pilot training. Those who can better resolve vestibular-visual mismatch could prevent sensory-motor errors during flight and better guarantee mission safety.

If you want to know the actual results of the experiment, just stay tuned: in a couple of months we are going to spin Enobio around.

The rise and decline of EEG: where are we now?


Electroencephalography (EEG) is a brain-state monitoring technique that allows for the recording of neural activity non-invasively. Since the first human EEG recording in 1924 by Hans Berger, the technique has come into widespread use in clinical diagnosis and in the research community [1] (besides other not-so-conventional uses, see this post by Alejandro Riera). Widespread? EEG has not been equally attractive or useful since its discovery: its clinical and experimental usage reached a high point around 1970-1980. After that, the community seemed to lose interest in the method, and it was not until the start of the 2000s that the downhill trend stabilized.

Upon its discovery, the analysis of EEG was based on the manual tracing and observation of the oscillatory activity. Through visual inspection, researchers and clinicians had to infer diagnoses and the brain mechanisms that explained population trends. But as soon as the technology allowed, the analysis and processing of EEG signals shifted towards automation. For instance, the first ink-writing amplifier was developed in 1932, and from then on, tools for the automatic monitoring and analysis of neural signals have only improved. During this time the processing power of computers also started increasing, and the use of signal processing transformations like the Fourier or Hilbert transforms and event-related potentials (introduced in this post) spread among the EEG community. However, sooner rather than later, researchers and clinicians realized that EEG is far too complex and variable for use in clinical environments with such rudimentary techniques. By the 80s, experimentation with EEG started to drop, and the automation that was expected to enter clinical procedures never succeeded.

While all this information can be gathered from diverse books and articles [1 for instance], these trends and changes of attitude towards EEG can now be quantified through computational linguistics. A particularly interesting tool for that is the Google Ngram database, which contains the frequency count of words from books published between 1500 and 2008 in diverse text corpora (English books, French, German…). By analyzing the English corpus, we can see that the aforementioned rise and decline of EEG is reflected in the change in bibliographical references to this technique, visualized in Figure 1. After the 80s, EEG saw a general disinterest reflected by a huge decrease in bibliographical references, reaching in 2008 the allusion levels observed in 1968 (see Figure 1, adapted from Google Ngram Viewer; a similar trend is observed for the keywords ‘electroencephalogram’, ‘electroencephalography’ and analogous words).
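For the curious, such counts can also be pulled programmatically. A sketch with Python’s requests library against the JSON endpoint used by the Ngram Viewer web page; note this is an unofficial, undocumented endpoint (an assumption on my part) that may change at any time:

```python
import requests

resp = requests.get(
    "https://books.google.com/ngrams/json",  # unofficial endpoint behind the viewer
    params={"content": "EEG,electroencephalography", "year_start": 1920,
            "year_end": 2008, "corpus": 26, "smoothing": 3},
)
for series in resp.json():
    # Each entry carries the ngram and its yearly relative frequencies
    print(series["ngram"], series["timeseries"][-5:])
```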

Figure 1. Prevalence of the text keywords in the Google books database (in ‰, adapted from Google Ngram Viewer)

To understand the decrease in the prevalence of EEG in the bibliography, consider that, despite the efforts over these decades, the automation of EEG analysis during the 70s-80s was limited to the frequency domain. Machine-learning approaches were in their infancy (see Figure 1), as was the computational power available on the computers used for that analysis. But this has been changing: as our mobile phones reach a computing power similar to 2010 laptops, the computing power and methodology associated with the analysis of EEG evolve. Nowadays, the use of machine learning and computational intelligence approaches for the analysis of EEG is becoming prominent in the neuroscience research community (as already introduced by Aureli Soria-Frisch in this post). The methodology applied to EEG analysis is finally starting to consider the broad spatio-temporal complexity of the signals and, in turn, the complexity of the underlying brain.

However, a rise in the use of EEG in clinical settings is yet to be seen, as many of these state-of-the-art methodologies do not seem to be reaching clinicians. Can we change the course of this stagnant presence of EEG in the bibliography? Can we extend the applicability of EEG in clinical settings?

There is still lots of work to do, involving research and development in the medical and clinical domains, but we will be watching closely and working to see how far we can bring EEG towards benefiting clinical practice.


References:

  1. Niedermeyer, E., & da Silva, F. L. (Eds.). (2005). Electroencephalography: Basic Principles, Clinical Applications, and Related Fields. Lippincott Williams & Wilkins.
  2. National Clinical Guideline Centre (January 2012). The Epilepsies: The Diagnosis and Management of the Epilepsies in Adults and Children in Primary and Secondary Care. National Institute for Health and Clinical Excellence. pp. 21-28.

From Barcelona to the world: Neuroelectrics in Boston

Boston skyline (Creative Commons license)

It has already been two years since we landed in the US to open our new offices in Cambridge (Massachusetts). We chose the Boston-Cambridge environment because of its vivid ecosystem of talent, industry and top institutions working in neuroscience research.

There are many research groups working in the field of noninvasive brain stimulation and neuroimaging in the Boston area, spanning from technology development, through basic neurobiologic insights from animal studies and modeling approaches, to human proof-of-principle and multicenter clinical trials.

For example, some research groups at Harvard Beth Israel Deaconess Medical Center use non-invasive brain stimulation techniques to understand the mechanisms that control brain plasticity across the life span to be able to modify them for the patient’s optimal behavioral outcome, prevent age-related cognitive decline, reduce the risk for dementia, and minimize the impact of neurodevelopmental disorders.


MIT Killian Court, © Justin Jensen

Also on the clinical side, the Neuromodulation Lab at Massachusetts General Hospital aims to understand how the brain’s structure and function affect disease and how interventions such as transcranial Direct Current Stimulation (tDCS) can change the mechanisms that contribute to neurological diseases such as schizophrenia, depression or bipolar disorder.

Another group at Massachusetts Eye & Ear Infirmary is also using electroencephalography (EEG) and tDCS to develop an assistive technology for the blind and understanding how the brain adapts to the loss of sight, both from ocular and brain-related causes.

On the more basic research side, some investigators at Massachusetts Institute of Technology (MIT) also use transcranial current stimulation to investigate how categories, concepts, and rules are learned by the human brain, how attention is focused, and, more generally, how the brain coordinates goal-directed thought and action.

Other groups at Harvard University are also using non-invasive brain stimulation and Magnetic Resonance Imaging (MRI) to investigate the factors that shape individual differences in self-control, which determine why some people are really good at flexibly adapting their behavior, forgoing short-term rewards to maximize long-term gains, while others appear to be so bad at it.

Boston is also a great place for startups seeking their first funding, either through venture capitalists or through grants awarded by private foundations or government funding. As an example of this welcoming environment that attracts foreign companies, Neuroelectrics recently received financial support from Massachusetts Life Sciences Center (MLSC), a governmental agency that fosters life sciences innovation, research, development and commercialization in the area of Massachusetts.

As a medical device company with a research background, we have always believed in the scientific method to develop our products: that’s why we have always partnered with the best research groups to understand scientific needs and goals, and to build upon that evidence. We believe this type of industry-academic collaboration is what really pushes the limits of our knowledge about how the brain works and how to interact with it. And that is why we decided to open our new offices in Boston, a meritocratic ecosystem where academic institutions and startups have the amazing opportunity to work together and get support from governmental and private institutions.


Skiing for Research


As soon as the weather in the mountains dips below 0 degrees, I look forward to one of my favourite times of the year: winter. In Barcelona it is not easy to find snow; however, only 200 km from the city, in the Pyrenees, you can easily find white powder. With winter comes the snow, and most importantly skiing, especially my favourite: downhill skiing.

This time, I opted for something a bit different and decided to record my EEG while skiing using Enobio, our wearable, lightweight and wireless EEG system. Our recent developments in mobile EEG technology provide an unprecedented opportunity to move EEG from the lab into the real world. Here comes Enobio, with the mission to measure brain function during motor activity and reveal the brain mechanisms underlying sport performance.


We have to keep in mind that real-world EEG data collection creates a number of specific requirements, including portability of the equipment, ease of application and the ability to effectively handle motion artifacts. The conditions in the Pyrenees were typical of the winter season: -7 °C, an altitude of more than 2100 m (Grandvalira, Pyrenees, Andorra) and a lot of snow. For the EEG setup and recording, an Enobio 8 monitoring system was used, configured in EEG Holter mode (wireless, and thus without restrictions on the range and type of movement), with an incorporated 3D accelerometer (important for detecting every movement onset), using 8 Geltrodes (placed at F7, C3, PO7, F8, C4, PO8, Fz and Cz) in an NE headcap, with a single reference electrode attached to the right mastoid.


First, the configuration and data-quality check were performed using the NIC software on a Mac OS computer (via Bluetooth transmission). Once Enobio was ready to acquire my brain signals, Holter mode was turned on (recording to a microSD card) and I was ready to go skiing. The whole set-up in these conditions took me no more than 8 minutes, taking into account that it was performed in a ski locker at the top of the mountain.


Before going into the neuroscience, let’s first recall the basic physics of skiing. The skier gains speed by converting gravitational potential energy (mgy) into kinetic energy of motion (½mv²). So the further a skier descends down a hill, the faster he goes. A skier following a straight line down will reach the maximum speed for the slope. Alpine skiing, however, consists of using turns to smoothly steer the ski from one direction to another and control the speed of descent. Successful motor execution in alpine skiing involves three basic aspects: applying the right measure of strength, a high degree of coordination and exact timing. Additionally, the whole process must be coordinated by billions of cortical neurons, motor neurons at the spinal level, and more than 600 muscles with 240 degrees of freedom! In the cortex, the area of excitation indicates the direction of the muscular action, whereas the excitation size and the firing rate encode the force.
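A back-of-the-envelope calculation makes the energy argument concrete; here is a small Python sketch of the frictionless upper bound on speed:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def max_speed(drop_height_m):
    """Frictionless bound: m*g*y converted entirely into (1/2)*m*v^2, so
    v = sqrt(2*g*y). Real skiers are slower due to drag, friction and turns."""
    return math.sqrt(2 * G * drop_height_m)

print(f"{max_speed(200.0):.1f} m/s")  # ~62.6 m/s for a 200 m vertical drop
```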


In my trial, the EEG was monitored continuously while I was skiing on slopes at an altitude of between 2300 and 2100 meters. The signal varied in quality depending on my skiing activity. While stationary and while gliding downhill, the signal was of very high quality. When actively alpine skiing, the signal showed artifacts in proportion to the degree of muscle movement, yet remained of high quality and within an acceptable range. Moreover, new data-driven analysis approaches allow the separation of signals arriving at the sensors from the brain and from non-brain sources like neck muscles, eyes, heart, and the electrical environment. For instance, independent component analysis (ICA) and related blind source separation methods have proven effective for separating brain from non-brain activity in electrophysiological data recorded during experimental paradigms. Studies designed to measure brain activity during sport performance have, to date, been limited, but with Enobio, issues of equipment portability and movement artifacts are no longer a significant impediment!


Mobile EEG in a skiing context. EEG traces (Ch1-Ch8) with accelerometer (ACC) indicating movement onset. The EEG data shown were recorded during alpine skiing (movie above), providing an illustration of data quality during performance.

The best tech conference ever: Near future summit!


After years as an entrepreneur and after attending numerous conferences, I have to admit that the conference that visionary Zem Joaquin and her team (thanks also to James Joaquin & Obvious for the support) put together at La Jolla last March was, without hesitation, the best one in my experience. 250 of the most forward-thinking minds in the world met to SHARE THEIR VIEW ON THE FUTURE and, what is most important, to DO something about it.

I am going to try to give you a sense of what was discussed (sorry, I cannot discuss all the talks in any depth!), although I am sure it will not be quite the same as being there. But let’s give it a try!

We started with the “know your cell” session (brilliantly curated by Peter Diamandis). Can you imagine a top scientist like George Church talking about gene sequencing and gene editing, and how, for example, CRISPR will make it possible to find better therapies for certain diseases?

One of the most fascinating areas discussed (we were even able to visit on site at Calit2-UCSDiego-UCIrvine) is the work of Larry Smarr & Dr. Robert Knight on the microbiome. It is just unbelievable to learn how important your “p..hh” is and how it may help you not only to identify health risks but actually to cure different pathologies. Some of the attendees were even able to provide their own samples to populate the database ;-)

It seems the whole healthcare field is moving steadily towards prevention, and genetics and microbiome research are just two examples of how science and technology are rapidly evolving. The Health Nucleus project is even transforming prevention into an experience and a clinical research project. Powered by Human Longevity, they are offering the advanced and comprehensive health evaluation that we had the chance to visit as well: spectacular installations and cutting-edge technology to scan you in every possible way in eight hours and finally provide a report on where your health risks may be.

One of my favorite sessions was on food and just how much innovation is needed in that field. Given the lack of sustainability of the current food industry and the harm being done to our planet, these brilliant minds are committed to changing what we eat, how we grow food and how we cook it. From something as simple as a solar BBQ by Dr. Catlin Powers @onearthdesigns (in fact, saving the lives of women in the Himalayas), to bringing farmers and community back to the center of food in The Kitchen project (Kimbal Musk), to artificially grown hamburgers (Memphis Meats), to Caleb Harper at MIT developing what he calls a food computer, growing delicious, nutrient-dense food indoors anywhere in the world. Mind-blowing!

Health and food may surely have an impact on how long we live, and indeed the next session focused on how to help us live longer and better. One of the most respected entrepreneurs in the USA, Dean Kamen, talked not only about what Deka Research is doing to improve the lives of people with chronic diseases (such as dialysis at home, or reinventing the wheelchair by self-balancing it to allow the user to go up and down staircases), but also about his most passionate project, FIRST, aimed “to create a world where science and technology are celebrated… where young people dream of becoming science and technology heroes”.

To live longer we may have to shower less (tell my kids about it), and James Heywood (PatientsLikeMe entrepreneur) described his new venture, ABiome, on getting blessed mother dirt (yes, dirt) back on us, in the form of a spray restoring and maintaining beneficial bacteria in our skin. Seems like when your kid does not want to shower, there may be a good reason for it ;-)

As if our minds were not challenged enough, William McDonough talked about the Cradle to Cradle movement and how it is influencing the whole architecture field, creating a new paradigm for how we make things. Shifting the mindset from scarcity to abundant ways of creating and producing is the only way to protect our planet and ensure sustainability.

There was also time to discuss collaborative approaches, like the new venture from Linda Avey (co-founder of 23andme), We Are Curious, a new personal-data aggregation and tracking platform, and Thomas Goetz, founder of Iodine, a company that gives consumers better information, and better visualizations, of their health data.

There was time to relax and listen to good music (thanks Michael Franti for being so inspirational!), and Neuroelectrics was honored to speak on the panel that closed the event.

Our advisor Adam Gazzaley at UCSF (he is the real reason we were invited, thanks Adam!) talked about how video games are no longer just games and how they may help to prevent cognitive decline and enhance memory. New healthcare therapies will have to be combined treatments, no longer siloed approaches: games, electrophysiology, drugs and cognitive therapies will all be combined to interact with the brain from a dynamic, interactive perspective.

During my talk I discussed how the novel technologies developed at Neuroelectrics, allowing both brain monitoring and brain stimulation, may have a great impact in the near and far future. In the near term: the use of Starstim in closed-loop applications like epilepsy (diagnosing a seizure and preventing it from happening by injecting low-voltage currents into the brain) or cognitive decline (by combining EEG, stimulation and video games such as the ones developed by Akili, Adam’s company), to be taken home. In the future: this technology may even change the way we communicate with humans, by allowing brain-to-brain communication. Our team’s published paper describing the first time two brains were connected, from India to France, was explained during the presentation (with my big helper ET, who lost his cove ears during one of the parties).

As mentioned at the beginning, I have not been able to go through all the talks: there were astronauts, cooks, entrepreneurs of all kinds, venture capital leaders, musicians, scientists, journalists… above all, a unique mix of people with exceptional clarity on what the NearFuture will look like, pioneering their fields to bring that future to us. Thanks Zem for letting Neuroelectrics be part of the NearFuturist community!

(More info on each of the modules will be featured at Medium, so stay tuned at https://medium.com/near-future )

2nd Indian BCI and Neurostimulation Workshop and Malaysian tour


Before starting this post, I would like to share with you my own definition of Brain Computer Interfaces (BCI):

A system that uses some type of brain signal, analyzes this signal to extract features, and then uses some type of classifier to make a decision based on these features.

As you can see, this definition is very broad and I am sure some BCI purists will not like it. In any case, it covers a lot of systems, not only traditional EEG-based BCIs. For instance, we can use MRI, MEG or even deep-brain electrodes as the brain-signal acquisition system. The system does not have to work in real time, either: a system that goes through your brain activity and is able to detect whether you are suffering from a neurological condition would be a BCI to me. Biometric, emotion-detection and workload-monitoring systems would also be BCIs, as long as they work with brain signals. So please keep this in mind when talking about BCIs: the term includes much more than we usually think!
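To make the definition concrete, here is a minimal sketch of the signal-features-classifier chain, with synthetic data standing in for real EEG epochs (all numbers are arbitrary placeholders):

```python
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
epochs = rng.standard_normal((100, 8, 500))  # trials x channels x samples (synthetic)
labels = rng.integers(0, 2, size=100)        # two hypothetical mental states

def band_power(epoch, fs=250, lo=8, hi=30):
    """Feature extraction: mean 8-30 Hz power per channel."""
    f, pxx = welch(epoch, fs=fs, axis=-1)
    band = (f >= lo) & (f <= hi)
    return pxx[..., band].mean(axis=-1)

X = np.array([band_power(e) for e in epochs])
clf = LinearDiscriminantAnalysis().fit(X[:80], labels[:80])  # the "decision" stage
print("held-out accuracy:", clf.score(X[80:], labels[80:]))  # ~chance on pure noise
```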

The second edition of the Indian BCI and Neurostimulation Workshop took place last October in Bangalore. It was a two-day meeting that gathered over 60 neuroscientists from all around India. It was a real pleasure to see how fast neuroscience research in India is growing, and I am sure that very good publications and results will soon come from Indian research groups.

The workshop was organized by Neuroelectrics together with ITIE, our local Indian partner. Most of the talks were given by Dr. Alejandro Riera, but in this edition several Indian groups also presented and shared their research. Just to mention a few:

  • Dr. Mahesh and Ms. Anand from Saint John Research Institute (Bangalore) presented their results with Event Related Potentials using Enobio. They extracted Mismatch Negativity (MMN) and P300 potentials from children and infants to study malnourishment, among other fields.
  • Mr. Sandeep Bodda from the Computational Neuroscience Lab, Amrita School of Biotechnology, Kerala, presented his interesting work with a robotic arm controlled by a motor-cortex BCI. They have just started to use the 32-channel Enobio and will hopefully start to get very good results soon.
  • Mr. Sanjeev Kubakaddi from ITIE Knowledge Solutions (Bangalore) presented his company and their research projects. He also gave a very impressive demo of a wireless, portable ECG/EMG device that is able to transmit data via RF to a computer located several kilometers from the subject.
  • Mr. Appaji from the Dept. of Medical Electronics of BMS College of Engineering, Bangalore, presented, in a very interesting talk, several ideas, projects and ethical issues related to Brain Computer Interface research, such as workload, stress and drowsiness monitoring.


Before this workshop, I had the chance to organize three other workshops in Malaysia, where I was also impressed by the interest in neuroscience: very strong research groups are working hard in this interesting and growing field. Together with Ms. Li Hun Tan from H&A Medical Supply, we visited several universities and research centers.

Neuroscience in Asia is going strong, and I am very happy to collaborate with so many different research centers. I am looking forward to continuing to work with them in this very interesting field. Who knows, maybe it is time to move to Asia! Thank you for reading and see you next time!



Stimulus Presentation solutions for your EEG experiments


Neuroscientists usually need to present stimuli and monitor behavior while recording brain activity. Several software solutions can be found out there that meet the minimum requirements an experiment might have in that regard. I am compiling and commenting on some of them here, and I would very much appreciate your comments in case I have missed any.

Presentation

Presentation and E-Prime are certainly the most popular stimulus presentation software packages used in labs. Presentation offers a large, comprehensive scripting language which allows you to fully program your experiment. Its intuitive user interface is also very convenient for configuring and testing the hardware that might be involved in the campaign. Access to the serial and parallel ports is provided, which is useful when communicating with other programs to synchronize the stimuli and the recorded electrophysiological signals through hardware triggers.

They provide a database of example scripts, which makes the language easy to learn and is also large enough that you can usually find something to start from when coding your own experiment.

Its extension manager allows you to add functionalities on the fly. I have found the Lab Streaming Layer extension especially useful. As I explained here, it allows the synchronization of experiments via software, with the only requirement being that the applications to be synchronized are on the same network.

In case you need to integrate Presentation functionalities into your own program, they offer an API which can be called from C/C++, Matlab, Python and others.

E-Prime

E-Prime’s main difference from Presentation is the way experiments are programmed. E-Prime offers a drag-and-drop graphical interface which can be very convenient if you do not feel confident about your programming skills. The underlying scripting language, which is very similar to Visual Basic, can also be accessed to design more complex experiments.

Like Presentation, E-Prime seamlessly integrates with fMRI devices by synchronizing with their scanner trigger pulses. In addition, E-Prime also integrates other hardware like the Tobii eye tracker and the eye gaze and eye movement sensors from SMI.

The timing accuracy of both Presentation and E-Prime is solid for precise timing experiments (within ms accuracy). However, E-Prime has a bit of a learning curve before proper accuracy is achieved, since the stimulus pre-loading functionality needs to be considered so as not to alter the timing of the paradigm.

Regarding synchronization with other applications, E-Prime supports a TCP/IP link which, as discussed before, might not be enough depending on the precision the experiment needs. The Lab Streaming Layer is not natively supported.

Like Presentation, E-Prime is designed to run on the Windows operating system.

PsychoPy

PsychoPy is a free, open-source alternative to Presentation and E-Prime in case you do not want (or cannot afford) to buy licensed software. Like E-Prime, it offers an intuitive experiment builder to define the experiment’s workflow. The experiment scripts can also be edited directly in Python, which might be very convenient for neuroscientists already familiar with such a popular language. By using Python as its native scripting language, there are no restrictions on developing or deploying the experiment on Windows, Mac, Linux or other platforms.

Apart from the PsychoPy standalone version, it is also possible to use PsychoPy as a library which your projects can call from your own Python environment.

To use the Lab Streaming Layer in PsychoPy for robust synchronization across multiple computers, you can find ports of that synchronization library to Python, like this one; a small example of the combination is sketched below.
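Here is a minimal PsychoPy script that draws a stimulus and pushes a Lab Streaming Layer marker at flip time. It assumes the pylsl package is installed; the stream and marker names are arbitrary placeholders:

```python
from psychopy import visual, core
from pylsl import StreamInfo, StreamOutlet

# One-channel string marker stream that an EEG recorder on the network can pick up
outlet = StreamOutlet(StreamInfo("Markers", "Markers", 1, 0, "string", "psychopy_demo"))

win = visual.Window(fullscr=False)
stim = visual.TextStim(win, text="+")
stim.draw()
win.flip()                          # stimulus appears on this screen refresh
outlet.push_sample(["stim_onset"])  # timestamped marker for later alignment
core.wait(1.0)
win.close()
core.quit()
```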

Psychtoolbox

Psychophysics Toolbox is a free set of Matlab functions whose main purpose is to provide an interface to the computer hardware to achieve proper synchronization when displaying visual stimuli or playing audio ones. Since no graphical user interface is provided, good Matlab skills are recommended if you decide to go for it. If you cannot access a licensed Matlab, the Psychtoolbox functions can also be used in GNU Octave.

Paradigm

Paradigm is commercial software with a similar approach to PsychoPy. It is a Python-based solution providing both access to script programming in Python and a very simple user interface.

Paradigm has added value over PsychoPy in its integration with several devices, including Enobio, to which it sends event triggers during the experiment. It also supports running experiments on mobile platforms running iOS; however, you need to design and develop the experiment on a Windows computer, since Mac is not supported yet.

Inquisit

Inquisit might be your choice if you are looking for a solution that works on both Windows and Mac. This commercial software provides quite a large library of examples and experiments, which will speed up your development. To program your experiment, a proprietary scripting language is provided. Its syntax is very similar to other imperative programming languages like Java, JavaScript, C#, PHP or C. If you are not familiar with any of those languages or have poor programming skills, the learning curve might be steep.

Like Paradigm, Inquisit experiments can run on iOS devices through its special Inquisit Web license, which allows deploying the experiment remotely on the participants’ own devices. The collected data can later be accessed through the web too.

PEBL

PEBL is an open source GPL-licensed software. Its name stands for The Psychology Experiment Building Language. It provides a cross-platform solution, a scripting language to write your experiment’s workflow and a mailing list where you can find support from the community.

Custom solutions

As long as your development platform gives you access to computer hardware resources like video and audio cards and input and output ports, you can develop your own stimulus presentation application. For instance, when conducting neurobehavioral experiments in the field of gaming, platforms like Unity or Superlab may be programmed in such a way that they present stimuli and record responses in a synchronized manner.


So, what is finally an emotion?


We all feel happy when something nice happens in our life, we cry when we lose someone we love, we feel fear when we perceive that our life is in danger. But where do all these processes come from? What happens inside us when we feel this way or another? Emotion is one of the most controversial topics in psychology, a source of inspiration, intense discussion and controversy among intellectuals and artists across the centuries. Many researchers believe that emotions are part of human evolution and contribute to personal and general awareness as they facilitate social communication. All modern theorists agree that emotions influence what people perceive, learn and remember, and that they play an important role in personality development and communication.

Emotions are powerful enough to affect and determine social relationships and interactions, memory and creativity, and to influence the mechanisms of rational thinking and decision making. A. Damasio [1] describes how a patient who had a normal intelligence score but lacked certain emotions acted irrationally and was unable to make proper decisions. He beautifully argues that, without emotions, humans would need a huge amount of time to compute the gain of every response option and act according to the one that gives the highest gain in a given situation. Thus, emotions do not impair rationality, as people used to believe; instead, they help us interact and make decisions in an intelligent way.

Emotion analysis is a challenging topic in various research fields such as psychology and neuroscience. Many researchers across the world aim at understanding basic human functions, such as brain-body function and how it relates to emotional processes. In medicine, such research outcomes may provide indicators of certain disorders: a pathological condition may reveal differences in affective patterns, which could help to identify it at an early enough stage to intervene.

Interestingly enough, various theories on what emotion is and where it emanates from have been proposed by respected researchers around the world. Each of these emotion theories has its own set of assumptions about the nature of emotion and is followed by related experimental research. Although there is overlap among them, each one distinctly presents an account of what emotion is and of its basic mechanisms. For instance, according to the social constructivists, emotions carry a cultural connotation and are seen as products of learned social rules. As social products, then, emotions can be understood through social analysis. Darwin, on the other hand, relates emotions to evolution and argues that they exist for survival purposes [2]. In this context, certain emotions have dominated throughout time, as they have served to face survival-related issues. According to this perspective, humans experience similar emotions because they have gone through similar survival-related issues, some of which are shared with other closely related species. Thus, Darwin opposes the previous theory, arguing that it is not society that has provoked emotions, but the survival instinct, which happens to be common among species with similar societies.


Photo Credit: Totallifecounseling.com

Then comes James, who agrees with Darwin but, instead of focusing on emotional expression, focuses on emotional experience. James stresses the importance of bodily changes during emotional processes. He argues that bodily responses initiate emotions, and not the other way around, since the former have to adapt automatically and fast enough to environmental changes. James says, “we feel sorry because we cry, angry because we strike, afraid because we tremble”, and not the other way around [3]. The similarity with the Darwinian perspective lies in James’s claim that we experience emotions because our bodily responses automatically change and/or adapt fast enough, depending on the survival-related significance of various situations.

In neuroscience, emotions were initially thought to be related to the limbic system of the brain, which, among other brain structures, includes the hypothalamus, the cingulate cortex and the hippocampi. The amygdala, located near the temporal lobes, has also been related to emotions, specifically to fear. The prefrontal cortex is located in the frontal part of the brain and is related to emotional decision making. Frontal-lobe asymmetric activity is highly associated with approach and withdrawal emotional processes: one has higher right frontal activity (compared to left frontal activity) when one wants to avoid a stimulus or situation (withdrawal), and higher left frontal activity (compared to right frontal activity) when one wants to engage with a stimulus or situation (approach). All these findings support the hypothesis that emotions influence, and are influenced by, bodily changes; however, causality has still not been proven. The truth is that for many years scientists tried to find causal relationships in various natural processes, but lately they mainly focus on synchronization across natural phenomena rather than on causal relationships.
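The frontal asymmetry just described is commonly summarized with a simple index: the difference in log alpha power between homologous right and left frontal electrodes (remembering that alpha power is inversely related to cortical activity). A minimal Python sketch, with an assumed sampling rate:

```python
import numpy as np
from scipy.signal import welch

def frontal_alpha_asymmetry(left, right, fs=500):
    """ln(alpha power, right) - ln(alpha power, left), e.g. for F4 vs F3.
    Higher values imply relatively greater left activity (approach);
    fs is an assumed sampling rate, not from any specific device."""
    def alpha_power(x):
        f, pxx = welch(x, fs=fs)
        return pxx[(f >= 8) & (f <= 13)].mean()
    return np.log(alpha_power(right)) - np.log(alpha_power(left))
```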

 To bridge the gap between bodily changes and perceived environmental changes, another group, namely cognitive theorists, argues that emotions depend on appraisals, processes by which perceived environmental changes or events are judged as good or bad. As a consequence, every emotion experienced is a result of a specific cognitive appraisal, which informs the organism about the environment, and helps to adapt the emotion in order to be ready and able to act. Thus, in the cognitive perspective, thought and emotion are inter-related. Cognitive appraisals actually describe the personal significance of an event, as opposed to the social constructivist approach.

Hacker points out the importance of discriminating between processes that may look similar, so as to better approach the topic and be able to measure emotions [4]. For instance, he argues that feelings consist of perceptions, sensations, appetites and affections. Perceptions are not localized in the body but exercise cognitive functions, such as feeling heat or cold. Sensations, such as pain, tickles or tingles, are localized in the body, but they are not felt with any particular part of the body. Appetites are either natural, such as hunger, thirst or animal lust, or non-natural (addictions). Affections, like appetites and sensations, are felt. They can be divided into emotions, agitations and moods. However, the limits among emotions, agitations and moods cannot be clearly identified, and there is much interaction among them. The difference between emotions and moods is that moods describe long-term dispositional states, whereas emotions describe short-term affective reactions.

So, what is finally an emotion?

From my point of view, and certainly based on all the above-mentioned ideas, one could say that emotion is a feeling distinct from a perception or appetite, which serves the purpose of maintaining and prolonging well-being (survival-related) but can be influenced by social rules, since the subjective interpretation of well-being is itself highly influenced by social rules. Emotion is related to bodily responses, since we produce tears when we are sad and we sweat when we are afraid, and it is a subjective feeling which can change based on re-appraising the situation that caused it.

Thus, although all the above beautiful theories initially seem contradictory, they are rather complementary and highlight different aspects of emotions. Thanks to many visionary psychologists, sociologists and, generally, all those people who have been thirsty for knowledge and understanding of how our own mechanisms function, we now have a variety of theories and models of emotion. We can use these theories to produce interesting representations of emotions, based on signals from the brain and the periphery together with appropriate machine learning frameworks, to create artistic and medical applications that can improve our emotional well-being!


References:

[1] A. Damasio, Descartes’ Error: Emotion, Reason and the Human Brain. Putnam, New York, 1994.

[2] C. Darwin, The Expression of the Emotions in Man and Animals. Oxford University Press, 1998.

[3] W. James, “What is an emotion?” Mind, vol. 9, no. 34, pp. 188-205, 1884.

[4] P. Hacker, “The conceptual framework for the investigation of emotions,” International Review of Psychiatry, vol. 16, no. 3, pp. 199-208, 2004.

Source localization for EEG and why to work on cortical space


Electroencephalography allows for the recording of neural activity non-invasively by placing a set of electrodes on the scalp. While the fact that electroencephalography is non-invasive has clear advantages for the monitoring of brain activity in humans, it also comes with limitations.

Neural activity generated by neurons has to propagate through neural tissue, cerebrospinal fluid, skin and bone before reaching the surface electrodes: layers of resistive tissue that alter the original properties of the signal. On top of that, the neurons generating the electrical activity are not isolated; instead, they are embedded in a complex network of neurons that are constantly active and generating their own electrical activity. All these distortions of the neural signal before it reaches the electrodes placed on the scalp are grouped under the concept of volume conduction effects, technically defined as the distortions an electric (or magnetic) field suffers when it passes through biological tissue towards the measurement sensors. Because of these distortions and diffusions through tissue, scalp sensors can only record brain activity generated at most several centimeters below the recording positions. From another perspective, at every position where you put a sensor on the scalp, the recorded brain activity will reflect a weighted sum of the underlying brain sources that span a couple of centimeters around that position (Makeig et al. 1996).

Despite these distortions, EEG remains one of the most important approaches to studying brain activity. While the volume-conduction effect limits us to an estimated spatial resolution of 5-9 cm (Nunez 1981), EEG keeps a temporal resolution on the scale of milliseconds, one of the highest among all methods available for the study of the nervous system (see Figure 1).


Figure 1. Main methods available for the study of the nervous system, organized according to the spatio-temporal domain that they can observe. Each colored region corresponds to one of the methods and expands across the time resolution (x-axis) and spatial resolution (y-axis) they can measure. Note that these regions are approximated and represent an estimate of the real limits. Adapted from Sejnowski et al. 2014.

How much can we improve upon these technical limitations? In recent years, there has been remarkable progress in reconstructing the underlying cortical activity from surface electromagnetic data, a set of techniques grouped under the label of source localization or source reconstruction methods (Scherg 1990; Gross et al. 2001; Dale et al. 2000). Intuitively, these techniques aim to estimate the spatio-temporal dynamics of the brain currents that best explain the electromagnetic fields observed through EEG or MEG. In other words, the source localization process consists of calculating, from a set of observed data, the causal factors that produced them, which is known as the inverse problem (Figure 2). Note, however, that in solving the inverse problem, the number of variables that can be observed (i.e. the electrodes we record from) is remarkably small compared to the large number of causal factors (i.e. the number of points in the brain where this surface activity could come from, which in practice corresponds to the number of points we would need to create a volume model of the brain). Thus, the inverse problem is, by nature, an ill-posed problem, as we can find more than one solution (brain activity) for an observed scalp voltage.

Figure 2. Schematics of source reconstruction approaches for brain source analysis

In order to make this inverse problem well posed, it is necessary to impose additional constraints on the solution. The most common source reconstruction approaches rely on the assumption that sources are temporally uncorrelated, which is particularly appropriate when analyzing the responses to sensory stimuli (Mosher et al. 1992). Physiological constraints can be introduced by considering that the EEG records signals from populations of neurons localized in the grey matter and oriented perpendicular to the cortical sheet (Nunez 1981). Knowing the exact shape of the cortical surface through Magnetic Resonance Imaging (MRI) can further impose anatomical constraints on the head volume conductor model, an approach which is being increasingly used. Constraints on the spatial orientation can also be introduced, especially considering that the non-invasive sensors mostly record the neural populations oriented at a particular angle to the scalp (Miranda et al. 2013). These and other differences give the different localization methods relative strengths and weaknesses in terms of accuracy of the approximation, computation time, or spread of the source. For instance, if one expects very few sources, then MAP estimation may better reflect the true sources while being computationally efficient (Ahlfors et al. 1992); if one expects multiple sources with variable spatial extent, then methodologies based on Sparse Bayesian Learning may be the most appropriate (Mackay 1992).

While the methods differ in their modeling assumptions, all of them are known to improve the spatio-temporal resolution of EEG, bringing the spatial resolution down to 2-4 cm (Yao and Dewald 2005; Ding and Lai 2005; Pizzagalli 2007). In the end, the variety of methods for approximating the currents in the brain offers a great opportunity to select the most appropriate algorithm for a given experiment.

 


 

References:

Ahlfors SP, Ilmoniemi RJ, Hamalainen MS. (1992): Estimates of visually evoked cortical currents. Electroencephalogr Clin Neurophysiol 82(3):225-36

Dale AM, Liu AK, Fischl B, Buckner RL, Belliveau JW, Lewine JD, Halgren E (2000): Dynamic statistical parametric mapping: combining fMRI and MEG to produce high-resolution spatiotemporal maps of cortical activity. Neuron 26:55-67.

Ding, L., Lai, Y., & He, B. (2005). Low resolution brain electromagnetic tomography in a realistic geometry head model: a simulation study. Physics in Medicine and Biology, 50(1), 45.

Gross J, Kujala J, Hamalainen M, Timmermann L, Schnitzler A, Salmelin R. Dynamic imaging of coherent sources: Studying neural interactions in the human brain. Proc Natl Acad Sci USA. 2001 Jan 16;98(2):694-9.

Mackay DJC. (1992): Bayesian Interpolation. Neural Computation 4(3):415-447

Makeig, S., Bell, A., Jung, T.-P., Sejnowski, T., 1996. Independent component analysis of elec- troencephalographic data. In: Touretzky, D., Mozer, M., Hasselmo, M. (Eds.), Advances in Neural Information Processing Systems vol. 8. MIT P, Cambridge MA, pp. 145–151.

Mosher, J.C., Lewis, P.S., and Leahy, R.M. (1992). Multiple dipole modeling and localization from spatio-temporal MEG data. IEEE Trans. Biomed. Eng. 39, 541–557.

Miranda, P. C., Mekonnen, A., Salvador, R., & Ruffini, G. (2013). The electric field in the cortex during transcranial current stimulation. Neuroimage, 70, 48-58.

Nunez, P.L. (1981). Electric Fields of the Brain (New York: Oxford University Press).

Sejnowski, T. J., Churchland, P. S., & Movshon, J. A. (2014). Putting big data to good use in neuroscience. Nature neuroscience, 17(11), 1440-1441.

Scherg M. Fundamentals of dipole source potential analysis. In: Auditory evoked magnetic fields and electric potentials. eds. F. Grandori, M. Hoke and G.L. Romani. Advances in Audiology, vol. 6. Karger, Basel, pp 40-69, 1990.

Pizzagalli, D. A. (2007). Electroencephalography and high-density electrophysiological source localization. Handbook of psychophysiology, 3, 56-84.

Yao, J., & Dewald, J. P. (2005). Evaluation of different cortical source localization methods using simulated and experimental EEG data. Neuroimage,25(2), 369-382.

Artificial Neural Networks – The Rosenblatt Perceptron



Artificial Neural Networks (ANN) are machine learning models inspired by the functioning of the brain. In my next posts I will try to introduce artificial neural networks in a simple, high-level way, highlighting their capabilities but also showing their limitations. In this first post, I will introduce the simplest neural network, the Rosenblatt perceptron: a neural network composed of a single artificial neuron. This artificial neuron model is the basis of today's complex neural networks and was state of the art in ANN until the mid-eighties.

As ANN are inspired by the brain, let's start by describing how the brain works. The brain is a connected network of neurons (approximately 21*10^9 in the neocortex) that communicate by means of electrical and chemical signals through a process known as synaptic transmission, in which information flows from one neuron to others. When a neuron is inactive, the electrical potential across its membrane (the resting potential) is typically around -70 mV. Impulses arriving from connected neurons deliver neurotransmitters that can be either inhibitory or excitatory. If excitatory neurotransmitters raise the membrane voltage past a certain threshold, the cell depolarizes and triggers an action potential that travels along the axon towards other neurons. Communication among neurons therefore takes place when the action potential of the presynaptic neuron arrives at its axon terminals, which connect to the dendrites of the next neurons.


 

Inspired by these biological principles of the neuron, Frank Rosenblatt developed the concept of the perceptron at the Cornell Aeronautical Laboratory in 1957:

  • A neuron receives ‘communication messages’ from other neurons in the form of electrical impulses of different strengths that can be excitatory or inhibitory.
  • A neuron integrates all the impulses received from other neurons.
  • If the resulting integration is larger than a certain threshold, the neuron ‘fires’, triggering an action potential that is transmitted to the connected neurons.

The Rosenblatt perceptron is a binary single-neuron model. Input integration is implemented as a weighted sum of the inputs, with weights that are fixed after being learned during the training stage. If the weighted sum exceeds a given threshold θ, the neuron fires; when it fires its output is set to 1, otherwise it is set to 0:

y = 1 if w·x = Σi wi·xi > θ, and y = 0 otherwise

The equation can be re-written as follows, including what is known as the bias term b = −θ:

y = 1 if w·x + b > 0, and y = 0 otherwise
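As an illustration, here is a minimal Python sketch of this decision rule together with the classical perceptron learning rule, trained on the linearly separable logical AND problem (the variable names and learning rate are mine, chosen for illustration only):

```python
import numpy as np

# Training data for the logical AND function (linearly separable)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
t = np.array([0, 0, 0, 1])

w = np.zeros(2)   # weights
b = 0.0           # bias term (b = -theta)
lr = 0.1          # learning rate

def predict(x):
    # The neuron "fires" (outputs 1) when the weighted sum plus bias is positive
    return 1 if np.dot(w, x) + b > 0 else 0

# Perceptron learning rule: nudge weights and bias by the prediction error
for epoch in range(20):
    for x, target in zip(X, t):
        error = target - predict(x)
        w += lr * error * x
        b += lr * error

print([predict(x) for x in X])  # expected: [0, 0, 0, 1]
```

Because the AND problem is linearly separable, the perceptron convergence theorem guarantees this loop finds a separating line in a finite number of updates.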

This model implements the functioning of a single neuron that can solve linearly separable classification problems through very simple learning algorithms. Rosenblatt perceptrons are considered the first generation of neural networks (a network composed of just one neuron ☺). The main limitation of this simple single-neuron model is that it cannot solve problems that are not linearly separable. In my next post I will describe how this limitation was overcome, and what happens when we use a layer of several perceptrons or try different neuron activation functions. Stay tuned!


Credits:

Neural Networks for Pattern Recognition, Christopher Bishop

Suárez, J.R. Dorronsoro, A. Barbero, Dpto. de Ingeniería Informática and Instituto de Ingeniería del Conocimiento, Escuela Politécnica Superior de Madrid – ASDM 2016

Image Credits 1: http://learn.genetics.utah.edu/content/addiction/neurons/

Image Credits 2: Wikipedia.org

How to study consciousness in the electric brain?



Last April Starlab kicked off the H2020 FET Open project Luminous[1]. We are very proud to coordinate one of the 11 projects, out of 800 submissions, selected for funding in last year's call of the FET Open[2] program, one of the most competitive programs within the European funding landscape. The project aims to study consciousness by combining the general framework of Information Theory with the electromagnetic nature of brain activity. Its general goal is to advance the scientific study of consciousness and to find applications for consciousness measurement and alteration technologies grounded in electrophysiology and non-invasive brain stimulation techniques. As once happened with the HIVE project[3], which helped give birth to Neuroelectrics, Starlab expects Luminous to achieve a major breakthrough in neurotechnology. Our idea is once again to bring the project results to market in the form of DigitalHealth services relying on brain data science and advanced electrotherapies. But some fundamental scientific questions related to consciousness have to be addressed first. Over the next 3 years, the project team, which includes some of the best specialists in the world[4], will start answering the following questions:

 

What is consciousness?

The scientific definition of consciousness is an ambitious question with no unique answer; consciousness is an umbrella term that covers a wide variety of mental phenomena. The plot (adapted from Steven Laureys' paper published in Trends in Cognitive Sciences in 2005) shows consciousness as a relationship between awareness (the ability to know, perceive or feel) and wakefulness (a brain state in which an individual engages with the external world in a coherent way). I remember flying to Paris to explain the project plans to Steven. To my surprise (because I did not know him at all at that time), he accepted to participate very quickly. This plot represents a general framework that has been one of the inspirational starting points of Luminous: it allows us to turn the rather ethereal concept of consciousness into a taxonomy with clinical implications. The project includes people suffering from Disorders of Consciousness (Coma, Unresponsive Wakefulness State, Minimally Conscious State), who are treated at Steven's group in Liège. Locked-in patients are also included; they are being treated in Tübingen by world BCI leader Niels Birbaumer and his group. Michael Nitsche, one of the most prominent transcranial current stimulation experts, and his group will be experimenting on sleep, including the very interesting state of lucid dreaming. Healthy subjects under anesthesia will be analyzed by Irene Tracey and her collaborators at Oxford. Where to place fetal brain activity in this general framework is one of the most challenging questions in the project; it will be addressed by Hubert Preissl and his team.

Can consciousness be measured?

The work by Marcello Massimini and his group constitutes a cornerstone of the project. They have recently correlated changes in brain signal complexity after Transcranial Magnetic Stimulation (TMS) with differences in consciousness levels (see the plot adapted from Casali et al., published in Science Translational Medicine in 2013). At Starlab we found in that work the perfect link to non-invasive brain stimulation protocols as a fundamental technology to probe consciousness, and we immediately wondered whether we could find an analogous measurement technique based on transcranial current stimulation. The basic idea of the TMS-based probing technique is to actively ping the brain with an external stimulation independent of the regular sensory pathways. The activity generated by this input spreads differently depending on the consciousness level, and what is measured is the complexity of the resulting brain signals. The relationship between consciousness and complexity has already been explored by several researchers. This question will be further explored within the project by Marcello and Giulio Ruffini, who have been fascinated by the relationship of conscious experience with Information Theory and complexity for some years. They will have the help of Fabrice Wendling and his team, who will develop computational simulations of neuronal activity to better understand this relationship.
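To give a flavor of the kind of complexity measure involved (the perturbational complexity index of Casali et al. builds on Lempel-Ziv compressibility), here is a didactic Python sketch that binarizes a signal around its median and counts the phrases found by a simple LZ76-style parser. This is a toy illustration, not the actual PCI pipeline:

```python
import numpy as np

def lempel_ziv_complexity(binary_string):
    """Count the distinct phrases found by a simple LZ76-style
    left-to-right parsing of the sequence (higher = less compressible)."""
    i, n, count = 0, len(binary_string), 0
    while i < n:
        length = 1
        # Extend the current phrase while it has already appeared earlier
        while (i + length <= n and
               binary_string[i:i + length] in binary_string[:i + length - 1]):
            length += 1
        count += 1
        i += length
    return count

rng = np.random.default_rng(0)
regular = np.sin(np.linspace(0, 20 * np.pi, 1000))   # highly predictable signal
noisy = rng.standard_normal(1000)                    # unpredictable signal

for name, sig in [("regular", regular), ("noisy", noisy)]:
    bits = "".join("1" if s > np.median(sig) else "0" for s in sig)
    print(name, lempel_ziv_complexity(bits))
```

The predictable sine collapses to a handful of phrases while the noise does not, which is the basic intuition behind using compressibility as a proxy for the differentiation of brain responses.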

Can consciousness be electromagnetically altered?

The measurement of consciousness has been tackled in neuroscience by finding so-called neural correlates of consciousness: relationships between behaviorally assessed consciousness and the neural mechanisms observed to underlie it. Methodologies to establish these neural correlates are grouped in the plot above under ‘measurement’ technologies. However, Luminous does not only aim to study the neural correlates of consciousness; we also aim to find out whether electromagnetic non-invasive brain stimulation can be applied to alter consciousness states. In the long run, we expect this leg of the project to be applied in clinical settings for treating Disorders of Consciousness or for inducing consciousness states of clinical relevance on demand, e.g. sleep or anesthesia. This is definitely an extremely challenging target. We will advance the first steps within the next 3 years and will keep you posted on this channel.


[1] http://www.luminous-project.eu/

[2] https://ec.europa.eu/programmes/horizon2020/en/h2020-section/fet-open

[3] http://hive-eu.org/

[4] http://www.luminous-project.eu/#consortium

Free Will



http://www.salon.com/2011/11/13/the_controversial_science_of_free_will/

Next week I will attend a panel about Free Will, in which a group of neuroscientists, physicists and philosophers from Harvard University and MIT will tackle questions like: What is free will? Does it exist? If so, how is it generated? Do animals have free will? How can one make moral decisions without free will?  I have always been intrigued by this topic, especially from the neuroscience perspective, and here I would like to give an overview of the concept of free will and its relationship with neuroimaging and neuromodulation techniques.

Free will, understood as the ability to choose between different possible courses of action, has long been a topic of debate among neuroscientists, philosophers and mathematicians. We take for granted that in our daily life we have free will, that what we do from moment to moment is determined by conscious decisions that we freely make. You get out of bed, you go for a walk, you eat vanilla ice cream. It seems that we're in control of actions like these, so we assume we have free will. But in recent years, some have argued that free will is an illusion, pleading the determinist perspective, which holds that every physical event is predetermined: completely caused by prior events, by the laws of physics, or simply by our genes.

There are several philosophical and scientific arguments against free will, including one based on Benjamin Libet’s famous neuroscientific experiments [1], which allegedly showed that our conscious decisions are caused by neural events that occur before we choose.


http://bitnavegante.blogspot.com.es/2016/01/la-mecanica-de-una-sociedad-libre.html

In that sense, neuroimaging techniques such as EEG and fMRI can help us understand whether the reasons we think we decided for are actually just after-the-fact rationalizations. For example, machine learning on fMRI brain activity (known as multivariate pattern analysis) has been used to predict a user's choice of a button (left/right) up to 7 seconds before their reported decision to press it [2]. Multivariate pattern analysis using EEG has also suggested that decisions can be predicted from neural activity immediately after stimulus perception [3].
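To give a flavor of what multivariate pattern analysis means in practice, here is a hedged scikit-learn sketch that trains a linear decoder to predict a left/right choice from multichannel features. The data and the injected "choice-predictive" pattern are entirely synthetic, for illustration only:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)

# Simulated dataset: 200 trials x 64 features (e.g., band power per channel).
# A weak choice-related pattern is injected so decoding is above chance.
n_trials, n_features = 200, 64
y = rng.integers(0, 2, n_trials)                 # 0 = left, 1 = right
pattern = rng.standard_normal(n_features)
X = (rng.standard_normal((n_trials, n_features))
     + 0.3 * np.outer(2 * y - 1, pattern))

# Multivariate pattern analysis: a linear decoder scored with cross-validation
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} (chance = 0.50)")
```

Above-chance accuracy on held-out trials is what justifies the claim that the choice was "predictable" from brain activity before (or independently of) the reported decision.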

Furthermore, it has also been suggested that the sense of authorship can be manipulated using neuromodulation techniques. For example, some research suggests that Transcranial Magnetic Stimulation (TMS) can be used to manipulate which hand a subject chooses to move, even though the experience of free will remains intact [4]. In a follow-up experiment published in 2015, a team of researchers from the UK and the US demonstrated that motor responses and the choice of which hand to move can also be modulated using transcranial Direct Current Stimulation (tDCS) [5].

There are other experts in the field, such as Mark Balaguer, who think that the anti-free-will arguments put forward by philosophers, psychologists and neuroscientists don't provide any good reason to doubt the existence of free will, although this doesn't necessarily mean that we have it. From this point of view, the question of whether we have free will remains open: we simply don't know enough about the brain to answer it definitively, and therefore we need to keep pushing the limits of the field to understand how this intriguing organ works.

 


 

How does a baby’s brain work?



“Will you stay in our lovers’ story
If you stay you won’t be sorry
‘Cause we believe in you”

Kooks from Hunky Dory – David Bowie – 1971

David Bowie wrote Kooks when he received news of the arrival of his son Duncan Jones (aka Zowie Bowie). I am going to be a father too! But in my case, not being as good a composer as David Bowie, I thought I’d write a post instead.

During these last months I have learned a lot about babies and I would like to share some of my findings with you. There is plenty of information available about motherhood and fatherhood; however, a big part of it is quite contradictory and much of it is under debate. In this post I will focus on the neuroscience perspective.

The baby’s brain starts to form 16 days after conception, probably before you even know you are pregnant! In fact, it is one of the very first systems to develop. By the end of the first trimester, the baby will begin to develop the sense of touch. Around the same time, the baby will start to detect sounds, so speaking to your baby will indeed make her/him get used to your voice. In fact, studies have shown that the mother’s voice is heard best of all, as it reverberates through her bones and body, which amplifies it.

But fathers-to-be, do not worry: your voice is also recognized by your baby! This study shows that newborns react differently to words and sounds that were repeated daily throughout the third trimester compared to those they never heard during pregnancy.

As we know, the brain is extremely plastic, even more so at early ages. A newborn baby has 100 billion neurons, roughly as many as there are stars in the Milky Way. About half of these neurons will be pruned by adulthood due to the “fire together, wire together” principle: only the neurons and synapses that are often used will remain, and all the others will disappear. We can assume that all babies are born synesthetic and learn to differentiate the senses at an early age. When this pruning process does not go as usual, some adults will maintain this synesthesia… they will hear colors or taste sounds… weird, huh?

Another interesting fact is that all babies are born early: the gestation process really finishes around month 21, that is, 9 months of intrauterine gestation plus 12 months of extrauterine gestation. This is because of how big the baby's head is compared to how small the pelvis is. As a consequence, the baby's brain is quite underdeveloped compared to those of other primates and mammals, which is why human babies are helpless and rely on caregivers for so long.

Taking into account how sensitive the baby's brain is and its huge capacity for learning, the right stimuli during pregnancy and during her/his first years of life are crucial. As this is very well known, it's no wonder that the birth process itself is very important! There are thousands of theories and lots of controversy regarding this topic. What I have learned, and what I do believe, is that the more natural the birth is, the better it is for the baby and the mother. There has been a tendency in recent years to medicalize the birth process and, thank god, this seems to be changing nowadays. If birth is induced by artificial means, such as using oxytocin, then anesthesia will likely be necessary; external help will also likely be required for the baby to exit the womb, sometimes even a C-section. These artificial stimuli are the very first stimuli received by the baby and, as many studies show, will strongly affect her/him. Just to mention a few:

- C-Sections May Boost Child’s Risk of Obesity

- Women are 3 times more likely to die during Caesarean delivery than during a vaginal birth

- Women who have had a C-section are less likely to begin early breastfeeding than women who had a vaginal birth.

To finish, I would like to share with you an amazing technique to study babies' brains in a non-invasive way: fetal MEG. With state-of-the-art signal processing techniques, scientists are able to measure a baby's spontaneous brain electrical activity and even record event-related potentials.

There are few of these devices in the world, but I was lucky enough to visit one in the lab of Niels Birbaumer at the University of Tübingen.

Well, that’s it for today. I hope you have learned a bit about babies’ brains, and that you will understand them better and treat them with more patience and empathy; they have to learn almost everything from scratch! I will surely do so in the forthcoming months :) Thanks for reading and until next time!



NE goes computational at ICANN



During the 6th-9th of September, Barcelona was home to the 25th International Conference on Artificial Neural Networks (ICANN), the annual conference that brings together researchers from two scientific worlds: neuroscience and information sciences. As the scope of the conference is wide, it gathers specialists in machine learning algorithms and computational neuroscience, as well as researchers who focus on building models of real nervous systems. This multidisciplinary conference aims to facilitate discussion and interaction in developing intelligent systems and to enhance our understanding of the cognitive system, which made it a great chance for NE to get involved!

Figure 1. Group picture of the ICANN 2016 attendees at the BarcelonaTech building of the Universitat Politecnica de Catalunya (UPC).

The conference gathered about 200-250 attendees and had a special flavor as it celebrated its 25th edition (see Figure 1 for a group picture!). It was organized in two main tracks, Brain-inspired computing and Machine learning research, where neurally-inspired algorithms were grouped into neural coding, decision making and unsupervised learning approaches.

With this structure, the organizers managed to create an interactive environment with strong interdisciplinary discussions, which facilitated communication between attendees. Just to mention a few, we were able to listen to Prof. Erkki Oja, one of the contributors to the design of independent component analysis (Figure 2; note that we use this algorithm daily, remember this post!); Prof. MD. Joaquin Fuster from the University of California in the USA, with a long and intense career in the study of the prefrontal cortex; and Prof. Gunther Palm, an established researcher in the field of neural networks and associative memory. While there is not enough space in this blog to mention every relevant contribution to ICANN and our experience, we would like to refer you to the program of the conference, where you will be able to find a list of attendees.

Figure 2. Erkki Oja from Aalto University (Helsinki, Finland), the first plenary speaker of ICANN 2016.

This time, Starlab and NE had the chance to contribute in two ways: through the organization of a special session and through the presentation of our recent research. The special session gathered researchers who use advanced neural networks as pattern recognition tools for EEG/MEG, with special emphasis on deep learning and reservoir computing. Several works in the literature have used these tools to analyze multimedia data (e.g. for speech recognition or for object detection in video), while EEG/MEG have not received as much attention. In our opinion, it is worth investing some effort in designing strategies tailored to EEG/MEG data, which present some special features: multi-channel temporal signals, highly correlated channels, low signal-to-noise ratio, unclear feature invariants, and non-linear, non-stationary dynamics. With this motivation in mind, we organized the special session, which was a great success!

Figure 3. Program of the special session organized by Starlab/NE at ICANN 2016.

Our special session hosted the presentation of 6 research topics, listed in Figure 3, which were relevant not only scientifically but also on a personal level: for the first time, I had the chance of introducing very good friends as speakers! Scientifically speaking, it was a pleasure to see how new methodologies are being used for the analysis of EEG, and I am sure that good results will soon be published and accessible for general use. You can find the presented research as conference proceedings in the Lecture Notes in Computer Science series by Springer, as detailed on the ICANN webpage.

Did we pique your curiosity? It may be a good time to start thinking about the next ICANN, which will take place in Alghero, Sardinia, between the 11th and 15th of September… or to have a look at images of the first ICANN, held in Helsinki in 1991, here.


More than Humans? The Ethical Debate about Neuroenhancement



Can we force a suspect in a trial to undergo a brain scan to read his/her mind (e.g. a lie detector) and treat the result as objective evidence in the case? Can we force certain professionals (surgeons, pilots, or people with other important responsibilities) to take psychostimulants (e.g. methylphenidate) to ensure maximum performance and concentration, and to reduce the risk of errors? Should we allow the use of beta-blockers by, for example, musicians before a concert, in order to control the nerves, increase control and improve the performance? Should we allow the use of psychostimulants such as methylphenidate among students so they can get better results? Should neurostimulation be used in schools to improve the pace of learning, increase memory and improve performance in mathematics?

Neuroscience is promising, yet it also generates great fears and concerns. This scientific discipline was born at a time when biological technology started to grow rapidly thanks to techniques that allowed direct observation of brain activity, which in turn facilitated an understanding of the nervous system and provided new applications and technologies for the treatment of mental illnesses. Thus neuroscience aims to understand how the brain works in order to act on it.

All the new tools for studying the brain, new ways of understanding diseases and new treatments that modify the biology of the brain (and that have already shown their effectiveness) led to the development of neuroscience as a field beyond medicine. All this new knowledge has given way to uses that are not only therapeutic. When this knowledge left the health sector, the new technologies became cognitive enhancement tools, and this is when what we now know as neuroenhancement techniques appeared.

Neuroenhancement techniques aim to enhance our skills and improve our cognitive abilities. Neuroscience applications in the medical field already carry weighty moral and ethical implications, so you can imagine how neuroenhancement techniques aimed at improving the cognitive abilities of healthy people have intensified the debate around ethical and philosophical implications.


The fast rate of technological development is making available new ways to intervene directly on people, but the most striking of these developments is the possibility of intervening in the brain, both to treat diseases and to change behaviour or modify feelings. Neuroscience has implications for all areas of society that we consider important: scientific, legal, social and political. The understanding of human behaviour, the creation of new forms of treatment for the enhancement and improvement of cognitive abilities, and the possible consequences this involves affect issues that should be treated from an interdisciplinary point of view.

The reality is that many neuroenhancement techniques are already on the market for everyone. Currently there are many pharmaceutical companies that sell drugs to healthy people who are looking for an improvement[1]. In my opinion, the companies that seem to succeed are those that have used the Internet as a way to market their products for at-home use directly by the consumer, including brain stimulation products. All this leads to the urgent need for a public debate to address questions about their use and their potential, but also about the risks and the need for regulation, as fundamental values of our society may be at stake.

To treat these last objections in a rational way and to weigh the positive and negative aspects, we must first let go of the prejudices and fears traditionally associated with treating the mind and intervening directly in the brain. The arguments against neuroenhancement should be specific, and thus not based solely on the claim that it is “dangerous” because it intervenes in brain activity.

Leaving aside prejudices and irrational fears, questions are raised about possible dangers: which considerations should legislation take into account to ensure responsible use, how may these techniques be introduced into society, and how can they affect the values that we consider fundamental? The main arguments justifying the need to regulate neuroenhancement have to do with the individual effects of the improvements, including their impact on society and their contributions (both beneficial and harmful). Some questions remain: What should we really worry about in these methods? Do we currently have reasonable grounds to reject or limit new technologies at a time when they seem to be directly responsible for progress?

At the individual level, the concerns raised about their use involve safety and side effects, the authenticity of the enhanced minds, and the value of what is achieved after improvement. At the social level, ethical concerns have to do with the extent to which improvements can increase or decrease social inequality, whether individuals can be coerced into taking drugs, and how improvement affects the maintenance of social cohesion. Considering the main philosophical objections on the table today, there are no general arguments of principle strong enough to rule out brain technologies a priori.

However, given that neuroscience can change the moral, social and legal landscape, it seems necessary, from the beginning, to assess and analyze its possibilities and implications case by case, to think about where this “progress” leads us and, if necessary, to take steps to prevent the development of an undesirable society.

Today, neuroenhancement is in its infancy and needs some kind of regulation that allows it to progress in the right direction and evolve according to the values that we defend as a society. There have been many predictions about what these techniques may offer in the distant future, but that does not mean that we are talking about a technology that does not really exist now. The value of speculation is not prediction but reflection. We have the ability to step in and guide our own evolution, to create the future human. Now, more than ever, we can shape our future. But where do we want to go? And do we know the answer to this question?

[1] NY Times “The Selling of ADHD” article. Accessed on November 22: http://www.nytimes.com/2013/12/15/health/the-selling-of-attention-deficit-disorder.html?pagewanted=all&_r=0


3 practical thoughts on why deep learning performs so well



The superior performance of deep learning relative to other machine learning methodologies has been discussed in several forums and magazines in recent times. Today I would like to post on three reasons that, in my opinion, are the basis of this superiority. I am not the first to comment on this issue [1], and for sure I won't be the last, but I would like to extend the discussion by taking into account the practical reasons behind the success of deep learning. Hence, if you are looking for its theoretical background, you would do better to look for it in the deep learning literature, where the Hamiltonian of the spin glass model [2], the exploitation of compositional functions to cope with the curse of dimensionality [3], the capability of deep networks to best represent the simplicity of physics-based functions [4], and the flattening of data manifolds [5] have all been proposed. These are related, but I would like to comment here on the practical aspects that turn deep learning into a useful technology for some practical classification problems and applications. But not all problems and applications: Deep Learning methodologies appear today as the ultimate pattern recognition technology, as Support Vector Machines and Random Forests once did, yet based on the No Free Lunch Theorem [6] there is no optimal algorithm for all possible problems. Therefore it is always good to keep the evaluation of classification performance [7] in mind. Performance evaluation constitutes a basic stage in machine learning applications, and it is directly linked with the success of Deep Learning, as described below.

The first important reason for the success of Deep Learning is the integration of feature extraction within the training process. Not so many years ago, pattern recognition focused on the classification stage, and feature extraction was treated as a somewhat independent problem, very much based on artisanal manual work and expert knowledge. You used to invite an expert on the topic you wanted to solve to join your development team: you would count on an experienced electrophysiologist if you wanted to classify EEG epochs, or on a graphologist if you wanted to recognize handwriting. This expert knowledge was used to select the features of interest for a particular problem. In contrast, Deep Learning approaches do not fix the features of interest a priori: they train the feature extraction and classification stages together. For instance, a set of image filters or primitives is learned in the first layers of a classification network for image recognition. This concept was already proposed in the Brain-Computer Interface community some years ago, where Common Spatial Patterns (CSP) filters were trained in order to adapt the feature extraction stage to each BCI user.
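As a sketch of what training feature extraction and classification together looks like, here is a minimal PyTorch example (assuming PyTorch; the architecture, shapes and hyperparameters are illustrative, not a recommended EEG model) in which the first 1-D convolutional layer plays the role of learned temporal filters for multi-channel EEG epochs, trained jointly with the classifier:

```python
import torch
import torch.nn as nn

# Toy EEG batch: 8 epochs, 32 channels, 256 time samples
x = torch.randn(8, 32, 256)

model = nn.Sequential(
    # Learned temporal filters: this replaces hand-crafted feature extraction
    nn.Conv1d(in_channels=32, out_channels=16, kernel_size=25, padding=12),
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(8),   # crude temporal summary of the filter outputs
    nn.Flatten(),
    # Classification stage, trained jointly with the filters above
    nn.Linear(16 * 8, 2),
)

logits = model(x)
print(logits.shape)  # torch.Size([8, 2])

# One gradient step updates filters and classifier together (end-to-end)
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 2, (8,)))
loss.backward()
```

The key point is that the gradient flows through the whole pipeline, so the "features" in the first layer end up tuned to the classification task rather than chosen by an expert beforehand.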

Moreover, Deep Learning has succeeded in previously unsolved problems because it encourages the collection of large data sets and the systematic integration of performance evaluation in the development process. These are two sides of the same coin: once you have large data sets, you cannot conduct the performance evaluation manually. You need to automate the process as much as possible, and this automation implies implementing the cross-validation stage and therefore integrating it in the development process. Large data set collection and performance evaluation have also found very good support in the popularization of data analysis challenges and platforms. The first challenges and competitions were organized around the most important computer vision and pattern recognition conferences; this was the case, for instance, with the PASCAL [8] and ImageNet [9] challenges. These challenges made available, for the first time, large image data sets and, most importantly, the associated ground-truth labels for systematic performance evaluation of the algorithms. Furthermore, the performance evaluation was done blind on the test set ground truth, so it was not possible to tune parameters to boost performance. The same challenge concept was then applied on data analysis platforms, of which the most popular is Kaggle [10], though not the only one, e.g. DrivenData [11] or InnoCentive [12]. The data contest platforms use the same concepts: they provide a data set for training and a blind data set for testing, plus a platform to compare the achieved performances among competing groups. This has definitely been a good playground for data science in general, and for Deep Learning in particular.
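The protocol behind these challenges can be sketched in a few lines of scikit-learn (a hedged illustration with synthetic data standing in for a challenge dataset): all tuning happens via cross-validation on the released training set, and the held-out test labels are touched exactly once.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.svm import SVC

# Stand-in for a challenge dataset: organizers release train, keep test blind
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# All hyperparameter tuning happens inside cross-validation on the train set
search = GridSearchCV(SVC(), param_grid={"C": [0.1, 1, 10]}, cv=5)
search.fit(X_train, y_train)

# The test labels are used exactly once, for the final leaderboard score
print("blind test accuracy:", search.score(X_test, y_test))
```

Keeping the test set blind is what makes the reported performances comparable across competing groups.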


The last thought I would like to share with you is closely related to the former ones. None of the innovations mentioned above would have been possible without technology development. The decrease in memory and storage prices [13] allows storing data sets of ever-increasing volume, while the well-known Moore's law describes the simultaneous, terrific increase in computational power. Lastly, the explosion of network technologies has definitely democratized access to both memory and computational power: cloud repositories and High Performance Computing are allowing an exponential increase in the complexity of the implemented architectures, i.e. in the number of network layers and nodes taken into account. So far this has proven the most successful path to follow. For the moment…


[1] http://www.kdnuggets.com/2016/08/yann-lecun-3-thoughts-deep-learning.html

[2]  https://arxiv.org/pdf/1412.0233.pdf

[3]  https://arxiv.org/pdf/1611.00740v2.pdf

[4] https://arxiv.org/abs/1608.08225

[5]  http://ieeexplore.ieee.org/document/7348689/

[6]  https://en.wikipedia.org/wiki/No_free_lunch_in_search_and_optimization

[7]  http://blog.neuroelectrics.com/how-good-is-my-computational-intelligence-algorithm-for-eeg-analysis/

[8] http://host.robots.ox.ac.uk/pascal/VOC/

[9]  http://www.image-net.org/challenges/LSVRC/

[10]  https://www.kaggle.com/

[11]  https://www.drivendata.org/

[12]  https://www.innocentive.com/

[13]  http://www.jcmit.com/mem2015.htm

Synchronization in nature generally and between the brain and periphery specifically



Nature has a beautiful tendency to coordinate things. If two pendulum clocks hang from the same beam, they start moving in the same way. Similarly, women who spend time together tend to synchronize their menstrual cycles, and fireflies in the woods flash synchronously. Many researchers have tried to explain the fireflies' synchronization; one possibility that I like is that males flash at various frequencies to attract females, while the latter stay hidden in the woods until they find an ''interesting'' male, at which point they start flashing at the same frequency as he does.

Apart from the physical world, synchronization phenomena have also been observed in the psychological world. Carl Jung observed that internal processes manifested in dreams could synchronize with external situations manifested in the material world, which he called ‘’synchronicity’’. In his book Synchronicity (1952) he provides an example with one of his patients who had dreamed the night before that someone had given her a golden scarab, and while at Jung’s office the next day, a scarab appeared outside his window, which he took and gave to her.

In the biological world, the most famous example of synchronization is cardiorespiratory coupling, whereby the respiration pattern synchronizes with the heart rate, producing a phenomenon called respiratory sinus arrhythmia, in which the R-R interval of the electrocardiogram (ECG) is shortened during inspiration and prolonged during expiration. Moreover, recent evidence from neuroscience and psychology supports the idea of underlying interactions during emotional processes: brain activation is associated with peripheral and other visceral responses, and synchronization phenomena across various subsystems of the organism seem to occur when we experience emotions.

So what exactly is synchronization, and how is it defined? Are there different notions of synchronization? Typically, synchronization refers to a process during which two or more dynamical systems adjust some of their properties to a common behavior due to strong or weak coupling. The strictest use of the concept refers to ''complete'' or ''identical'' synchronization, and describes the dynamics of two (or more) systems that move along identical state-space orbits [1]. However, this is obviously not the case for systems that are not identical, which leads to the need for looser concepts of synchronization. Some of these concepts are described below.

Phase synchronization refers to a state wherein the phases of the systems are locked even though their amplitudes may be random. For instance, the adjustment of women's menstrual cycles, as well as the synchronization across fireflies, falls into this category. Phase synchronization has also been observed between heart rate and respiration, and is typically displayed using the cardiorespiratory synchrogram [2]. Phase synchronization between heart rate and respiration seems to decrease while people are exposed to neutral pictures, and to increase with highly exciting pictures [3].
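As a hedged illustration of how phase synchronization can be quantified, the following Python sketch extracts instantaneous phases with the Hilbert transform and computes a phase-locking value on two synthetic oscillators; in real data one would first band-pass filter around the rhythm of interest:

```python
import numpy as np
from scipy.signal import hilbert

fs = 4.0                          # Hz, enough for a slow rhythm
t = np.arange(0, 120, 1 / fs)
rng = np.random.default_rng(1)

# Two noisy oscillators sharing a 0.3 Hz (respiration-like) rhythm, fixed lag
common = 2 * np.pi * 0.3 * t
sig_a = np.sin(common) + 0.3 * rng.standard_normal(t.size)
sig_b = np.sin(common + 0.8) + 0.3 * rng.standard_normal(t.size)

# Instantaneous phases from the analytic signal
phase_a = np.angle(hilbert(sig_a))
phase_b = np.angle(hilbert(sig_b))

# Phase-locking value: 1 = phases fully locked, ~0 = no stable phase relation
plv = np.abs(np.mean(np.exp(1j * (phase_a - phase_b))))
print(f"PLV: {plv:.2f}")
```

Note that the amplitudes of the two signals never enter the measure: only the stability of their phase difference matters, which is exactly what distinguishes phase synchronization from stricter notions.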

Generalized synchronization, on the other hand, refers to interdependencies in the state space of the systems, and quantifies how neighboring state vectors of one state space map into the other [4]. Generalized synchronization between electroencephalography (EEG) and respiration also seems to be related to emotional processes.

Another concept of synchronization refers to the interaction of oscillators with different frequencies, such that the phase of a low-frequency oscillator synchronizes with the amplitude of a high-frequency oscillator, a process known as Phase-Amplitude Coupling (PAC) [5]. PAC has been explored between EEG and electrodermal activity, i.e. between the central and the peripheral nervous systems, and it was revealed that the synchronization between brain and periphery increases when strong emotions occur and decreases with neutral emotions, in line with the famous appraisal theory of emotions [6].
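A simple PAC estimate in the spirit of the mean-vector-length measure (the modulation index of Tort et al. [5] is a related alternative) can be sketched as follows, again on synthetic data with a built-in coupling:

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

fs = 250.0
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(2)

# Synthetic PAC: the 40 Hz amplitude is modulated by the phase of a 6 Hz rhythm
slow = np.sin(2 * np.pi * 6 * t)
fast = (1 + slow) * np.sin(2 * np.pi * 40 * t)
signal = slow + 0.5 * fast + 0.1 * rng.standard_normal(t.size)

def bandpass(x, lo, hi):
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

phase = np.angle(hilbert(bandpass(signal, 4, 8)))       # low-frequency phase
amplitude = np.abs(hilbert(bandpass(signal, 35, 45)))   # high-frequency envelope

# Mean-vector-length PAC: larger values indicate stronger coupling
pac = np.abs(np.mean(amplitude * np.exp(1j * phase)))
print(f"PAC strength: {pac:.3f}")
```

Here the two quantities being coupled are of different kinds, a phase and an amplitude envelope, which is what sets PAC apart from the phase-phase locking of the previous example.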

…And these are some of the many notions and examples of synchronization that we experience in our ordinary life, some more and others less often.

If you feel like it, observe and have fun in your everyday life: notice how various phenomena tend to synchronize and see into which synchronization category they fall. Do you think there is a synchronization pattern, of the three discussed above, which is more likely to be observed than the others?

 


 

References:

  1. H. Fujisaka and T. Yamada, Stability theory of synchronized motion in coupled-oscillator systems, Progress of Theoretical Physics, vol. 69, no. 1, pp. 32-47, 1983.
  2. M. Rosenblum, A. Pikovsky, J. Kurths, C. Schafer, and P. A. Tass, Phase synchronization: from theory to data analysis, Handbook of Biological Physics, vol. 4, pp. 279-321, 2001.
  3. G. Valenza, A. Lanata, and E. P. Scilingo, Oscillations of heart rate and respiration synchronize during affective visual stimulation, IEEE Transactions on Information Technology in Biomedicine, vol. 16, no. 4, pp. 683-690, 2012.
  4. J. Arnhold, P. Grassberger, K. Lehnertz, and C. Elger, A robust method for detecting interdependences: application to intracranially recorded EEG, Physica D: Nonlinear Phenomena, vol. 134, no. 4, pp. 419-430, 1999.
  5. A. B. L. Tort, R. Komorowski, H. Eichenbaum, and N. Kopell, Measuring phase-amplitude coupling between neuronal oscillations of different frequencies, Journal of Neurophysiology, vol. 104, no. 2, pp. 1195-1210, 2010.
  6. E. Kroupi, J.-M. Vesin, and T. Ebrahimi, Phase-Amplitude Coupling between EEG and EDA while experiencing multimedia content, Humaine Association Conference on Affective Computing and Intelligent Interaction (ACII), IEEE, 2013, pp. 865-870.

NYC Neuromodulation 2017



During January 13-15th, 2017, New York was home to the third NYC Neuromodulation conference, focused on technology and mechanisms for brain stimulation in areas that include transcranial direct current stimulation (tDCS), transcranial alternating current stimulation (tACS), transcranial magnetic stimulation (TMS), electroconvulsive therapy (ECT), deep brain stimulation (DBS), and other emerging areas. The conference took place at the Great Hall of the Shepard Hall building, on the gorgeous campus of the City University of New York.

 

The Shepard building and its Great Hall

 

This year, the conference included practical sessions, poster sessions, discussion panels and research talks. With this structure, the organizers managed to create an interactive environment with strong interdisciplinary discussions among attendees and speakers. Just to name a few of them, Miguel Alonso from Harvard University talked about the prospects of tDCS for obesity; Mar Cortes, from Burke Hospital, presented new applications of tDCS for the rehabilitation of spinal cord injured patients; Vince Clark, from the University of New Mexico, talked about fMRI-guided tDCS; Michael Nitsche, from the University of Göttingen, discussed the variability of tDCS effects; Anli Liu, from New York University, and Alexander Opitz, from the Nathan Kline Institute, talked about tDCS model validation in human and non-human primates; and David Putrino, from Burke Hospital, presented the results of a tDCS/EEG study to which Neuroelectrics also contributed[1]. While there is not enough space in this blog to mention every relevant contribution to the conference, we would like to refer you to the program of the conference, where you will be able to find the complete list of speakers.

Given the increased media, public and commercial interest in personal non-invasive brain stimulation, the conference also included different panels to discuss emerging “consumer” technologies and their scientific and regulatory frameworks. The off-label use of new clinical protocols was also discussed from the scientific, medical and regulatory perspectives. The conference further focused on timely and novel targets of neuromodulation, including glia, as well as new waveforms, including high-rate (10 kHz) stimulation.

This year, Neuroelectrics’ research team contributed in the poster session with three posters (you can find the full abstracts and proceedings at the conference website):

  • Poster 1: Marta Castellano, David Ibanez-Soria, Javier Acedo, Eleni Kroupi, Xenia Martinez, Aureli Soria-Frisch, Josep Valls-Sole, Ajay Verma, Giulio Ruffini. “tACS bursts slows your perception: increased RT in a speed of change detection task”.
  • Poster 2: Ricardo Salvador, Jaume Banus, Oscar Ripolles, D. B. Fischer, Michael D. Fox, Giulio Ruffini. “Intersubject variability effects on montages used to target the motor cortex in tDCS”.
  • Poster 3: Laura Dubreuil-Vall, Peggy Chau, Alik Widge, Giulio Ruffini, Joan Camprodon. “Electrophysiological mechanisms of tDCS modulation of executive functions”.

 


 

The Neuroelectrics team also showcased our latest products, the Starstim 32 channels and NIC 2.0, at our new booth with renewed colors and corporate image:

 


Neuroelectrics’ booth

 

Last but not least, we also enjoyed New York's stunning sunrises and experienced the city's nightlife at the social activities organized as part of the conference. The Neuroelectrics team was invited to the speaker dinner at the Red Rooster restaurant, which holds one of New York's best kept secrets: Ginny's Supper Club downstairs, where we enjoyed chef Marcus Samuelsson's cuisine and live jazz music in a space that transported us back in time.

 

New York sunrise at Central Park

Live jazz music at Red Rooster

 


 

[1] David Putrino, Alejandra Climent, Laura Dubreuil Vall, Giulio Ruffini, Douglas Labar, Dylan Edwards, Mar Cortes. “Motor evoked potential changes in response to transcranial direct current stimulation correlate with quantitative EEG changes in subjects with chronic spinal cord injury”.
