r3ds3rpent - Kode, Transistors and Spirit


Machine Learning, Big Data, Code, R, Python, Arduino, Electronics, Robotics, Zen, Native spirituality, and a few other matters.

107 posts

Latest Posts by r3ds3rpent

r3ds3rpent
7 years ago

The Paradoxical Commandments

The Paradoxical Commandments were written in 1968 by Dr. Kent M. Keith. Mother Teresa referred to them often.

People are illogical, unreasonable, and self-centered. Love them anyway.

If you do good, people will accuse you of selfish ulterior motives. Do good anyway.

If you are successful, you will win false friends and true enemies. Succeed anyway.

The good you do today will be forgotten tomorrow. Do good anyway.

Honesty and frankness make you vulnerable. Be honest and frank anyway.

The biggest men and women with the biggest ideas can be shot down by the smallest men and women with the smallest minds. Think big anyway.

People favor underdogs but follow only top dogs. Fight for a few underdogs anyway.

What you spend years building may be destroyed overnight. Build anyway.

People really need help but may attack you if you do help them. Help people anyway.

Give the world the best you have and you’ll get kicked in the teeth. Give the world the best you have anyway.

© Copyright Kent M. Keith 1968, renewed 2001

r3ds3rpent
7 years ago

Solar System: 5 Things To Know This Week

Our solar system is huge, so let us break it down for you. Here are 5 things to know this week: 

1. Make a Wish


The annual Leonids meteor shower is not known for a high number of “shooting stars” (expect as many as 15 an hour), but they’re usually bright and colorful. They’re fast, too: Leonids travel at speeds of 71 km (44 miles) per second, which makes them some of the fastest meteors. This year the Leonids shower will peak around midnight on Nov. 17-18. The crescent moon will set before midnight, leaving dark skies for watching. Get more viewing tips HERE.

2. Back to the Beginning


Our Dawn mission to the dwarf planet Ceres is really a journey to the beginning of the solar system, since Ceres acts as a kind of time capsule from the formation of the asteroid belt. If you’ll be in the Washington DC area on Nov. 19, you can catch a presentation by Lucy McFadden, a co-investigator on the Dawn mission, who will discuss what we’ve discovered so far at this tiny but captivating world. Find out how to attend HERE. 

3. Keep Your Eye on This Spot


The Juno spacecraft is on target for a July 2016 arrival at the giant planet Jupiter. But right now, your help is needed. Members of the Juno team are calling all amateur astronomers to upload their telescopic images and data of Jupiter. This will help the team plan their observations. Join in HERE.

4. The Ice Volcanoes of Pluto


The more data from July’s Pluto flyby that comes down from the New Horizons spacecraft, the more interesting Pluto becomes. The latest finding? Possible ice volcanoes. Using images of Pluto’s surface to make 3-D topographic maps, scientists discovered that some mountains on Pluto, such as the informally named Piccard Mons and Wright Mons, had structures that suggested they could be cryovolcanoes that may have been active in the recent geological past.

5. Hidden Storm


Cameras aboard the Cassini spacecraft have been tracking an impressive cloud hovering over the south pole of Saturn’s moon Titan. But that cloud has turned out to be just the tip of the iceberg. A much more massive ice cloud system has been found lower in the stratosphere, peaking at an altitude of about 124 miles (200 kilometers).

Make sure to follow us on Tumblr for your regular dose of space: http://nasa.tumblr.com

r3ds3rpent
7 years ago

Visualizing Growth in the U.S.: Take a look at America’s Economic Growth [831 X 1200] CLICK HERE FOR MORE MAPS! thelandofmaps.tumblr.com

r3ds3rpent
7 years ago
Helping Hand


Robots, video games, and a radical new approach to treating stroke patients.

BY KAREN RUSSELL

In late October, when the Apple TV was relaunched, Bandit’s Shark Showdown was among the first apps designed for the platform. The game stars a young dolphin with anime-huge eyes, who battles hammerhead sharks with bolts of ruby light. There is a thrilling realism to the undulance of the sea: each movement a player makes in its midnight-blue canyons unleashes a web of fluming consequences. Bandit’s tail is whiplash-fast, and the sharks’ shadows glide smoothly over rocks. Every shark, fish, and dolphin is rigged with an invisible skeleton, their cartoonish looks belied by the programming that drives them—coding deeply informed by the neurobiology of action. The game’s design seems suspiciously sophisticated when compared with that of apps like Candy Crush Soda Saga and Dude Perfect 2.

Bandit’s Shark Showdown’s creators, Omar Ahmad, Kat McNally, and Promit Roy, work for the Johns Hopkins School of Medicine, and made the game in conjunction with a neuroscientist and neurologist, John Krakauer, who is trying to radically change the way we approach stroke rehabilitation. Ahmad told me that their group has two ambitions: to create a successful commercial game and to build “artistic technologies to help heal John’s patients.” A sister version of the game is currently being played by stroke patients with impaired arms. Using a robotic sling, patients learn to sync the movements of their arms to the leaping, diving dolphin; that motoric empathy, Krakauer hopes, will keep patients engaged in the immersive world of the game for hours, contracting their real muscles to move the virtual dolphin.

Many scientists co-opt existing technologies, like the Nintendo Wii or the Microsoft Kinect, for research purposes. But the dolphin simulation was built in-house at Johns Hopkins, and has lived simultaneously in the commercial and the medical worlds since its inception. “We depend on user feedback to improve the game for John’s stroke patients,” Ahmad said. “This can’t work without an iterative loop between the market and the hospital.”

In December, 2010, Krakauer arrived at Johns Hopkins. His space, a few doors from the Moore Clinic, an early leader in the treatment of AIDS, had been set up in the traditional way—a wet lab, with sinks and ventilation hoods. The research done in neurology departments is, typically, benchwork: “test tubes, cells, and mice,” as one scientist described it. But Krakauer, who studies the brain mechanisms that control our arm movements, uses human subjects. “You can learn a lot about the brain without imaging it, lesioning it, or recording it,” Krakauer told me. His simple, non-invasive experiments are designed to produce new insights into how the brain learns to control the body. “We think of behavior as being the fundamental unit of study, not the brain’s circuitry. You need to study the former very carefully so that you can even begin to interpret the latter.”

Krakauer wanted to expand the scope of the lab, arguing that the study of the brain should be done in collaboration with people rarely found on a medical campus: “Pixar-grade” designers, engineers, computer programmers, and artists. Shortly after Krakauer arrived, he founded the Brain, Learning, Animation, Movement lab, or BLAM! That provocative acronym is true to the spirit of the lab, whose goal is to break down boundaries between the “ordinarily siloed worlds of art, science, and industry,” Krakauer told me. He believes in “propinquity,” the ricochet of bright minds in a constrained space. He wanted to create a kind of “neuro Bell Labs,” where different kinds of experts would unite around a shared interest in movement. Bell Labs is arguably the most successful research laboratory of all time; it has produced eight Nobel Prizes, and inventions ranging from radio astronomy to Unix and the laser. Like Bell, BLAM! would pioneer both biomedical technologies and commercial products. By developing a “self-philanthropizing ecosystem,” Krakauer believed, his lab could gain some degree of autonomy from traditionally conservative funding structures, like the National Institutes of Health.

The first problem that BLAM! has addressed as a team is stroke rehabilitation. Eight hundred thousand people in the U.S. have strokes each year; it is the No. 1 cause of long-term disability. Most cases result from clots that stop blood from flowing to part of the brain, causing tissue to die. “Picture someone standing on a hose, and the patch of grass it watered dying almost immediately,” Steve Zeiler, a neurologist and a colleague of Krakauer’s, told me. Survivors generally suffer from hemiparesis, weakness on one side of the body. We are getting better at keeping people alive, but this means that millions of Americans are now living for years in what’s called “the chronic state” of stroke: their recovery has plateaued, their insurance has often stopped covering therapy, and they are left with a moderate to severe disability.

In 2010, Krakauer received a grant from the James S. McDonnell Foundation to conduct a series of studies exploring how patients recover in the first year after a stroke. He was already well established in the worlds of motor-control and stroke research. He had discovered that a patient’s recovery was closely linked to the degree of initial impairment, a “proportional recovery rule” that had a frightening implication: if you could use early measures of impairment to make accurate predictions about a patient’s recovery three months later, what did that say about conventional physical therapy? “It doesn’t reverse the impairment,” Krakauer said.

Nick Ward, a British stroke and neurorehabilitation specialist who also works on paretic arms, told me that the current model of rehabilitative therapy for the arm is “nihilistic.” A patient lucky enough to have good insurance typically receives an hour a day each of physical, occupational, and speech therapy in the weeks following a stroke. “The movement training we are delivering is occurring at such low doses that it has no discernible impact on impairment,” Krakauer told me. “The message to patients has been: ‘Listen, your arm is really bad, your arm isn’t going to get better, we’re not going to focus on your arm,’ ” Ward said. “It’s become accepted wisdom that the arm doesn’t do well. So why bother?”

Krakauer and his team are now engaged in a clinical trial that will test a new way of delivering rehabilitation, using robotics and the video game made by Ahmad, Roy, and McNally, who make up an “arts and engineering” group within the Department of Neurology. Krakauer hopes to significantly reduce patients’ impairment, and to demonstrate that the collaborative model of BLAM! is “the way to go” for the future study and treatment of brain disease.

Reza Shadmehr, a Johns Hopkins colleague and a leader in the field of human motor-control research, told me, “He’s trying to apply things that we have developed in basic science to actually help patients. And I know that’s what you’re supposed to do, but, by God, there are very few people who really do it.”

“You bank on your reputation, in the more conventional sense, to be allowed to take these risks,” Krakauer said. “I’m cashing in my chits to do something wild.”

In 1924, Charles Sherrington, one of the founders of modern neuroscience, said, “To move things is all that mankind can do; for such the sole executant is muscle, whether in whispering a syllable or in felling a forest.” For Sherrington, a human being was a human doing.

Yet the body often seems to go about its business without us. As a result, we may be tempted to underrate the “intelligence” of the motor system. There is a deep-seated tendency in our culture, Krakauer says, to dichotomize brains and brawn, cognition and movement. But he points out that even a movement as simple as reaching for a coffee cup requires an incredibly sophisticated set of computations. “Movement is the result of decisions, and the decisions you make are reflected in movements,” Krakauer told me.

Motor skills, like Stephen Curry’s jump shot, require the acquisition and manipulation of knowledge, just like those activities we deem to be headier pursuits, such as chess and astrophysics. “Working with one’s hands is working with one’s mind,” Krakauer said, but the distinction between skill and knowledge is an ancient bias that goes back to the Greeks, for whom techne, skill, was distinct from episteme, knowledge or science.


r3ds3rpent
7 years ago

50.1% of the US population lives in these 244 counties.

More similar maps >>

r3ds3rpent
7 years ago

(Image caption: Diagram of the research findings, taken from the article’s Table of Contents image. bFGF is produced in the injured zone of the cerebral cortex. Ror2 expression is induced in a population of the astrocytes that receive the bFGF signal, restarting their proliferation by accelerating the progression of their cell cycle.)

How brain tissue recovers after injury: the role of astrocytes

A research team led by Associate Professor Mitsuharu ENDO and Professor Yasuhiro MINAMI (both from the Department of Physiology and Cell Biology, Graduate School of Medicine, Kobe University) has pinpointed the mechanism underlying astrocyte-mediated restoration of brain tissue after an injury. This could lead to new treatments that encourage regeneration by limiting damage to neurons incurred by reduced blood supply or trauma. The findings were published on October 11 in the online version of GLIA.

When the brain is damaged by trauma or ischemia (restriction in blood supply), immune cells such as macrophages and lymphocytes dispose of the damaged neurons with an inflammatory response. However, an excessive inflammatory response can also harm healthy neurons.

Astrocytes are a type of glial cell*, and the most numerous cell within the human cerebral cortex. In addition to their supportive role in providing nutrients to neurons, studies have shown that they have various other functions, including the direct or active regulation of neuronal activities.

It has recently become clear that astrocytes also have an important function in the restoration of injured brain tissue. While astrocytes do not normally proliferate in healthy brains, they start to proliferate and increase their numbers around injured areas and minimize inflammation by surrounding the damaged neurons, other astrocytes, and inflammatory cells that have entered the damaged zone. Until now the mechanism that prompts astrocytes to proliferate in response to injury was unclear.

The research team focused on the fact that the astrocytes which proliferate around injured areas acquire characteristics similar to neural stem cells. The receptor tyrosine kinase Ror2, a cell surface protein, is highly expressed in neural stem cells in the developing brain. Normally the Ror2 gene is “switched off” within adult brains, but these findings showed that when the brain was injured, Ror2 was expressed in a certain population of the astrocytes around the injured area.

Ror2 is an important cell-surface protein that regulates the proliferation of neural stem cells, so the researchers proposed that Ror2 was regulating the proliferation of astrocytes around the injured areas. They tested this using model mice in which the Ror2 gene was not expressed in astrocytes. In these mice, the number of proliferating astrocytes after injury showed a remarkable decrease, and the density of astrocytes around the injury site was reduced. Using cultured astrocytes, the team analyzed the mechanism for activating the Ror2 gene, and ascertained that basic fibroblast growth factor (bFGF) can “switch on” Ror2 in some astrocytes.

This research showed that in injured brains, the astrocytes that show high expression of Ror2 induced by bFGF signaling are primarily responsible for starting proliferation. bFGF is produced by different cell types, including neurons and astrocytes in the injury zone that have escaped damage. Among the astrocytes that receive these bFGF signals around the injury zone, some express Ror2 and some do not. The fact that the number of proliferating astrocytes after brain injury declines with age raises the possibility that the population of astrocytes able to express Ror2 shrinks during aging, which could contribute to an increase in senile dementia. The researchers are now aiming to clarify the mechanism that creates these different astrocyte populations.

By artificially controlling the proliferation of astrocytes, it may become possible to minimize the damage caused to neurons by brain injuries and to establish new treatments that encourage regeneration of damaged brain areas.

*Glial cell: a catch-all term for non-neuronal cells that belong to the nervous system. They support neurons in various roles.

r3ds3rpent
7 years ago

# ‘tis but a scratch | Python

r3ds3rpent
7 years ago

Knights who say Ni


# ‘tis but a scratch | Python

r3ds3rpent
7 years ago

Awesome


Earth and Moon from Saturn

via reddit

r3ds3rpent
7 years ago

Model sheds light on purpose of inhibitory neurons

Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory have developed a new computational model of a neural circuit in the brain, which could shed light on the biological role of inhibitory neurons — neurons that keep other neurons from firing.

The model describes a neural circuit consisting of an array of input neurons and an equivalent number of output neurons. The circuit performs what neuroscientists call a “winner-take-all” operation, in which signals from multiple input neurons induce a signal in just one output neuron.

Using the tools of theoretical computer science, the researchers prove that, within the context of their model, a certain configuration of inhibitory neurons provides the most efficient means of enacting a winner-take-all operation. Because the model makes empirical predictions about the behavior of inhibitory neurons in the brain, it offers a good example of the way in which computational analysis could aid neuroscience.

The researchers presented their results at the conference on Innovations in Theoretical Computer Science. Nancy Lynch, the NEC Professor of Software Science and Engineering at MIT, is the senior author on the paper. She’s joined by Merav Parter, a postdoc in her group, and Cameron Musco, an MIT graduate student in electrical engineering and computer science.

For years, Lynch’s group has studied communication and resource allocation in ad hoc networks — networks whose members are continually leaving and rejoining. But recently, the team has begun using the tools of network analysis to investigate biological phenomena.

“There’s a close correspondence between the behavior of networks of computers or other devices like mobile phones and that of biological systems,” Lynch says. “We’re trying to find problems that can benefit from this distributed-computing perspective, focusing on algorithms for which we can prove mathematical properties.”

Artificial neurology

In recent years, artificial neural networks — computer models roughly based on the structure of the brain — have been responsible for some of the most rapid improvement in artificial-intelligence systems, from speech transcription to face recognition software.

An artificial neural network consists of “nodes” that, like individual neurons, have limited information-processing power but are densely interconnected. Data are fed into the first layer of nodes. If the data received by a given node meet some threshold criterion — for instance, if they exceed a particular value — the node “fires,” or sends signals along all of its outgoing connections.

Each of those outgoing connections, however, has an associated “weight,” which can augment or diminish a signal. Each node in the next layer of the network receives weighted signals from multiple nodes in the first layer; it adds them together, and again, if their sum exceeds some threshold, it fires. Its outgoing signals pass to the next layer, and so on.
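Since this is a code-minded blog, here is what that weighted-sum-and-threshold step looks like in a few lines of Python. This is a toy sketch with made-up weights and thresholds, not code from the MIT model:

```python
def layer_output(inputs, weights, thresholds):
    """One layer of a simple threshold network.

    inputs: firing states (0 or 1) from the previous layer.
    weights[j][i]: weight on the connection from input i to node j.
    thresholds[j]: firing threshold for node j.
    Returns the firing states (0 or 1) of this layer's nodes.
    """
    outputs = []
    for node_weights, threshold in zip(weights, thresholds):
        # Weighted sum of the incoming signals...
        total = sum(w * x for w, x in zip(node_weights, inputs))
        # ...and the node "fires" only if the sum exceeds its threshold.
        outputs.append(1 if total > threshold else 0)
    return outputs

# Two inputs feeding two nodes: with these (arbitrary) weights,
# node 0 fires if either input fires, node 1 only if both do.
weights = [[1.0, 1.0], [1.0, 1.0]]
thresholds = [0.5, 1.5]
print(layer_output([1, 0], weights, thresholds))  # [1, 0]
print(layer_output([1, 1], weights, thresholds))  # [1, 1]
```

Training, as described below, is just the process of nudging those weights and thresholds until the final layer's outputs are consistently right.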

In artificial-intelligence applications, a neural network is “trained” on sample data, constantly adjusting its weights and firing thresholds until the output of its final layer consistently represents the solution to some computational problem.

Biological plausibility

Lynch, Parter, and Musco made several modifications to this design to make it more biologically plausible. The first was the addition of inhibitory “neurons.” In a standard artificial neural network, the values of the weights on the connections are usually positive or capable of being either positive or negative. But in the brain, some neurons appear to play a purely inhibitory role, preventing other neurons from firing. The MIT researchers modeled those neurons as nodes whose connections have only negative weights.

Many artificial-intelligence applications also use “feed-forward” networks, in which signals pass through the network in only one direction, from the first layer, which receives input data, to the last layer, which provides the result of a computation. But connections in the brain are much more complex. Lynch, Parter, and Musco’s circuit thus includes feedback: Signals from the output neurons pass to the inhibitory neurons, whose output in turn passes back to the output neurons. The signaling of the output neurons also feeds back on itself, which proves essential to enacting the winner-take-all strategy.

Finally, the MIT researchers’ network is probabilistic. In a typical artificial neural net, if a node’s input values exceed some threshold, the node fires. But in the brain, increasing the strength of the signal traveling over an input neuron only increases the chances that an output neuron will fire. The same is true of the nodes in the researchers’ model. Again, this modification is crucial to enacting the winner-take-all strategy.

In the researchers’ model, the number of input and output neurons is fixed, and the execution of the winner-take-all computation is purely the work of a bank of auxiliary neurons. “We are trying to see the trade-off between the computational time to solve a given problem and the number of auxiliary neurons,” Parter explains. “We consider neurons to be a resource; we don’t want to spend too much of it.”

Inhibition’s virtues

Parter and her colleagues were able to show that with only one inhibitory neuron, it’s impossible, in the context of their model, to enact the winner-take-all strategy. But two inhibitory neurons are sufficient. The trick is that one of the inhibitory neurons — which the researchers call a convergence neuron — sends a strong inhibitory signal if more than one output neuron is firing. The other inhibitory neuron — the stability neuron — sends a much weaker signal as long as any output neurons are firing.

The convergence neuron drives the circuit to select a single output neuron, at which point it stops firing; the stability neuron prevents a second output neuron from becoming active once the convergence neuron has been turned off. The self-feedback circuits from the output neurons enhance this effect. The longer an output neuron has been turned off, the more likely it is to remain off; the longer it’s been on, the more likely it is to remain on. Once a single output neuron has been selected, its self-feedback circuit ensures that it can overcome the inhibition of the stability neuron.

Without randomness, however, the circuit won’t converge to a single output neuron: Any setting of the inhibitory neurons’ weights will affect all the output neurons equally. “You need randomness to break the symmetry,” Parter explains.
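A toy simulation makes the mechanism concrete. The sketch below is my own illustration, not the authors’ model — the numbers (connection strengths, the sigmoid slope) are invented — but it has the ingredients described above: self-excitation on the output neurons, a strong “convergence” brake active when more than one output fires, a weak “stability” brake active when any output fires, and probabilistic firing to break the symmetry:

```python
import math
import random

def winner_take_all(n_outputs=5, max_steps=500, seed=1):
    """Toy probabilistic winner-take-all circuit (illustrative numbers only)."""
    rng = random.Random(seed)
    firing = [1] * n_outputs                  # all output neurons start active
    for _ in range(max_steps):
        active = sum(firing)
        convergence = 1 if active > 1 else 0  # strong brake: >1 output firing
        stability = 1 if active >= 1 else 0   # weak brake: any output firing
        new_firing = []
        for f in firing:
            # Net input: self-excitation minus the two inhibitory signals.
            net = 2.0 * f - 3.0 * convergence - 0.5 * stability
            # Probabilistic firing: stronger input -> higher chance of firing.
            p = 1.0 / (1.0 + math.exp(-4.0 * net))
            new_firing.append(1 if rng.random() < p else 0)
        firing = new_firing
        if sum(firing) == 1:                  # a single winner: stable state
            break
    return firing

print(winner_take_all())
```

Once exactly one output is firing, its self-excitation (net input 2.0 − 0.5) outweighs the weak stability brake, so it keeps firing with high probability while the others stay quiet. Make the firing deterministic instead of probabilistic, and every output receives identical input at every step, so the circuit can never single one out.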

The researchers were able to determine the minimum number of auxiliary neurons required to guarantee a particular convergence speed and the maximum convergence speed possible given a particular number of auxiliary neurons.

Adding more convergence neurons increases the convergence speed, but only up to a point. For instance, with 100 input neurons, two or three convergence neurons are all you need; adding a fourth doesn’t improve efficiency. And just one stability neuron is already optimal.

But perhaps more intriguingly, the researchers showed that including excitatory neurons — neurons that stimulate, rather than inhibit, other neurons’ firing — as well as inhibitory neurons among the auxiliary neurons cannot improve the efficiency of the circuit. Similarly, any arrangement of inhibitory neurons that doesn’t observe the distinction between convergence and stability neurons will be less efficient than one that does.

Assuming, then, that evolution tends to find efficient solutions to engineering problems, the model suggests both an answer to the question of why inhibitory neurons are found in the brain and a tantalizing question for empirical research: Do real inhibitory neurons exhibit the same division between convergence neurons and stability neurons?

“This computation of winner-take-all is quite a broad and useful motif that we see throughout the brain,” says Saket Navlakha, an assistant professor in the Integrative Biology Laboratory at the Salk Institute for Biological Studies. “In many sensory systems — for example, the olfactory system — it’s used to generate sparse codes.”

“There are many classes of inhibitory neurons that we’ve discovered, and a natural next step would be to see if some of these classes map on to the ones predicted in this study,” he adds.

“There’s a lot of work in neuroscience on computational models that take into account much more detail about not just inhibitory neurons but what proteins drive these neurons and so on,” says Ziv Bar-Joseph, a professor of computer science at Carnegie Mellon University. “Nancy is taking a global view of the network rather than looking at the specific details. In return she gets the ability to look at some larger-picture aspects. How many inhibitory neurons do you really need? Why do we have so few compared to the excitatory neurons? The unique aspect here is that this global-scale modeling gives you a much higher-level type of prediction.”

r3ds3rpent
7 years ago

The American Commute by Alasdair Rae.

r3ds3rpent
7 years ago

(Image caption: The prefrontal cortex connects to a very specific region of the brainstem (the PAG) through prefrontal cortical neurons: those labeled in purple directly project to the PAG and control our instinctive behaviours. Credit: EMBL/Livia Marrone)

Neural connection keeps instincts in check

From fighting the urge to hit someone to resisting the temptation to run off stage instead of giving that public speech, we are often confronted with situations where we have to curb our instincts. Scientists at EMBL have traced exactly which neuronal projections prevent social animals like us from acting out such impulses. The study, published online in Nature Neuroscience, could have implications for schizophrenia and mood disorders like depression.

“Instincts like fear and sex are important, but you don’t want to be acting on them all the time,” says Cornelius Gross, who led the work at EMBL. “We need to be able to dynamically control our instinctive behaviours, depending on the situation.”

The driver of our instincts is the brainstem – the region at the very base of your brain, just above the spinal cord. Scientists have known for some time that another brain region, the prefrontal cortex, plays a role in keeping those instincts in check. But exactly how the prefrontal cortex puts a brake on the brainstem has remained unclear.

Now, Gross and colleagues have literally found the connection between prefrontal cortex and brainstem. The EMBL scientists teamed up with Tiago Branco’s lab at MRC LMB, and traced connections between neurons in a mouse brain. They discovered that the prefrontal cortex makes prominent connections directly to the brainstem.

Gross and colleagues went on to confirm that this physical connection was the brake that inhibits instinctive behaviour. They found that in mice that have been repeatedly defeated by another mouse – the murine equivalent to being bullied – this connection weakens, and the mice act more scared. The scientists found that they could elicit those same fearful behaviours in mice that had never been bullied, simply by using drugs to block the connection between prefrontal cortex and brainstem.

These findings provide an anatomical explanation for why it’s much easier to stop yourself from hitting someone than it is to stop yourself from feeling aggressive. The scientists found that the connection from the prefrontal cortex is to a very specific region of the brainstem, called the PAG, which is responsible for the acting out of our instincts. However, it doesn’t affect the hypothalamus, the region that controls feelings and emotions. So the prefrontal cortex keeps behaviour in check, but doesn’t affect the underlying instinctive feeling: it stops you from running off-stage, but doesn’t abate the butterflies in your stomach.

The work has implications for schizophrenia and mood disorders such as depression, which have been linked to problems with prefrontal cortex function and maturation.

“One fascinating implication we’re looking at now is that we know the pre-frontal cortex matures during adolescence. Kids are really bad at inhibiting their instincts; they don’t have this control,” says Gross, “so we’re trying to figure out how this inhibition comes about, especially as many mental illnesses like mood disorders are typically adult-onset.”


r3ds3rpent
8 years ago
WebGL Scroll Spiral | Codrops
A couple of decorative WebGL background scroll effects for websites powered by regl. The idea is to twist some images and hexagonal grid patterns on scroll.
8 years ago
Model sheds light on purpose of inhibitory neurons

Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory have developed a new computational model of a neural circuit in the brain, which could shed light on the biological role of inhibitory neurons — neurons that keep other neurons from firing.

The model describes a neural circuit consisting of an array of input neurons and an equivalent number of output neurons. The circuit performs what neuroscientists call a “winner-take-all” operation, in which signals from multiple input neurons induce a signal in just one output neuron.

Using the tools of theoretical computer science, the researchers prove that, within the context of their model, a certain configuration of inhibitory neurons provides the most efficient means of enacting a winner-take-all operation. Because the model makes empirical predictions about the behavior of inhibitory neurons in the brain, it offers a good example of the way in which computational analysis could aid neuroscience.

The researchers presented their results at the conference on Innovations in Theoretical Computer Science. Nancy Lynch, the NEC Professor of Software Science and Engineering at MIT, is the senior author on the paper. She’s joined by Merav Parter, a postdoc in her group, and Cameron Musco, an MIT graduate student in electrical engineering and computer science.

For years, Lynch’s group has studied communication and resource allocation in ad hoc networks — networks whose members are continually leaving and rejoining. But recently, the team has begun using the tools of network analysis to investigate biological phenomena.

“There’s a close correspondence between the behavior of networks of computers or other devices like mobile phones and that of biological systems,” Lynch says. “We’re trying to find problems that can benefit from this distributed-computing perspective, focusing on algorithms for which we can prove mathematical properties.”

Artificial neurology

In recent years, artificial neural networks — computer models roughly based on the structure of the brain — have been responsible for some of the most rapid improvement in artificial-intelligence systems, from speech transcription to face recognition software.

An artificial neural network consists of “nodes” that, like individual neurons, have limited information-processing power but are densely interconnected. Data are fed into the first layer of nodes. If the data received by a given node meet some threshold criterion — for instance, if it exceeds a particular value — the node “fires,” or sends signals along all of its outgoing connections.

Each of those outgoing connections, however, has an associated “weight,” which can augment or diminish a signal. Each node in the next layer of the network receives weighted signals from multiple nodes in the first layer; it adds them together, and again, if their sum exceeds some threshold, it fires. Its outgoing signals pass to the next layer, and so on.
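The weighted-sum-and-fire step described above is easy to sketch in code. The following Python function is an illustrative toy, not part of the MIT model; the weight and threshold values in the usage example are invented for the demonstration:

```python
def layer_output(inputs, weights, threshold):
    """Compute one layer of a simple threshold network.

    Each output node takes the weighted sum of all inputs and
    'fires' (returns 1) only if that sum exceeds the threshold.
    """
    outputs = []
    for node_weights in weights:  # one weight vector per output node
        total = sum(w * x for w, x in zip(node_weights, inputs))
        outputs.append(1 if total > threshold else 0)
    return outputs

# With these illustrative weights, the single output node acts
# like an AND gate: it fires only when both inputs are active.
print(layer_output([1, 1], [[0.6, 0.6]], 1.0))  # [1]
print(layer_output([1, 0], [[0.6, 0.6]], 1.0))  # [0]
```

Stacking such layers, and replacing the hard threshold with a trainable one per node, gives the standard feed-forward networks used in AI applications.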

In artificial-intelligence applications, a neural network is “trained” on sample data, constantly adjusting its weights and firing thresholds until the output of its final layer consistently represents the solution to some computational problem.

Biological plausibility

Lynch, Parter, and Musco made several modifications to this design to make it more biologically plausible. The first was the addition of inhibitory “neurons.” In a standard artificial neural network, the values of the weights on the connections are usually positive or capable of being either positive or negative. But in the brain, some neurons appear to play a purely inhibitory role, preventing other neurons from firing. The MIT researchers modeled those neurons as nodes whose connections have only negative weights.

Many artificial-intelligence applications also use “feed-forward” networks, in which signals pass through the network in only one direction, from the first layer, which receives input data, to the last layer, which provides the result of a computation. But connections in the brain are much more complex. Lynch, Parter, and Musco’s circuit thus includes feedback: Signals from the output neurons pass to the inhibitory neurons, whose output in turn passes back to the output neurons. The signaling of the output neurons also feeds back on itself, which proves essential to enacting the winner-take-all strategy.

Finally, the MIT researchers’ network is probabilistic. In a typical artificial neural net, if a node’s input values exceed some threshold, the node fires. But in the brain, increasing the strength of the signal traveling over an input neuron only increases the chances that an output neuron will fire. The same is true of the nodes in the researchers’ model. Again, this modification is crucial to enacting the winner-take-all strategy.

In the researchers’ model, the number of input and output neurons is fixed, and the execution of the winner-take-all computation is purely the work of a bank of auxiliary neurons. “We are trying to see the trade-off between the computational time to solve a given problem and the number of auxiliary neurons,” Parter explains. “We consider neurons to be a resource; we don’t want to spend too much of it.”

Inhibition’s virtues

Parter and her colleagues were able to show that with only one inhibitory neuron, it’s impossible, in the context of their model, to enact the winner-take-all strategy. But two inhibitory neurons are sufficient. The trick is that one of the inhibitory neurons — which the researchers call a convergence neuron — sends a strong inhibitory signal if more than one output neuron is firing. The other inhibitory neuron — the stability neuron — sends a much weaker signal as long as any output neurons are firing.

The convergence neuron drives the circuit to select a single output neuron, at which point it stops firing; the stability neuron prevents a second output neuron from becoming active once the convergence neuron has been turned off. The self-feedback circuits from the output neurons enhance this effect. The longer an output neuron has been turned off, the more likely it is to remain off; the longer it’s been on, the more likely it is to remain on. Once a single output neuron has been selected, its self-feedback circuit ensures that it can overcome the inhibition of the stability neuron.

Without randomness, however, the circuit won’t converge to a single output neuron: Any setting of the inhibitory neurons’ weights will affect all the output neurons equally. “You need randomness to break the symmetry,” Parter explains.
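Putting the pieces together, the circuit can be sketched as a small stochastic simulation. This is a minimal illustration of the mechanism described above, not the authors’ model: all the numbers (inhibition strengths, self-excitation weight, input drives) are invented for the sketch.

```python
import random

def winner_take_all(inputs, steps=200, seed=0):
    """Toy probabilistic winner-take-all circuit.

    A 'convergence' neuron strongly inhibits all outputs while more
    than one is firing; a 'stability' neuron weakly inhibits them
    while any is firing; each output also excites itself. Firing is
    probabilistic: a stronger net drive means a higher chance to fire.
    """
    rng = random.Random(seed)
    n = len(inputs)
    firing = [False] * n

    for _ in range(steps):
        active = sum(firing)
        convergence = 2.0 if active > 1 else 0.0   # strong, only when >1 fire
        stability = 0.7 if active >= 1 else 0.0    # weak, while any fires

        new_firing = []
        for i in range(n):
            drive = (inputs[i] + (1.5 if firing[i] else 0.0)
                     - convergence - stability)
            p = min(1.0, max(0.0, drive))          # clamp drive to a probability
            new_firing.append(rng.random() < p)
        firing = new_firing

    return firing

# Equal inputs: randomness breaks the tie, and the circuit settles
# with exactly one output neuron still firing.
winners = winner_take_all([0.6, 0.6, 0.6])
print(sum(winners))
```

The key ingredients from the article are all present: feedback from the output neurons to the two inhibitory neurons, self-feedback on the outputs (which makes the single-winner state stable), and probabilistic firing to break the symmetry between equal inputs.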

The researchers were able to determine the minimum number of auxiliary neurons required to guarantee a particular convergence speed and the maximum convergence speed possible given a particular number of auxiliary neurons.

Adding more convergence neurons increases the convergence speed, but only up to a point. For instance, with 100 input neurons, two or three convergence neurons are all you need; adding a fourth doesn’t improve efficiency. And just one stability neuron is already optimal.

But perhaps more intriguingly, the researchers showed that including excitatory neurons — neurons that stimulate, rather than inhibit, other neurons’ firing — as well as inhibitory neurons among the auxiliary neurons cannot improve the efficiency of the circuit. Similarly, any arrangement of inhibitory neurons that doesn’t observe the distinction between convergence and stability neurons will be less efficient than one that does.

Assuming, then, that evolution tends to find efficient solutions to engineering problems, the model suggests both an answer to the question of why inhibitory neurons are found in the brain and a tantalizing question for empirical research: Do real inhibitory neurons exhibit the same division between convergence neurons and stability neurons?

“This computation of winner-take-all is quite a broad and useful motif that we see throughout the brain,” says Saket Navlakha, an assistant professor in the Integrative Biology Laboratory at the Salk Institute for Biological Studies. “In many sensory systems — for example, the olfactory system — it’s used to generate sparse codes.”

“There are many classes of inhibitory neurons that we’ve discovered, and a natural next step would be to see if some of these classes map on to the ones predicted in this study,” he adds.

“There’s a lot of work in neuroscience on computational models that take into account much more detail about not just inhibitory neurons but what proteins drive these neurons and so on,” says Ziv Bar-Joseph, a professor of computer science at Carnegie Mellon University. “Nancy is taking a global view of the network rather than looking at the specific details. In return she gets the ability to look at some larger-picture aspects. How many inhibitory neurons do you really need? Why do we have so few compared to the excitatory neurons? The unique aspect here is that this global-scale modeling gives you a much higher-level type of prediction.”

r3ds3rpent
8 years ago
French Presidential Election, 2017, Second Round Results by department.

r3ds3rpent
8 years ago
Earth and Moon from Saturn

via reddit

r3ds3rpent
8 years ago

A critical moment in American history. PAY ATTENTION: President Trump’s dizzying series of interviews. https://youtu.be/jSDj9E2dgWs

r3ds3rpent
8 years ago
# ‘tis but a scratch | Python

r3ds3rpent
8 years ago
Miss El Salvador

r3ds3rpent
8 years ago
Las historias prohibidas del pulgarcito - Roque Dalton

r3ds3rpent
8 years ago

Star from the Lizard Constellation Photobombs Hubble Observation

In space, being outshone is an occupational hazard. This NASA/ESA Hubble Space Telescope image captures a galaxy named NGC 7250. Despite being remarkable in its own right, with bright bursts of star formation and recorded supernova explosions, it blends into the background somewhat thanks to the gloriously bright star hogging the limelight next to it.

That bright object is a single, little-studied star named TYC 3203-450-1, located in the constellation of Lacerta (The Lizard), and it lies far closer to us than the galaxy. Only because of this proximity can a normal star outshine an entire galaxy of billions of stars. Astronomers studying distant objects call these “foreground stars,” and they are often not very happy about them, as their bright light contaminates the faint light from the more distant and interesting objects they actually want to study.

In this case, TYC 3203-450-1 is a million times closer than NGC 7250, which lies more than 45 million light-years away from us. If the star were at the same distance as NGC 7250, it would hardly be visible in this image.

Credit: ESA/Hubble & NASA

r3ds3rpent
8 years ago
JavaScript: The Keyword ‘This’ for Beginners – Hacker Noon
Understanding the keyword this in JavaScript, and what it is referring to, can be a little complicated at times. Fortunately, there are…
r3ds3rpent
8 years ago

Scientists Take Step Toward Mapping How the Brain Stores Memories

A new study led by scientists at The Scripps Research Institute (TSRI) sheds light on how the brain stores memories. The research, published recently in the journal eLife, is the first to demonstrate that the same brain region can both motivate a learned behavior and suppress that same behavior.

“We behave the way we do in a specific situation because we have learned an association—a memory—tying an environmental cue to a behavior,” said Nobuyoshi Suto, TSRI Assistant Professor of Molecular and Cellular Neuroscience, who co-led the study with TSRI Professor Friedbert Weiss and Bruce Hope, a principal investigator at the National Institutes of Health’s National Institute on Drug Abuse. “This study provides causal evidence that one brain region can store different memories.”

Scientists know that our memories are stored in specific areas of the brain, but there has been some debate over whether a single brain region can store different memories that control opposing behavior. For example, can the same region store the meanings of red and green traffic lights—the memories that make a driver stop a car at a red light, then hit the gas pedal at a green light?

Suto’s research focuses specifically on the brain circuits that control motivation. In the new study, he and his colleagues set out to examine how rats learn to press levers to get sugar water—and where they store those motivational memories.

The researchers first trained the rats to press a lever to get sugar water. They then trained the rats to recognize two colored lights: one signaling the availability of the sugar reward, and the other signaling the omission of this reward. As a consequence, the animals learned to change their behavior in response to these cues: the cue signaling availability promoted lever-pressing, while the cue signaling omission suppressed this reward-seeking behavior.

Based on previous electrophysiology studies, Suto and his colleagues speculated that memories associated with these two lessons were both stored in a region of the brain called the infralimbic cortex.

“We’ve seen correlational evidence, where we see brain activity together with a behavior, and we connect the dots to say it must be this brain activity causing this behavior,” said Suto. “But such correlational evidence alone cannot establish the causality—proof that the specific brain activity is directly controlling the specific behavior.”

So the scientists took their experiment a step further. Using a pharmacogenetics approach, the researchers selectively switched off specific groups of brain cells – called neural ensembles – that react to the cues signaling either reward availability or reward omission.

The experiments demonstrated that distinct neural ensembles in the same region directly controlled the promotion of reward-seeking or the suppression of that behavior. Without those neurons firing, the rats no longer performed the behavior motivated by the memories in those ensembles. At last, the scientists appeared to prove the causality.

Suto called the findings a step towards understanding how different memories are stored in the brain. He said the research could also be relevant for studying which neurons are activated to motivate—and prevent—drug relapse. He said he’d next like to look at what other regions in the brain these infralimbic cortex neurons may be communicating with. “Brain regions don’t exist in a vacuum,” he said. In addition, he also would like to determine the brain chemicals mediating the promotion or suppression of reward seeking.

r3ds3rpent
8 years ago
r3ds3rpent
8 years ago
// let it go | Swift

r3ds3rpent
8 years ago
Geos-1 being tested at ESTEC

A vintage view of ESA’s Geos-1 satellite, launched 40 years ago this month, being prepared for flight at ESA’s technical centre in the Netherlands.

r3ds3rpent
8 years ago
What movie best represents each US state (according to subtonix)?
