Solar System: 5 Things To Know This Week

Our solar system is huge, so let us break it down for you. Here are 5 things to know this week: 

1. Make a Wish

The annual Leonids meteor shower is not known for a high number of “shooting stars” (expect as many as 15 an hour), but the meteors it does produce are usually bright and colorful. They’re fast, too: Leonids travel at 71 kilometers (44 miles) per second, making them among the fastest of any annual shower. This year the Leonids will peak around midnight on Nov. 17-18. The crescent moon will set before midnight, leaving dark skies for watching. Get more viewing tips HERE.

2. Back to the Beginning

Our Dawn mission to the dwarf planet Ceres is really a journey to the beginning of the solar system, since Ceres acts as a kind of time capsule from the formation of the asteroid belt. If you’ll be in the Washington, DC area on Nov. 19, you can catch a presentation by Lucy McFadden, a co-investigator on the Dawn mission, who will discuss what we’ve discovered so far at this tiny but captivating world. Find out how to attend HERE.

3. Keep Your Eye on This Spot

The Juno spacecraft is on target for a July 2016 arrival at the giant planet Jupiter. But right now, your help is needed. Members of the Juno team are calling on all amateur astronomers to upload their telescopic images and data of Jupiter, which will help the team plan its observations. Join in HERE.

4. The Ice Volcanoes of Pluto

The more data from July’s Pluto flyby that comes down from the New Horizons spacecraft, the more interesting Pluto becomes. The latest finding? Possible ice volcanoes. Using images of Pluto’s surface to make 3-D topographic maps, scientists discovered that some mountains on Pluto, such as the informally named Piccard Mons and Wright Mons, had structures that suggested they could be cryovolcanoes that may have been active in the recent geological past.

5. Hidden Storm

Cameras aboard the Cassini spacecraft have been tracking an impressive cloud hovering over the south pole of Saturn’s moon Titan. But that cloud has turned out to be just the tip of the iceberg. A much more massive ice cloud system has been found lower in the stratosphere, peaking at an altitude of about 124 miles (200 kilometers).

Make sure to follow us on Tumblr for your regular dose of space: http://nasa.tumblr.com

More Posts from R3ds3rpent and Others

8 years ago
Earth And Moon From Saturn

via reddit

7 years ago
Visualizing Growth in the U.S.: Take a look at America’s Economic Growth [831 X 1200] CLICK HERE FOR MORE MAPS! thelandofmaps.tumblr.com

8 years ago
Model Sheds Light On Purpose Of Inhibitory Neurons

Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory have developed a new computational model of a neural circuit in the brain, which could shed light on the biological role of inhibitory neurons — neurons that keep other neurons from firing.

The model describes a neural circuit consisting of an array of input neurons and an equivalent number of output neurons. The circuit performs what neuroscientists call a “winner-take-all” operation, in which signals from multiple input neurons induce a signal in just one output neuron.

Using the tools of theoretical computer science, the researchers prove that, within the context of their model, a certain configuration of inhibitory neurons provides the most efficient means of enacting a winner-take-all operation. Because the model makes empirical predictions about the behavior of inhibitory neurons in the brain, it offers a good example of the way in which computational analysis could aid neuroscience.

The researchers presented their results at the conference on Innovations in Theoretical Computer Science. Nancy Lynch, the NEC Professor of Software Science and Engineering at MIT, is the senior author on the paper. She’s joined by Merav Parter, a postdoc in her group, and Cameron Musco, an MIT graduate student in electrical engineering and computer science.

For years, Lynch’s group has studied communication and resource allocation in ad hoc networks — networks whose members are continually leaving and rejoining. But recently, the team has begun using the tools of network analysis to investigate biological phenomena.

“There’s a close correspondence between the behavior of networks of computers or other devices like mobile phones and that of biological systems,” Lynch says. “We’re trying to find problems that can benefit from this distributed-computing perspective, focusing on algorithms for which we can prove mathematical properties.”

Artificial neurology

In recent years, artificial neural networks — computer models roughly based on the structure of the brain — have been responsible for some of the most rapid improvement in artificial-intelligence systems, from speech transcription to face recognition software.

An artificial neural network consists of “nodes” that, like individual neurons, have limited information-processing power but are densely interconnected. Data are fed into the first layer of nodes. If the data received by a given node meet some threshold criterion — for instance, if it exceeds a particular value — the node “fires,” or sends signals along all of its outgoing connections.

Each of those outgoing connections, however, has an associated “weight,” which can augment or diminish a signal. Each node in the next layer of the network receives weighted signals from multiple nodes in the first layer; it adds them together, and again, if their sum exceeds some threshold, it fires. Its outgoing signals pass to the next layer, and so on.
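The weighted-sum-and-threshold step described above can be sketched in a few lines of Python. This is a toy illustration only; the weights, inputs, and thresholds below are invented for the example:

```python
import numpy as np

def layer(inputs, weights, thresholds):
    """One layer of threshold units: each node sums its weighted
    inputs and fires (outputs 1) only if the sum exceeds its threshold."""
    sums = weights @ inputs                    # weighted sum for every node
    return (sums > thresholds).astype(int)

# Toy network: 3 input nodes feeding 2 output nodes.
x = np.array([1, 0, 1])
W = np.array([[0.5, 0.2, 0.4],
              [0.1, 0.9, 0.1]])
theta = np.array([0.6, 0.5])
print(layer(x, W, theta))  # → [1 0]: node 0 fires (0.9 > 0.6), node 1 doesn't (0.2 < 0.5)
```

Stacking calls to `layer`, feeding each output into the next layer's input, gives the multi-layer feed-forward pass described above.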

In artificial-intelligence applications, a neural network is “trained” on sample data, constantly adjusting its weights and firing thresholds until the output of its final layer consistently represents the solution to some computational problem.

Biological plausibility

Lynch, Parter, and Musco made several modifications to this design to make it more biologically plausible. The first was the addition of inhibitory “neurons.” In a standard artificial neural network, the values of the weights on the connections are usually positive or capable of being either positive or negative. But in the brain, some neurons appear to play a purely inhibitory role, preventing other neurons from firing. The MIT researchers modeled those neurons as nodes whose connections have only negative weights.

Many artificial-intelligence applications also use “feed-forward” networks, in which signals pass through the network in only one direction, from the first layer, which receives input data, to the last layer, which provides the result of a computation. But connections in the brain are much more complex. Lynch, Parter, and Musco’s circuit thus includes feedback: Signals from the output neurons pass to the inhibitory neurons, whose output in turn passes back to the output neurons. The signaling of the output neurons also feeds back on itself, which proves essential to enacting the winner-take-all strategy.

Finally, the MIT researchers’ network is probabilistic. In a typical artificial neural net, if a node’s input values exceed some threshold, the node fires. But in the brain, increasing the strength of the signal traveling over an input neuron only increases the chances that an output neuron will fire. The same is true of the nodes in the researchers’ model. Again, this modification is crucial to enacting the winner-take-all strategy.
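Both modifications — purely inhibitory connections as negative weights, and probabilistic rather than deterministic firing — can be sketched as a single toy unit. This is an illustrative sketch, not the paper's construction; the logistic squashing function and all the weights below are assumptions made for the example:

```python
import math
import random

def probabilistic_unit(inputs, weights, rng):
    """A probabilistic threshold unit: a stronger total input only raises
    the *chance* of firing. Inhibitory connections carry negative weights."""
    total = sum(w * x for w, x in zip(weights, inputs))
    p = 1.0 / (1.0 + math.exp(-total))   # squash the input into a probability
    return 1 if rng.random() < p else 0

rng = random.Random(0)
# Two excitatory inputs plus one strongly inhibitory input (weight -6):
# the inhibitory signal pulls the firing probability near zero (p ≈ 0.08).
print(probabilistic_unit([1, 1, 1], [2.0, 1.5, -6.0], rng))
```

With the inhibitory input removed, the same unit's total input rises to 3.5 and it fires with probability about 0.97 — the inhibitory connection only ever lowers the chance of firing, mirroring the purely inhibitory role described above.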

In the researchers’ model, the number of input and output neurons is fixed, and the execution of the winner-take-all computation is purely the work of a bank of auxiliary neurons. “We are trying to see the trade-off between the computational time to solve a given problem and the number of auxiliary neurons,” Parter explains. “We consider neurons to be a resource; we don’t want to spend too much of it.”

Inhibition’s virtues

Parter and her colleagues were able to show that with only one inhibitory neuron, it’s impossible, in the context of their model, to enact the winner-take-all strategy. But two inhibitory neurons are sufficient. The trick is that one of the inhibitory neurons — which the researchers call a convergence neuron — sends a strong inhibitory signal if more than one output neuron is firing. The other inhibitory neuron — the stability neuron — sends a much weaker signal as long as any output neurons are firing.

The convergence neuron drives the circuit to select a single output neuron, at which point it stops firing; the stability neuron prevents a second output neuron from becoming active once the convergence neuron has been turned off. The self-feedback circuits from the output neurons enhance this effect. The longer an output neuron has been turned off, the more likely it is to remain off; the longer it’s been on, the more likely it is to remain on. Once a single output neuron has been selected, its self-feedback circuit ensures that it can overcome the inhibition of the stability neuron.

Without randomness, however, the circuit won’t converge to a single output neuron: Any setting of the inhibitory neurons’ weights will affect all the output neurons equally. “You need randomness to break the symmetry,” Parter explains.
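That dynamic — strong convergence inhibition while several outputs fire, weak stability inhibition while any output fires, self-feedback on each output, and noise to break the symmetry — can be illustrated with a small simulation. All the weights, the noise level, and the update rule here are invented for illustration and are not the paper's actual construction:

```python
import numpy as np

# Toy winner-take-all circuit: n output neurons with self-feedback, one
# strong "convergence" inhibitor active when more than one output fires,
# and one weak "stability" inhibitor active whenever any output fires.
# Gaussian noise on each input stands in for the model's randomness.
rng = np.random.default_rng(0)

n = 10
DRIVE, SELF_W = 2.0, 4.0            # external drive and self-feedback weight
CONV_W, STAB_W = -3.0, -3.0         # convergence and stability inhibition

state = np.ones(n, dtype=int)       # all outputs start out firing
stable = 0
for _ in range(5000):
    k = state.sum()
    inhibition = (CONV_W if k > 1 else 0.0) + (STAB_W if k >= 1 else 0.0)
    # While several outputs fire, a firing neuron's net input is ~0, so
    # noise decides who survives; a single survivor sits at +3 and holds,
    # while silent neurons sit at -1 and stay suppressed.
    total = DRIVE + SELF_W * state + inhibition + rng.normal(0, 0.3, n)
    state = (total > 0).astype(int)
    stable = stable + 1 if state.sum() == 1 else 0
    if stable >= 20:                # a single winner has held for 20 steps
        break

print(int(state.sum()))             # exactly one output neuron remains active
```

Setting the noise term to zero makes every output neuron receive an identical input, so the population either all survives or all dies each round and never settles on a winner — the symmetry-breaking role of randomness that Parter describes.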

The researchers were able to determine the minimum number of auxiliary neurons required to guarantee a particular convergence speed and the maximum convergence speed possible given a particular number of auxiliary neurons.

Adding more convergence neurons increases the convergence speed, but only up to a point. For instance, with 100 input neurons, two or three convergence neurons are all you need; adding a fourth doesn’t improve efficiency. And just one stability neuron is already optimal.

But perhaps more intriguingly, the researchers showed that including excitatory neurons — neurons that stimulate, rather than inhibit, other neurons’ firing — as well as inhibitory neurons among the auxiliary neurons cannot improve the efficiency of the circuit. Similarly, any arrangement of inhibitory neurons that doesn’t observe the distinction between convergence and stability neurons will be less efficient than one that does.

Assuming, then, that evolution tends to find efficient solutions to engineering problems, the model suggests both an answer to the question of why inhibitory neurons are found in the brain and a tantalizing question for empirical research: Do real inhibitory neurons exhibit the same division between convergence neurons and stability neurons?

“This computation of winner-take-all is quite a broad and useful motif that we see throughout the brain,” says Saket Navlakha, an assistant professor in the Integrative Biology Laboratory at the Salk Institute for Biological Studies. “In many sensory systems — for example, the olfactory system — it’s used to generate sparse codes.”

“There are many classes of inhibitory neurons that we’ve discovered, and a natural next step would be to see if some of these classes map on to the ones predicted in this study,” he adds.

“There’s a lot of work in neuroscience on computational models that take into account much more detail about not just inhibitory neurons but what proteins drive these neurons and so on,” says Ziv Bar-Joseph, a professor of computer science at Carnegie Mellon University. “Nancy is taking a global view of the network rather than looking at the specific details. In return she gets the ability to look at some larger-picture aspects. How many inhibitory neurons do you really need? Why do we have so few compared to the excitatory neurons? The unique aspect here is that this global-scale modeling gives you a much higher-level type of prediction.”

9 years ago
Motor Skills Affected By Alcohol In E-Cigarettes

Some commercially available e-cigarettes contain enough alcohol to impact motor skills, a new Yale University School of Medicine study shows.

The research is in Drug and Alcohol Dependence. (full open access)

10 years ago
The Current Drought Conditions In The Western United States.


9 years ago
Join The Movement To Make Two Years Of Community College As Free And Universal As High School Is Today

Join the movement to make two years of community college as free and universal as high school is today at HeadsUpAmerica.us/Act.

9 years ago
How I Feel When I Write Anything In C++


8 years ago
French Presidential Election, 2017, Second Round Results By Department.


r3ds3rpent - Kode, Transistors and Spirit

Machine Learning, Big Data, Code, R, Python, Arduino, Electronics, robotics, Zen, Native spirituality, and a few other matters.

107 posts
