Neural networks

[Image: Where it all goes down.]

A neural network is a collection of interconnected things, sometimes called neurons. Each neuron can send signals to other neurons via connections which cause those neurons to send signals to other neurons. Those neurons send signals to other neurons causing those neurons to send signals to other neurons. This process doesn't sound very interesting when written out in English, but in mathematics or on computers or even in reality, it's really interesting. Honest.
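For those who remain unconvinced, here is a minimal sketch in Python of the process: neurons receive signals, do a small amount of arithmetic, and send signals on to other neurons. The layer sizes, random weights and choice of sigmoid are illustrative assumptions rather than anything endorsed by an actual brain.

 # A minimal sketch of signals passing through a tiny feedforward network.
 # Each neuron sums its weighted inputs, adds a bias, and sends the squashed
 # result on to the neurons in the next layer. All sizes and weights here
 # are arbitrary illustrative choices.
 import math
 import random

 def sigmoid(x):
     # Squash a neuron's total input into a signal between 0 and 1.
     return 1.0 / (1.0 + math.exp(-x))

 def layer(inputs, weights, biases):
     # One layer of neurons, each sending a signal on to the next layer.
     return [sigmoid(sum(w * i for w, i in zip(ws, inputs)) + b)
             for ws, b in zip(weights, biases)]

 random.seed(42)
 n_in, n_hidden, n_out = 3, 4, 2
 w1 = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hidden)]
 b1 = [0.0] * n_hidden
 w2 = [[random.uniform(-1, 1) for _ in range(n_hidden)] for _ in range(n_out)]
 b2 = [0.0] * n_out

 signals = [0.5, 0.1, 0.9]        # signals entering the network
 hidden = layer(signals, w1, b1)  # neurons send signals to other neurons
 output = layer(hidden, w2, b2)   # those neurons send signals onwards
 print(output)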

Advanced biological life-forms on Earth all have neural networks, usually centred on a brain and/or central nervous system.

Signals between neurons are mediated by electrical, chemical and magical means. The speed of these signals enables advanced life-forms to perform incredibly complex tasks by carrying out an enormous number of computations every second. Curiously, if the same number of computations were performed by a biological neural network in one minute, fewer computations would actually have been performed. These computations appear to be performed mostly in binary, but until we get a solid answer on whether or not there is a God, we will never really know what the hell they are doing.

History

The first biological neural networks appeared on Earth not long after life itself. The oldest fossil evidence of neural networks can be found in Spain, though the exact location has been kept a closely guarded secret, as it's rumoured that contact with the fossils has an effect on the contactee much like that of the Krell machine in Forbidden Planet.

The first artificial neural networks were developed by Alan Turing during World War II. On visiting Bletchley Park in 1942, Winston Churchill was extremely impressed by what he saw. Churchill was overheard saying, "One day, men will fall at the feet of these machines, meek and hollow and in awe of their intelligence and wisdom". Turing, clearly moved to tears, replied, "That's the wireless set, Prime Minister".

1960s

[Image: In your case, it probably looks more like this.]

In the late 1960s, a neural network called the perceptron was proposed by L. Ron Hubbard. The perceptron was able to play chess, though it always lost and rarely made valid moves. A perceptron was also transplanted into a cat. For a while the cat was able to twitch and bleed copiously, but it died within minutes. Perceptrons also ran several countries for a trial period in the late 1960s, including the UK, USA and Iran. No perceptible advantage was identified in these tests. In 1969 it was discovered that the humble perceptron was unable to solve the exclusive-OR (XOR) problem. This ended interest in the perceptron, but as Hubbard observed, "Most adult humans don't even know what the XOR problem is, let alone how to solve it".
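For readers who, like most adult humans, do not know what the XOR problem is: a single perceptron can only draw one straight line through its inputs, and no straight line separates XOR's ones from its zeros. Below is a minimal sketch in Python (the brute-force grid of candidate weights is an illustrative assumption, not a proof technique) that searches for a perceptron solving XOR and, just as in 1969, fails to find one.

 # XOR: output 1 exactly when the two inputs differ.
 import itertools

 XOR = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

 def perceptron(x, w1, w2, b):
     # A single linear threshold unit: fire 1 if the weighted sum clears 0.
     return 1 if w1 * x[0] + w2 * x[1] + b > 0 else 0

 grid = [i / 4 for i in range(-8, 9)]  # candidate weights/biases, -2.0 to 2.0
 best = 0
 for w1, w2, b in itertools.product(grid, repeat=3):
     score = sum(perceptron(x, w1, w2, b) == y for x, y in XOR.items())
     best = max(best, score)

 print(best)  # 3: at most three of the four XOR cases, never all four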

1970s

There was no interest in neural networks in the 1970s.

1980s, 1990s and today

By the mid-1980s, with the advent of cheaper computers and the invention of a learning algorithm, neural networks enjoyed a resurgence of interest. Students and academics across the world spent countless hours trying to get neural networks to perform tasks.
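Here is a minimal sketch in Python of what that learning algorithm presumably was (backpropagation, assuming that is the one meant): a small two-layer network is nudged, one gradient step at a time, towards solving the very XOR problem that embarrassed the perceptron. The hidden layer size, learning rate and epoch count are guesses rather than gospel.

 # A minimal sketch of backpropagation: forward pass to produce an output,
 # backward pass to assign blame, and a small weight update to reduce the
 # squared error. Network size and hyperparameters are illustrative guesses.
 import math
 import random

 random.seed(0)
 data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]  # XOR again
 n_hidden = 3   # arbitrary choice; two hidden neurons would do on a good day
 lr = 0.5       # learning rate, also a guess

 def sig(x):
     return 1.0 / (1.0 + math.exp(-x))

 w_h = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(n_hidden)]
 b_h = [0.0] * n_hidden
 w_o = [random.uniform(-1, 1) for _ in range(n_hidden)]
 b_o = 0.0

 def forward(x):
     # Signals travel input -> hidden -> output, exactly as advertised above.
     h = [sig(w[0] * x[0] + w[1] * x[1] + b) for w, b in zip(w_h, b_h)]
     o = sig(sum(wo * hj for wo, hj in zip(w_o, h)) + b_o)
     return h, o

 for epoch in range(20000):
     for x, target in data:
         h, o = forward(x)
         # Backward pass: each weight is nudged to reduce the squared error.
         d_o = (o - target) * o * (1 - o)
         d_h = [d_o * w_o[j] * h[j] * (1 - h[j]) for j in range(n_hidden)]
         for j in range(n_hidden):
             w_o[j] -= lr * d_o * h[j]
             w_h[j][0] -= lr * d_h[j] * x[0]
             w_h[j][1] -= lr * d_h[j] * x[1]
             b_h[j] -= lr * d_h[j]
         b_o -= lr * d_o

 for x, target in data:
     print(x, round(forward(x)[1]), target)  # rounded output vs. XOR target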

To this day, numerous organisations, including academia, the military and corporations, continue to invest large sums of money in the future development of this technology.

See also