Explained: Neural networks


Cultocracy note:

Neural networks have been around since the 1940s, but research was restricted by the limited computing power available at the time.

This meant that neural networks fell out of favor in the scientific community, at least within the scientific institutions that operate in the public domain.

The ultimate aim is to combine the processing power of a computer with the perception and learning capability of a human.

Neural networks provide the only tool that could potentially decode the cognitive processes of the human brain.

Artificial neural networks (ANNs) learn via a process termed backpropagation (backward propagation of errors). In theory the system learns in much the same way as the human brain, i.e. it learns from its errors.

ANNs need an extremely large amount of input data to continually 'learn'; each complete pass through the training data is termed an 'epoch'.
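
As a rough illustration of 'learning from errors' over repeated epochs, the sketch below fits a single weight to toy data by nudging it against its error after every example. The data, learning rate and variable names are invented for the example, but the principle is the one backpropagation applies across many layers.

```python
# Toy example: one "neuron" with a single weight learning y = 2 * x
# by repeatedly shrinking its prediction error, epoch after epoch.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]   # (input, target) pairs

weight = 0.0            # start from an arbitrary value
learning_rate = 0.05

for epoch in range(100):                 # one epoch = one pass over all the data
    for x, target in data:
        prediction = weight * x
        error = prediction - target      # how wrong was the prediction?
        weight -= learning_rate * error * x   # nudge the weight to reduce the error

print(round(weight, 3))   # approaches 2.0 as the epochs accumulate
```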

As human beings we are constantly presented with new 'data' in the form of sensory input, which we process and learn from on a daily basis, even if much of this 'data' seems trivial and monotonous.

Where can an ANN find the huge amounts of electronic data it will require in order to 'learn'?

Data sets gleaned from 'mass surveillance' systems provide one source: the same mass surveillance systems that are sold to the public on the grounds of 'national security' or 'fighting terrorism'.

Another, more direct, route involves a brain-computer interface (BCI).

Parallel BCIs running in tandem provide an ideal information stream for machine learning and arTIficial Intelligence.


IT = TI

ANNs have a wide variety of applications, particularly in the military and finance industries.

Just like the human brain, the inner workings of an ANN are a bit of a mystery; nobody really knows how they function.



Explained: Neural networks

Larry Hardesty – MIT Computer Science & Artificial Intelligence Lab

In the past 10 years, the best-performing artificial-intelligence systems — such as the speech recognizers on smartphones or Google’s latest automatic translator — have resulted from a technique called “deep learning.”

Deep learning is in fact a new name for an approach to artificial intelligence called neural networks, which have been going in and out of fashion for more than 70 years. Neural networks were first proposed in 1944 by Warren McCulloch and Walter Pitts, two University of Chicago researchers who moved to MIT in 1952 as founding members of what’s sometimes called the first cognitive science department.

Neural nets were a major area of research in both neuroscience and computer science until 1969, when, according to computer science lore, they were killed off by the MIT mathematicians Marvin Minsky and Seymour Papert, who a year later would become co-directors of the new MIT Artificial Intelligence Laboratory.

The technique then enjoyed a resurgence in the 1980s, fell into eclipse again in the first decade of the new century, and has returned like gangbusters in the second, fueled largely by the increased processing power of graphics chips.

“There’s this idea that ideas in science are a bit like epidemics of viruses,” says Tomaso Poggio, the Eugene McDermott Professor of Brain and Cognitive Sciences at MIT, an investigator at MIT’s McGovern Institute for Brain Research, and director of MIT’s Center for Brains, Minds, and Machines. “There are apparently five or six basic strains of flu viruses, and apparently each one comes back with a period of around 25 years. People get infected, and they develop an immune response, and so they don’t get infected for the next 25 years. And then there is a new generation that is ready to be infected by the same strain of virus. In science, people fall in love with an idea, get excited about it, hammer it to death, and then get immunized — they get tired of it. So ideas should have the same kind of periodicity!”

Weighty matters

Neural nets are a means of doing machine learning, in which a computer learns to perform some task by analyzing training examples. Usually, the examples have been hand-labeled in advance. An object recognition system, for instance, might be fed thousands of labeled images of cars, houses, coffee cups, and so on, and it would find visual patterns in the images that consistently correlate with particular labels.

Modeled loosely on the human brain, a neural net consists of thousands or even millions of simple processing nodes that are densely interconnected. Most of today’s neural nets are organized into layers of nodes, and they’re “feed-forward,” meaning that data moves through them in only one direction. An individual node might be connected to several nodes in the layer beneath it, from which it receives data, and several nodes in the layer above it, to which it sends data.

To each of its incoming connections, a node will assign a number known as a “weight.” When the network is active, the node receives a different data item — a different number — over each of its connections and multiplies it by the associated weight. It then adds the resulting products together, yielding a single number. If that number is below a threshold value, the node passes no data to the next layer. If the number exceeds the threshold value, the node “fires,” which in today’s neural nets generally means sending the number — the sum of the weighted inputs — along all its outgoing connections.
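
As a concrete sketch of that arithmetic (the numbers and the function name below are invented for illustration), a node weighs each incoming value, sums the products, and only passes the sum on if it clears the threshold:

```python
# One node: weight each input, sum the products, and "fire" only if
# the weighted sum exceeds the node's threshold.
def node_output(inputs, weights, threshold):
    total = sum(x * w for x, w in zip(inputs, weights))
    return total if total > threshold else 0.0   # send the sum along, or nothing

# Example with three incoming connections
print(round(node_output([0.5, 0.9, 0.2], weights=[1.2, -0.4, 2.0], threshold=0.3), 2))
```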

When a neural net is being trained, all of its weights and thresholds are initially set to random values. Training data is fed to the bottom layer — the input layer — and it passes through the succeeding layers, getting multiplied and added together in complex ways, until it finally arrives, radically transformed, at the output layer. During training, the weights and thresholds are continually adjusted until training data with the same labels consistently yield similar outputs.
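
A minimal sketch of that forward pass, with all names and layer sizes assumed for illustration: weights and thresholds start out random, and each layer multiplies, sums and thresholds the data on its way from the input layer to the output layer.

```python
import random

# A tiny feed-forward net: 3 inputs -> 4 hidden nodes -> 2 output nodes.
sizes = [3, 4, 2]

# Weights and thresholds are initially set to random values, as described above.
weights = [[[random.uniform(-1, 1) for _ in range(sizes[i])]
            for _ in range(sizes[i + 1])] for i in range(len(sizes) - 1)]
thresholds = [[random.uniform(-1, 1) for _ in range(sizes[i + 1])]
              for i in range(len(sizes) - 1)]

def forward(inputs):
    data = inputs
    for layer_weights, layer_thresholds in zip(weights, thresholds):
        sums = [sum(x * w for x, w in zip(data, node_weights))
                for node_weights in layer_weights]
        # each node fires only if its weighted sum clears its threshold
        data = [s if s > t else 0.0 for s, t in zip(sums, layer_thresholds)]
    return data

print(forward([0.2, 0.7, 0.1]))   # meaningless until training has adjusted the weights
```

Training then nudges these random weights and thresholds, pass after pass, until inputs carrying the same label consistently produce similar outputs.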

Minds and machines

The neural nets described by McCulloch and Pitts in 1944 had thresholds and weights, but they weren’t arranged into layers, and the researchers didn’t specify any training mechanism. What McCulloch and Pitts showed was that a neural net could, in principle, compute any function that a digital computer could. The result was more neuroscience than computer science: The point was to suggest that the human brain could be thought of as a computing device.
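
The reasoning behind that universality claim is that threshold units can implement logic gates, and logic gates are all a digital computer needs. A hypothetical sketch, with the gate wiring chosen purely for illustration:

```python
# McCulloch-Pitts-style threshold neurons implementing logic gates.
# AND, OR and NOT are enough to build any digital circuit.
def neuron(inputs, weights, threshold):
    return 1 if sum(x * w for x, w in zip(inputs, weights)) >= threshold else 0

AND = lambda a, b: neuron([a, b], [1, 1], threshold=2)
OR  = lambda a, b: neuron([a, b], [1, 1], threshold=1)
NOT = lambda a:    neuron([a],    [-1],   threshold=0)

print(AND(1, 1), OR(0, 1), NOT(1))   # -> 1 1 0
```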

Neural nets continue to be a valuable tool for neuroscientific research. For instance, particular network layouts or rules for adjusting weights and thresholds have reproduced observed features of human neuroanatomy and cognition, an indication that they capture something about how the brain processes information.

The first trainable neural network, the Perceptron, was demonstrated by the Cornell University psychologist Frank Rosenblatt in 1957. The Perceptron’s design was much like that of the modern neural net, except that it had only one layer with adjustable weights and thresholds, sandwiched between input and output layers.
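
For illustration (this is not Rosenblatt's original setup), here is a single layer of adjustable weights trained with the classic perceptron update rule on the logical AND function; the bias plays the role of the adjustable threshold.

```python
# A one-layer perceptron learning logical AND.
training = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

w = [0.0, 0.0]
bias = 0.0
lr = 0.1

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + bias > 0 else 0

for _ in range(20):                       # a few passes are enough for AND
    for x, target in training:
        error = target - predict(x)       # +1, 0 or -1
        w[0] += lr * error * x[0]         # the classic perceptron update rule
        w[1] += lr * error * x[1]
        bias += lr * error

print([predict(x) for x, _ in training])  # -> [0, 0, 0, 1]
```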

Perceptrons were an active area of research in both psychology and the fledgling discipline of computer science until 1969, when Minsky and Papert published a book titled “Perceptrons,” which demonstrated that executing certain fairly common computations on Perceptrons would be impractically time consuming.

“Of course, all of these limitations kind of disappear if you take machinery that is a little more complicated — like, two layers,” Poggio says. But at the time, the book had a chilling effect on neural-net research.

“You have to put these things in historical context,” Poggio says. “They were arguing for programming — for languages like Lisp. Not many years before, people were still using analog computers. It was not clear at all at the time that programming was the way to go. I think they went a little bit overboard, but as usual, it’s not black and white. If you think of this as this competition between analog computing and digital computing, they fought for what at the time was the right thing.”

Periodicity

By the 1980s, however, researchers had developed algorithms for modifying neural nets’ weights and thresholds that were efficient enough for networks with more than one layer, removing many of the limitations identified by Minsky and Papert. The field enjoyed a renaissance.
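
As a rough sketch of what those multi-layer training algorithms do (the architecture, random seed and learning rate here are assumptions for illustration), a two-layer network of sigmoid units trained by backpropagation can learn XOR, a function that no single layer of adjustable weights can represent:

```python
import numpy as np

# Two-layer network (2 inputs -> 4 hidden -> 1 output) trained by
# backpropagation on XOR.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # forward pass
    h = sigmoid(X @ W1 + b1)          # hidden layer activations
    out = sigmoid(h @ W2 + b2)        # network output
    # backward pass: propagate the error back through both layers
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

print(out.round(2).ravel())   # should approach [0, 1, 1, 0]
```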

But intellectually, there’s something unsatisfying about neural nets. Enough training may revise a network’s settings to the point that it can usefully classify data, but what do those settings mean? What image features is an object recognizer looking at, and how does it piece them together into the distinctive visual signatures of cars, houses, and coffee cups? Looking at the weights of individual connections won’t answer that question.

In recent years, computer scientists have begun to come up with ingenious methods for deducing the analytic strategies adopted by neural nets. But in the 1980s, the networks’ strategies were indecipherable. So around the turn of the century, neural networks were supplanted by support vector machines, an alternative approach to machine learning that’s based on some very clean and elegant mathematics.

The recent resurgence in neural networks — the deep-learning revolution — comes courtesy of the computer-game industry. The complex imagery and rapid pace of today’s video games require hardware that can keep up, and the result has been the graphics processing unit (GPU), which packs thousands of relatively simple processing cores on a single chip. It didn’t take long for researchers to realize that the architecture of a GPU is remarkably like that of a neural net.

Modern GPUs enabled the one-layer networks of the 1960s and the two- to three-layer networks of the 1980s to blossom into the 10-, 15-, even 50-layer networks of today. That’s what the “deep” in “deep learning” refers to — the depth of the network’s layers. And currently, deep learning is responsible for the best-performing systems in almost every area of artificial-intelligence research.

Under the hood

The networks’ opacity is still unsettling to theorists, but there’s headway on that front, too. In addition to directing the Center for Brains, Minds, and Machines (CBMM), Poggio leads the center’s research program in Theoretical Frameworks for Intelligence. Recently, Poggio and his CBMM colleagues have released a three-part theoretical study of neural networks.

The first part, which was published last month in the International Journal of Automation and Computing, addresses the range of computations that deep-learning networks can execute and when deep networks offer advantages over shallower ones. Parts two and three, which have been released as CBMM technical reports, address the problems of global optimization, or guaranteeing that a network has found the settings that best accord with its training data, and overfitting, or cases in which the network becomes so attuned to the specifics of its training data that it fails to generalize to other instances of the same categories.

There are still plenty of theoretical questions to be answered, but CBMM researchers’ work could help ensure that neural networks finally break the generational cycle that has brought them in and out of favor for seven decades.





Related:


  1. A Basic Introduction To Neural Networks
  2. Neural networks
  3. How Do Artificial Neural Networks Learn?
  4. Conway’s Game of Life


  1. Intel – 5 Eyes Inside
  2. AI Controlled Brain Implants Tested by Military
  3. BAE & DSTL Operation , Experimentation , Implantation
  4. The Matrix Deciphered – Dr. Robert Duncan Pt. 3
  5. Nano Technology News – December 2017
  6. Can We Copy the Brain?
  7. Targeted Individual – Torture & Terrorism
  8. Privacy International show that UK intelligence agencies may analyse our Facebook and Twitter accounts
  9. Human Brains Will be Uploaded to Machines – Professor Brian Cox
  10. Man & machine will be melded into One within 20yrs – IBM expert


  1. Biometric Data Collection & AI – Pre-Crime ? Profiteering ? Or Prediction ?
  2. CIA Biometric Data Collection – Express Lane
  3. Amazon , C.I.A , Big Data & Artificial Intelligence – Future Warfare
  4. The Military-Industrial Complex’s Secret War for Our Data
  5. The AI that can read your mind
  6. 20 Billion Nanoparticles Talk to the Brain using Electricity
  7. EMF’s , Nano Particles & Eugenics

2 Responses to Explained: Neural networks

  1. truth1 says:

    Cultocracy, I was delighted by this particular blog. I saved your summary, which had gems like this: "In theory the system learns in much the same way as the human brain, i.e. it learns from its errors."
    Complicated as networks are, they come down to simple operations, of which there are millions or billions. And learning from errors is, in theory, how we learn, if we choose to learn. It is a choice.

    I thought I would mention a couple of great books: The Wisdom of Your Subconscious Mind (May 1, 1973) by John K. Williams, and his earlier one from the 1950s or 60s, I think it was. Both have the same material, but the latter one adds more stuff. And these are fairly simple and easy to comprehend. I discovered this book cleaning up behind my deceased mother, an extreme hoarder. I found the book and took it home, but did not immediately read it. I was renovating the house and sometimes could not figure out a way to do something. I would give up out of frustration and tackle something else, and in 2 or 3 days the solution would come to me. If I could not remember a name, the mere effort to remember would put wheels in motion and it would come to me sooner or later, often in the morning on waking up. This is how the mind works. John goes into many people finding this out, from the past and present.

    Many other things guided me as well. The subconscious works for us if we want it to.

    I have Minsky’s “The Society of Mind” and “The Emotion Machine.” I see him referenced here. I’ve got to process this article more, later. I’ll be napping soon.

    I appreciate your work as always. I wish more people showed up and had something to say, perhaps indicating at least a little bit of knowledge of what you cover.
    To readers out there, Minsky and Williams are bound to be very informative and interesting. I strongly recommend them in order to grasp these concepts with simplicity and ease.
    I review Minsky’s books on my site, toward the bottom of the linked page:
    http://truth1.org/1-books-health.htm (dealing with psychology). I am surprised I never got around to reviewing John Williams’ book. I will have to correct that someday.


    • cultocracy says:

      Hello again truth1, there is no doubt that the subconscious mind has a profound effect on our waking lives. To quote from ‘The Wisdom of Your Subconscious Mind’ – “Without an equal growth of Mercy, Pity, Peace and Love, science herself may destroy all that makes human life majestic and tolerable.”
      It would appear that this statement describes the pivotal point at which mankind finds itself at present; scientific endeavors have been perverted and subverted for war, destruction, power and profit.
      Moreover, science is now being used by dark powers to disrupt and manipulate the mass subconscious in a very direct way.
      Cutting-edge mind manipulation technologies are now in the hands of maniacs and degenerates who do not possess the intellect to use them in a wise manner, a bit like handing a troop of monkeys a cache of modern weaponry (no disrespect to monkeys).
      Children and boxes of matches.

