What is feature engineering for machine learning?
Deep learning replaces feature engineering
Deep learning is a form of feature engineering that aims to learn features from raw, largely unprocessed data. To this end, the raw data is transformed into increasingly general representations across several stacked layers, hence the term deep learning. An example: the training data consists of photos. If a neural network is trained on images of faces, individual neurons of the first hidden layer become maximally active when a particular edge is present in the photo. In a sense, this edge is the key stimulus for the neurons of the first layer.
The neurons of the next layer, on the other hand, respond to the presence of parts of the face, such as a nose or an eye. The neurons of the layer above that, in turn, become maximally active when prototypes of faces are presented at the input of the neural network. In this way a feature hierarchy is learned, with higher layers corresponding to more abstract, higher-level features.
This also makes clear why the decision function is easier to learn on the higher representations. If, for example, a neuron of the third layer that stands for a face prototype is active, this directly means that a face is visible in the image. If a decision has to be made based on the activities of the first neuron layer, this is much harder, since specific combinations of edges must first be recognized as a face.
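The edge-to-parts-to-prototypes hierarchy described above can be sketched as a simple feed-forward pass. This is a minimal illustration in NumPy: the layer sizes are made up, and random weights stand in for the learned edge, part, and prototype detectors.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Rectifier nonlinearity: a neuron is "active" when its output is > 0
    return np.maximum(0.0, x)

# Illustrative sizes: a flattened 8x8 "photo" feeding three stacked layers
image = rng.random(64)
W_edges = rng.standard_normal((32, 64))    # layer 1: edge detectors
W_parts = rng.standard_normal((16, 32))    # layer 2: face parts (nose, eye)
W_protos = rng.standard_normal((4, 16))    # layer 3: face prototypes

edges = relu(W_edges @ image)    # active when particular edges are present
parts = relu(W_parts @ edges)    # active for combinations of edges (parts)
protos = relu(W_protos @ parts)  # active for whole-face prototypes

# A decision on the top representation is simple: an active prototype
# neuron already signals "face"; no edge combinations need to be decoded.
print(protos.shape)
```

Each layer only sees the previous layer's activities, which is why the representation becomes more abstract step by step.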
Where does the basic principle of deep learning come from?
The basic idea of learning features hierarchically over many layers comes, among other things, from the cognitive sciences: it was shown long ago that the information received by the eyes is processed in stages in the visual cortex and transformed into higher representations. The neurons in the visual cortex of the brain are likewise arranged in layers, and in the higher layers the key stimuli become increasingly complex.
In the past, neural networks with many hidden layers could not be trained properly; among other things, the available data sets were too small and the computing power too limited. In practice, therefore, mostly neural networks with only one hidden layer and very few neurons were used. This only changed in 2006, when researchers working with Professor Geoffrey Hinton in Toronto presented a training algorithm with which feature transformations can be learned layer by layer. This publication sparked renewed, strong interest in neural networks in the research community.
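The layer-by-layer idea can be illustrated with greedy layer-wise pretraining: each layer is first trained on its own to encode the previous layer's output, then the codes are passed upward. The sketch below is a strong simplification, not Hinton's actual 2006 procedure (which stacked restricted Boltzmann machines); it uses tiny linear autoencoders in NumPy with made-up layer sizes and toy data.

```python
import numpy as np

rng = np.random.default_rng(1)

def train_autoencoder(data, hidden, epochs=300, lr=0.05):
    """Train a tiny linear autoencoder by gradient descent and
    return its encoder weights (hidden x input_dim)."""
    n, dim = data.shape
    We = rng.standard_normal((hidden, dim)) * 0.1  # encoder weights
    Wd = rng.standard_normal((hidden, dim)) * 0.1  # decoder weights
    for _ in range(epochs):
        H = data @ We.T           # encode all samples into hidden codes
        E = H @ Wd - data         # reconstruction error
        Wd -= lr * (H.T @ E) / n          # gradient step on decoder
        We -= lr * (Wd @ E.T @ data) / n  # gradient step on encoder
    return We

# Greedy layer-wise pretraining: each new layer learns to encode the
# codes of the layer below, building the feature hierarchy step by step.
X = rng.random((100, 20))         # toy "raw data": 100 samples, 20 features
layers = []
inputs = X
for size in (12, 6, 3):           # illustrative hidden-layer sizes
    W = train_autoencoder(inputs, size)
    layers.append(W)
    inputs = inputs @ W.T         # this layer's codes feed the next layer
print([W.shape for W in layers])
```

After such pretraining, the stacked encoders would typically be fine-tuned together on the actual task; the point here is only that each layer can be learned one at a time.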