What are neural networks?
Neural networks are crucial tools in the field of artificial intelligence, conceptually and functionally modeled after our brains. Although they are not the sole basis of artificial intelligence, as techniques such as Markov chains and Bayesian algorithms also play a role, neural networks represent a promising path towards achieving true machine learning.
Neural networks function similarly to the neurons in the human brain, of which there are roughly 100 billion. These neurons receive electrical signals and, after processing them, decide whether to strengthen or weaken the signal they pass on. The systems we design in artificial intelligence model similar connections, analogous to biological synapses.
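The idea above can be sketched as a single artificial "neuron": it weighs its incoming signals, sums them, and passes the total through a function that strengthens or dampens the outgoing signal. This is a minimal illustration; the weights and input values are invented for the example.

```python
import math

def neuron(inputs, weights, bias):
    """Weigh the incoming signals, sum them, and squash the result."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    # Sigmoid activation: maps any total onto a value between 0 and 1.
    return 1.0 / (1.0 + math.exp(-total))

# Two input signals with illustrative weights.
out = neuron([0.5, 0.8], [1.2, -0.4], 0.1)
print(round(out, 3))  # a value between 0 and 1
```

A stronger weighted sum pushes the output toward 1, a weaker (or negative) one toward 0, which is the "increase or decrease the signal" behavior described above.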
How does a neural network pick up signals?
- Inputs: In humans, inputs are senses such as sight, hearing and touch. In machines, inputs are the data fed into an algorithm.
- Data Processing: The network is trained by receiving data and correlating it with expected results. For example, by studying the relationship between a student's sleep and study hours and their performance on exams, the network can predict a score from that information.
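The sleep-and-study example can be sketched with a single linear neuron trained by gradient descent. The data points and learning rate below are invented purely for illustration; a real dataset and framework would replace them.

```python
# Hypothetical data: (study_hours, sleep_hours) -> exam_score (0-100).
data = [
    ((8.0, 7.0), 85.0),
    ((2.0, 5.0), 45.0),
    ((6.0, 8.0), 80.0),
    ((4.0, 4.0), 50.0),
]

w_study, w_sleep, bias = 0.0, 0.0, 0.0
lr = 0.005  # learning rate

for _ in range(5000):
    for (study, sleep), target in data:
        pred = w_study * study + w_sleep * sleep + bias
        err = pred - target
        # Nudge each weight against the error, as described above:
        # correlating inputs with the expected result, pass after pass.
        w_study -= lr * err * study
        w_sleep -= lr * err * sleep
        bias -= lr * err

# Predict the score of a new, unseen student (5h study, 6h sleep).
new_pred = w_study * 5.0 + w_sleep * 6.0 + bias
print(round(new_pred, 1))
```

The repeated compare-and-adjust loop is the training process the section describes: each pass reduces the gap between prediction and expected result.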
Is artificial intelligence statistical?
Largely, yes: AI incorporates numerous statistical algorithms along with advanced image processing algorithms and, of course, neural networks. The goal is to have the neural network organize the information to obtain practical results, such as autonomous driving.
Neural networks do not operate through conventional closed-form mathematics; instead, they pass normalized data (typically scaled to a range such as 0 to 1) through different functions and repeatedly compare the output with the expected result, weakening the connections that do not contribute and strengthening those that do.
What are activation functions?
Activation functions are vital to the processing of neural networks. There are several, including:
- Step function: Distributes data into 1's or 0's.
- Sign function: Classifies data as -1, 0 or 1.
- Sigmoid function: Distributes the data along a smooth curve from 0 to 1, which can be read as a probability.
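The three activation functions in the list can be sketched in a few lines; this is a minimal illustration of their shapes, not tied to any particular library.

```python
import math

def step(x):
    """Step: 1 if the input crosses the threshold, else 0."""
    return 1 if x >= 0 else 0

def sign(x):
    """Sign: classifies the input as -1, 0 or 1."""
    return (x > 0) - (x < 0)

def sigmoid(x):
    """Sigmoid: squashes any input onto a smooth curve from 0 to 1."""
    return 1.0 / (1.0 + math.exp(-x))

# Compare the three on a few sample inputs.
for x in (-2.0, 0.0, 2.0):
    print(x, step(x), sign(x), round(sigmoid(x), 3))
```

Step and sign make hard decisions, while the sigmoid keeps a graded value, which is what lets a network express "probabilities" rather than only yes/no answers.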
By working in this way, a network trained on these data points accumulates experience: the useful structures remain, constituting its "memory", that is, the learning it has acquired.
What types of neural networks exist?
Among the many configurations of neural networks, the following stand out:
- Feedforward Network: A network where data flows in one direction only.
- Recurrent Neural Networks (RNNs): Networks that can learn temporal sequences of events.
- Markov Chains: Strictly speaking probabilistic models rather than neural networks, but often used alongside them for forecasts based on prior probabilities (such as text prediction).
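The text-prediction use of Markov chains can be sketched with a first-order chain that samples the next word from previously observed transitions. The training sentence below is invented for illustration.

```python
import random
from collections import defaultdict

text = "the car stops at the red light and the car goes at the green light"
words = text.split()

# Count word -> next-word transitions (the observed prior probabilities).
transitions = defaultdict(list)
for current, nxt in zip(words, words[1:]):
    transitions[current].append(nxt)

def predict_next(word):
    """Sample a plausible next word from the observed transitions."""
    options = transitions.get(word)
    return random.choice(options) if options else None

print(predict_next("the"))  # one of: "car", "red", "green"
```

Words that followed "the" more often in the training text are sampled more often, which is exactly the prior-probability forecasting the list item describes.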
Thanks to these complex structures, systems such as autonomous cars combine AI with predefined algorithms and sensors to act decisively. They detect, for example, traffic lights using filters that highlight bright colors such as red, green and yellow, looking for their characteristic patterns.
Why is artificial intelligence so resource-intensive?
The main reason is the need for high computational power to process massively parallel tasks, particularly through GPUs (Graphics Processing Units), which are optimized for this kind of parallel processing. Thus, modern machine learning is fueled not only by mathematical advances, but also by colossal advances in computational power.
If you find this information fascinating and want to learn more about how these networks work and are developed in practical applications, we encourage you to continue learning! Platzi's Software Engineering Fundamentals course guides you through these concepts, offering comprehensive and accessible learning. Remember that all learning requires patience and perseverance, so go ahead!