In a "Feed Forward Network", information must flow from input to output only in one direction.
Here's a step-by-step explanation:
1. Feed Forward Network:
- A Feed Forward Neural Network, often referred to as a Multilayer Perceptron (MLP), is a type of artificial neural network architecture.
- In a Feed Forward Network, data flows in one direction: from the input layer, through one or more hidden layers, and finally to the output layer (see the sketch after this list).
- Its computation graph is acyclic, meaning there are no loops or recurrent connections.
2. Recurrent Neural Network (RNN):
- In contrast, a Recurrent Neural Network (RNN) is a different type of neural network architecture.
- RNNs are designed to handle sequences of data and have connections that feed the hidden state back into the network at the next time step, creating a feedback loop (see the contrast sketch at the end).
- This recurrence enables them to model sequences, time-series data, and dependencies over time.
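To make the one-directional flow concrete, here is a minimal NumPy sketch of a feed-forward pass. The layer sizes, weight names, and the `feed_forward` function are made up purely for illustration, not taken from any particular library:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def feed_forward(x, W1, b1, W2, b2):
    # Information moves strictly forward: input -> hidden -> output.
    h = relu(x @ W1 + b1)   # hidden layer activation
    y = h @ W2 + b2         # output layer (e.g., logits)
    return y

# Toy shapes (illustrative): 4 input features, 8 hidden units, 3 outputs.
rng = np.random.default_rng(0)
x = rng.normal(size=(1, 4))
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 3)), np.zeros(3)
print(feed_forward(x, W1, b1, W2, b2).shape)  # (1, 3)
```

Note that the input passes through each layer exactly once; no value computed later is ever fed back into an earlier layer.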
So, in a Feed Forward Network, information flows strictly from input to output without any feedback loops, making it a good fit for tasks on fixed-size inputs (for example, image or tabular classification) that don't require modeling feedback or temporal dependencies.
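For contrast, here is a minimal RNN-style sketch (again with made-up names and shapes) showing the feedback loop: the hidden state `h` computed at one time step is fed back in at the next step.

```python
import numpy as np

def rnn_forward(xs, W_xh, W_hh, b_h):
    # The hidden state h from the previous step feeds back into the
    # current step; this recurrence is what feed-forward networks lack.
    h = np.zeros(W_hh.shape[0])
    for x_t in xs:                    # iterate over time steps
        h = np.tanh(x_t @ W_xh + h @ W_hh + b_h)
    return h                          # final hidden state summarizes the sequence

rng = np.random.default_rng(0)
xs = rng.normal(size=(5, 4))          # sequence of 5 steps, 4 features each
W_xh = rng.normal(size=(4, 8))
W_hh = rng.normal(size=(8, 8))
b_h = np.zeros(8)
print(rnn_forward(xs, W_xh, W_hh, b_h).shape)  # (8,)
```

The loop over time steps and the reuse of `h` are exactly the kind of cycle that the acyclic structure of a Feed Forward Network rules out.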