Why attention?
Resource saving
• Only need sensors where the relevant bits are (e.g. fovea vs. peripheral vision)
• Only compute on the relevant bits of information
(e.g. the fovea has many more ‘pixels’ than the periphery)
Variable state manipulation
• Manipulate the environment (for all grains do: eat)
• Learn modular subroutines (not state)
Some forms of attention that you might not notice:
- Nadaraya-Watson estimator
- Pooling
- Iterative Pooling
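The Nadaraya-Watson estimator is attention in disguise: a Gaussian kernel over query-key distances is just a softmax over negative squared distances, so the prediction is an attention-weighted average of the values. A minimal sketch (the function name, toy data, and bandwidth are illustrative, not from the original):

```python
import numpy as np

def nadaraya_watson(x_query, x_train, y_train, width=1.0):
    # Gaussian kernel weights = softmax over negative squared distances,
    # i.e. attention weights of each query over the training "keys".
    logits = -((x_query[:, None] - x_train[None, :]) ** 2) / (2 * width ** 2)
    weights = np.exp(logits - logits.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    # Attention-weighted average of the "values" y_train.
    return weights @ y_train

# Toy data: noisy sine, queried at a few points.
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 5, 200))
y = np.sin(x) + 0.1 * rng.normal(size=200)
xq = np.linspace(0, 5, 5)
print(nadaraya_watson(xq, x, y, width=0.3))
```

Average pooling is the degenerate case where every weight is uniform; iterative pooling re-applies such a weighted average with updated queries.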