Welcome to this hands-on workshop exploring the exciting intersection of Neuromorphic Computing (NC) and Reinforcement Learning (RL)!
This repository contains the Jupyter Notebook (`neuro-1.ipynb`), which will guide you through the fundamental concepts of both fields, culminating in a practical project where we combine them. We focus on intuitive explanations, the underlying mathematical ideas, and practical coding examples in Python.
No prior knowledge of NC or RL is required; basic Python programming experience is enough.
By completing this workshop, you will be able to:
- 🧠 Understand the biological inspiration and core concepts of Neuromorphic Computing.
- ⚡ Simulate basic Spiking Neuron Models (like Leaky Integrate-and-Fire) using Brian2 (see the sketch after this list).
- 🕸️ Build and simulate simple Spiking Neural Networks (SNNs).
- 🤖 Grasp the fundamentals of Reinforcement Learning (Agents, Environments, Rewards, Policies).
- 📈 Implement a basic RL algorithm (Q-Learning).
- 🤝 Explore how SNNs can be integrated with RL for potential benefits like energy efficiency.
- 🛠️ Implement a simple project combining SNNs and RL for a pattern recognition task.
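As a preview of the LIF simulation from Module 1, here is a minimal Brian2 sketch; the time constant, drive, and threshold values are illustrative assumptions, not the notebook's exact parameters:

```python
# Minimal LIF neuron in Brian2 (illustrative parameters).
from brian2 import *

tau = 10*ms                              # membrane time constant (assumed)
eqs = 'dv/dt = (1.0 - v) / tau : 1'      # leaky integration toward a constant drive of 1.0

neuron = NeuronGroup(1, eqs, threshold='v > 0.8', reset='v = 0', method='exact')
voltage = StateMonitor(neuron, 'v', record=0)   # trace the membrane potential
spikes = SpikeMonitor(neuron)

run(100*ms)
print(f'{spikes.num_spikes} spikes in 100 ms')
```

The workshop covers the following topics: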
- Neuromorphic Computing: Biological Inspiration, Spiking Neurons, SNNs, Event-Driven Computation.
- Neuron Models: Leaky Integrate-and-Fire (LIF).
- SNN Simulation: Using the Brian2 library.
- Synapses & Basic Plasticity: Connections, Weights, STDP (overview).
- Information Encoding: Rate Coding, Poisson Spike Trains.
- Reinforcement Learning: Agent-Environment Loop, States, Actions, Rewards, Policies.
- RL Algorithms: Q-Learning (Tabular; see the sketch after this list).
- Core RL Concepts: Value Functions (Q-Value), Discount Factor (gamma), Learning Rate (alpha), Exploration vs. Exploitation (Epsilon-Greedy).
- Integration: Using SNN output features as states for an RL agent.
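The heart of tabular Q-learning is a small update rule plus an epsilon-greedy action choice. A minimal sketch of both, where the 4x4 grid-world sizes and parameter values are illustrative assumptions:

```python
# Tabular Q-learning core: epsilon-greedy action choice and the Q-update.
# n_states, n_actions, and the parameter values are illustrative assumptions.
import numpy as np

n_states, n_actions = 16, 4              # e.g. a 4x4 grid world
alpha, gamma, epsilon = 0.1, 0.99, 0.1   # learning rate, discount, exploration
Q = np.zeros((n_states, n_actions))      # Q-table: one value per (state, action)

def choose_action(state):
    """Epsilon-greedy: explore with probability epsilon, otherwise exploit."""
    if np.random.rand() < epsilon:
        return np.random.randint(n_actions)
    return int(np.argmax(Q[state]))

def q_update(state, action, reward, next_state):
    """Move Q(s, a) toward the bootstrapped target r + gamma * max_a' Q(s', a')."""
    target = reward + gamma * np.max(Q[next_state])
    Q[state, action] += alpha * (target - Q[state, action])
```

The material is organized into four modules: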
- Module 1: Neuromorphic Computing Fundamentals (≈ 1.5 hours)
- What is NC? Biological inspiration.
- The LIF Neuron Model.
- Simulating a single neuron with Brian2.
- Exercise 1: Explore Neuron Behavior.
- Module 2: Building Simple Spiking Networks (≈ 1.5 hours)
- Connecting neurons: Synapses in Brian2.
- Encoding information into spikes.
- Example: A Feedforward Network (Input -> LIF Output).
- Exercise 2: Network Dynamics.
- Module 3: Reinforcement Learning Fundamentals (≈ 1.5 hours)
- What is RL? The Agent-Environment Loop.
- Key Concepts (States, Actions, Rewards, Policy, Value Functions, MDPs).
- Q-Learning Algorithm Explained.
- Example: Tabular Q-Learning in a Grid World.
- Exercise 3: Tune Q-Learning Parameters.
- Module 4: Bridging Neuromorphic Computing and RL (≈ 1.5 hours)
- Why combine NC and RL? Potential benefits.
- Challenges in integration.
- Approach: SNN as a Feature Extractor for RL (sketched after this outline).
- Project: SNN-Enhanced Agent for a Pattern Recognition Task.
- Exercise 4: Explore the SNN-RL System.
- Wrap-up and Further Learning
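Modules 2 and 4 build on the same pattern: Poisson (rate-coded) input spikes drive LIF neurons through weighted synapses, and the output spike counts can then act as features or states for an RL agent. A minimal Brian2 sketch, where all sizes, rates, and weights are illustrative assumptions:

```python
# Sketch of the Module 2/4 pattern: Poisson-encoded input feeding LIF neurons,
# with output spike counts serving as a discrete "state" for the RL agent.
# All sizes, rates, and weights here are illustrative assumptions.
from brian2 import *

rates = [20, 50, 80] * Hz                 # rate-coded input pattern
inputs = PoissonGroup(3, rates=rates)     # Poisson spike trains

tau = 10*ms
outputs = NeuronGroup(2, 'dv/dt = -v / tau : 1',
                      threshold='v > 1', reset='v = 0', method='exact')

syn = Synapses(inputs, outputs, 'w : 1', on_pre='v_post += w')
syn.connect()                             # all-to-all connectivity
syn.w = '0.1 + 0.4*rand()'                # random initial weights

spikes = SpikeMonitor(outputs)
run(200*ms)
state = tuple(spikes.count)               # spike counts -> feature/state vector
print('output spike counts:', state)
```

To follow along, you will need: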
- Python 3: Ensure you have a working Python 3 installation (e.g., via Anaconda or python.org).
- Basic Python Programming: Familiarity with variables, data types, loops, functions, and basic NumPy.
- Jupyter Notebook/Lab: To run the `.ipynb` file interactively.
- Clone the repository (optional):
  ```bash
  git clone <repository-url>
  cd <repository-directory>
  ```
- Create a Virtual Environment (Recommended):
  ```bash
  python -m venv venv
  # Activate it:
  # Windows:
  venv\Scripts\activate
  # macOS/Linux:
  source venv/bin/activate
  ```
- Install Required Libraries:
  Run the following command in your terminal (or uncomment and run the first cell in the notebook):
  ```bash
  pip install brian2 numpy matplotlib notebook ipywidgets
  ```
- Optional (for RL comparison/extension):
  ```bash
  pip install gymnasium
  ```
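If you install gymnasium, you can sanity-check it with a random agent on one of its built-in discrete environments (`FrozenLake-v1` here; any discrete-action environment works):

```python
# Quick gymnasium smoke test: a random agent on FrozenLake-v1.
import gymnasium as gym

env = gym.make("FrozenLake-v1")
obs, info = env.reset(seed=0)
done = False
while not done:
    action = env.action_space.sample()    # random policy
    obs, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
env.close()
```

To run the workshop: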
- Ensure you have installed the necessary libraries (see Installation).
- Activate your virtual environment if you created one.
- Navigate to the directory containing the notebook in your terminal.
- Launch Jupyter Lab or Jupyter Notebook:
  ```bash
  jupyter lab
  # OR
  jupyter notebook
  ```
- Open the `neuro-1.ipynb` file in the Jupyter interface.
- Run the cells sequentially, following the instructions and explanations provided within the notebook.
- Engage with the exercises to solidify your understanding!
Hands-on exercises are included within the notebook at the end of key sections (marked `Exercise X`). These are designed to help you experiment with the concepts and code, reinforcing what you've learned. Code cells are provided for you to implement your solutions.
This project is licensed under the MIT License - see the LICENSE file (if included) or the badge at the top for details.
Happy Learning! 🎉