This project focuses on multimodal learning with Sentinel-2 imagery and ICESat-2 data, using machine learning models including LSTM and U-Net. The goal is to process and analyze satellite imagery and related data for various applications. The repository is organized as follows:
```
mm-transformer-model/
├── S2_tif/          # Sentinel-2 GeoTIFF images
├── grid_images/     # Resized images
├── csv/             # CSV files (ATL03 data)
├── clean/
├── src/
│   ├── MMDL.ipynb
│   ├── converter.py
│   └── util.py
├── .gitignore
├── LICENSE
├── README.md
└── requirements.txt
```
- Clone the repository:

  ```bash
  git clone https://github.com/nathan-g1/mm-transformer-model.git
  cd mm-transformer-model
  ```

- Create and activate a virtual environment (conda recommended):

  ```bash
  conda create --name <env_name> python=3.10.15
  conda activate <env_name>
  ```

- Install the required dependencies:

  ```bash
  pip install -r requirements.txt
  ```
*Figure: contents of the `MMDL.ipynb` notebook.*
- Make sure to download the Sentinel-2 and ICESat-2 datasets and place them in the `S2_tif` and `csv` directories, respectively.
- The data preparation scripts are located in the `src` directory. The `MMDL.ipynb` notebook contains the section *Batch and generator*, which handles data preprocessing and feature engineering; a sketch of such a generator follows this list.
- The `src/MMDL.ipynb` notebook contains code for training multimodal models using three techniques. Each step for preparing and training the models is detailed in sections of the notebook; a fusion-model sketch also appears after this list.
- Utility functions are available in `src/util.py` for tasks such as renaming rows in CSV files and reading file names from directories; illustrative versions appear below.
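
To make the *Batch and generator* step concrete, here is a minimal sketch of a generator that pairs resized Sentinel-2 tiles from `grid_images` with ATL03 rows from the `csv` directory. The `match_key` join column, the file-name convention, and the numeric-column assumption are all illustrative; the notebook's actual preprocessing may differ.

```python
import glob
import os

import numpy as np
import pandas as pd
from PIL import Image

def batch_generator(image_dir, csv_path, batch_size=8, image_size=(256, 256)):
    """Yield (images, atl03_rows) batches.

    Assumes each image has a matching ATL03 row keyed by its base file
    name via a hypothetical `match_key` column -- not necessarily the
    notebook's actual schema.
    """
    df = pd.read_csv(csv_path)
    paths = sorted(glob.glob(os.path.join(image_dir, "*.tif")))
    for start in range(0, len(paths), batch_size):
        images, rows = [], []
        for path in paths[start:start + batch_size]:
            key = os.path.splitext(os.path.basename(path))[0]
            match = df.loc[df["match_key"] == key]
            if match.empty:
                continue  # skip tiles with no ATL03 counterpart
            img = Image.open(path).resize(image_size)
            images.append(np.asarray(img, dtype=np.float32))
            # Numeric CSV columns assumed for illustration.
            rows.append(match.iloc[0].to_numpy(dtype=np.float32))
        if images:
            yield np.stack(images), np.stack(rows)
```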
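The notebook's three training techniques are not reproduced here; as rough orientation, the sketch below shows one common multimodal pattern consistent with the models named above: a convolutional branch for the image, an LSTM branch for the ATL03 sequence, and fusion by concatenation. PyTorch, the layer sizes, and the class name are assumptions for illustration, not the notebook's actual architecture.

```python
import torch
import torch.nn as nn

class FusionModel(nn.Module):
    """Illustrative late-fusion model: CNN branch for Sentinel-2 tiles,
    LSTM branch for ICESat-2 ATL03 sequences. All sizes are placeholders."""

    def __init__(self, seq_features=4, hidden=64, num_classes=2):
        super().__init__()
        self.image_branch = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # -> (B, 16, 1, 1)
            nn.Flatten(),             # -> (B, 16)
        )
        self.seq_branch = nn.LSTM(seq_features, hidden, batch_first=True)
        self.head = nn.Linear(16 + hidden, num_classes)

    def forward(self, images, sequences):
        img_feat = self.image_branch(images)      # (B, 16)
        _, (h_n, _) = self.seq_branch(sequences)  # h_n: (1, B, hidden)
        fused = torch.cat([img_feat, h_n[-1]], dim=1)
        return self.head(fused)

# Example forward pass with dummy tensors:
model = FusionModel()
logits = model(torch.randn(2, 3, 256, 256), torch.randn(2, 100, 4))
```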
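The description of `src/util.py` above maps onto helpers like the following; the names and signatures here are illustrative guesses, not the module's actual API.

```python
import os

import pandas as pd

def rename_rows(csv_path, column, mapping, out_path=None):
    """Rename values in one CSV column according to `mapping`
    (hypothetical helper; see src/util.py for the real one)."""
    df = pd.read_csv(csv_path)
    df[column] = df[column].replace(mapping)
    df.to_csv(out_path or csv_path, index=False)
    return df

def list_file_names(directory, extension=".tif"):
    """Return sorted base names (no extension) of files in `directory`."""
    return sorted(
        os.path.splitext(name)[0]
        for name in os.listdir(directory)
        if name.endswith(extension)
    )
```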
To contribute:

- Fork the repository.
- Create a new branch (`git checkout -b feature-branch`).
- Commit your changes (`git commit -m 'Add new feature'`).
- Push to the branch (`git push origin feature-branch`).
- Create a new Pull Request.
This project is licensed under the MIT License.