On-spot training to enhance the performance of traditional machine learning algorithms, applied to the prediction of breast cancer malignancy from ultrasound images
⚠️: The software provided here is a small academic project developed in the context of the course “Machine Learning in Health Care”, held in Spring Term 2024 by Professor Christian Salvatore and Professor Claudia Cava.
_The code is written with the sole purpose of taking part in the **Automatic Diagnosis of Breast Cancer IUSS 23-24** Kaggle competition, and MUST NOT be used for diagnostics. The authors are not responsible for any misuse or out-of-scope use of the software._
In this project, developed as a possible solution to the **Automatic Diagnosis of Breast Cancer | IUSS 23-24** Kaggle competition, we explored how on-spot training can enhance the performance of traditional machine learning methods on tabular data, applying it to the prediction of breast cancer malignancy from ultrasound images.
To reproduce our results, make sure to go through the following steps:
First of all, clone this GitHub repository:
```bash
git clone https://github.com/AstraBert/breastcancer_contextml
```
Now move into the cloned folder and install the required dependencies:

```bash
cd breastcancer_contextml
python3 -m pip install -r scripts/requirements.txt
```
You will also have to pull the Qdrant Docker image:

```bash
docker pull qdrant/qdrant:latest
```
Once the installation is complete, we can begin building!🚀
The first piece of preprocessing, i.e. image feature extraction, has already been done (there are no images in this repository): the results, obtained through pyradiomics, are saved in `extracted_features.csv`. We have 547 training instances with 102 features, but:

- the number of features is high relative to the number of instances
- the two classes (benign and malignant) are imbalanced
We therefore apply PCA (Principal Component Analysis) to capture the components that encompass most of the variability in the dataset, and we resample the training instances with SMOTE (Synthetic Minority Oversampling Technique) so that the two classes are balanced:
```bash
python3 scripts/preprocessing.py
```
Now we have all the training data, consisting of 775 instances and 16 features, in `combined_pca.csv`, and all the test data in `extracted_test_pca.csv`.
In this step we launch a local Qdrant vector database with Docker and build a Qdrant collection from the preprocessed training data:
```bash
docker run -p 6333:6333 -v $(pwd)/qdrant_storage:/qdrant_storage:z qdrant/qdrant
```

Then, in a separate terminal:

```bash
python3 scripts/qdrant_collection.py
```
In this step we run the contextual machine learning pipeline, which implements the on-spot training described above:
```bash
python3 scripts/contextual_machine_learning.py
```
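To illustrate the on-spot idea (this is not the repository's exact code): for each test sample, a fresh classifier is fitted only on the training instances most similar to it, and that local model produces the single prediction. In this sketch scikit-learn's `NearestNeighbors` stands in for the Qdrant retrieval step, and the classifier and neighborhood size are arbitrary choices:

```python
# On-spot ("contextual") training sketch: one small model per test sample,
# fitted only on that sample's nearest training instances.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)
X_train = rng.normal(size=(200, 16))
y_train = (X_train[:, 0] > 0).astype(int)  # toy labels
X_test = rng.normal(size=(10, 16))

index = NearestNeighbors(n_neighbors=50).fit(X_train)

predictions = []
for x in X_test:
    # Retrieve the 50 training instances closest to this test sample
    _, idx = index.kneighbors(x.reshape(1, -1))
    context_X, context_y = X_train[idx[0]], y_train[idx[0]]
    if len(set(context_y)) == 1:
        # Degenerate neighborhood: every neighbor has the same label
        predictions.append(int(context_y[0]))
        continue
    # Train a fresh model "on the spot", only on this local context
    model = LogisticRegression().fit(context_X, context_y)
    predictions.append(int(model.predict(x.reshape(1, -1))[0]))

print(predictions)
```

Retraining per sample is slower than fitting one global model, but it lets the classifier adapt its decision boundary to the neighborhood of each test instance.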
The software presented here is open-source and distributed under the MIT license.
As stated above, the project was developed for learning purposes and should be used only as such.