Hi Aman,
This is a nice piece of work. I have a few comments:
- I wasn't sure exactly which dataset you used, as there are several on Kaggle, so I used this one: https://tinyurl.com/dn8jn6jx.
This dataset contains 100 normal head CT slices and 100 slices from patients with hemorrhage, with each slice coming from a different patient. No distinction is made between the kinds of hemorrhage seen in the scans. The images were collected from an internet search and vary in size and resolution. The main point of using this dataset is to develop ways of predicting imaging findings even with limited data of varying quality.
I couldn't exactly replicate the way you handled the data. My MMA version requires you to specify the Input and Output for each item in the training dataset, so perhaps you are using an earlier version of MMA.
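For reference, my setup looked roughly like this (a sketch only; hemorrhageImages and normalImages are placeholder names for the two sets of imported images):

    (* each training item is an association with explicit "Input" and "Output" keys *)
    labelledData = Join[
       Map[<|"Input" -> #, "Output" -> "hemorrhage"|> &, hemorrhageImages],
       Map[<|"Input" -> #, "Output" -> "normal"|> &, normalImages]
    ];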
My main criticism is that you focused exclusively on the results for the positive cases, where you achieved 100% accuracy. That leaves the question of false positives unaddressed.
Also, I believe you are using the same set of data for both validation and testing, which may explain why you were able to achieve 100% accuracy!
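For what it's worth, keeping the sets separate is straightforward; I did something along these lines (net stands for whichever network you trained, and the 140/30/30 split is just the proportion I happened to use on the 200 slices):

    (* shuffle, then carve out disjoint training, validation and test sets *)
    shuffled = RandomSample[labelledData];
    trainSet = shuffled[[1 ;; 140]];
    validationSet = shuffled[[141 ;; 170]];
    testSet = shuffled[[171 ;; 200]];
    trainedNet = NetTrain[net, trainSet, ValidationSet -> validationSet];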
In my version I looked at the results for both positive and negative scans and found an overall accuracy of 90%, which is exactly in line with the results reported on Kaggle.
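The overall figures came from the held-out test set, roughly like this (trainedNet and testSet being the objects from the sketch above):

    (* accuracy and confusion matrix on the untouched test set *)
    NetMeasurements[trainedNet, testSet, {"Accuracy", "ConfusionMatrixPlot"}]

This also speaks to the false-positive question, since the confusion matrix shows how many normal scans were flagged as hemorrhage.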
Let me know if you would be interested in collaborating, should you decide to do further work.
Jonathan