Projects
- Spectral Normalization is the current state-of-the-art method for enforcing a Lipschitz constraint on the discriminator of a GAN. I was especially interested in the discriminator's sensitivity to small perturbations of its input image. Although stability to such perturbations is a desirable property of the discriminator, it is not satisfied in practice, and some interesting trends are observed.
- Contribution:
- Detected and analyzed problems with spectrally normalized GANs and attempted to improve them to produce meaningful, coherent generations.
- Designed a method to quantitatively estimate local and global coherence captured by the discriminator.
- GitHub link
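The constraint at the heart of this project can be sketched in a few lines of NumPy: spectral normalization divides a layer's weight matrix by an estimate of its largest singular value, obtained by power iteration. The shapes and iteration count below are illustrative, not taken from the project.

```python
import numpy as np

def spectral_norm(W, n_iters=50):
    """Estimate the largest singular value of W by power iteration,
    as done in spectral normalization (Miyato et al., 2018)."""
    u = np.random.default_rng(0).normal(size=W.shape[0])
    for _ in range(n_iters):
        v = W.T @ u
        v /= np.linalg.norm(v)
        u = W @ v
        u /= np.linalg.norm(u)
    return float(u @ W @ v)  # sigma_max(W) estimate

rng = np.random.default_rng(1)
W = rng.normal(size=(64, 32))       # an illustrative dense-layer weight
W_sn = W / spectral_norm(W)         # normalized weight: spectral norm ~ 1
```

Dividing every layer's weight this way bounds each layer's Lipschitz constant by 1, which is what makes the discriminator's sensitivity to small input perturbations worth measuring in the first place.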
- This is an ongoing project in which we detect objects with a different method from the one used in Object-based Visual Reasoning. The paper uses a pre-trained Faster R-CNN to detect CLEVR objects, whereas we use traditional computer-vision tools, reducing training time and speeding up reasoning at test time.
- This was also the final project of the Deep Learning course at BITS Goa; as a Teaching Assistant for the course, I mentored about 15 groups working on it.
- GitHub link
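Because CLEVR objects are uniformly coloured shapes on a plain background, a classical pipeline can stand in for a learned detector. The sketch below (my own illustration, not the project's code) uses colour thresholding followed by connected-component labelling in plain NumPy, the kind of step that would typically be done with OpenCV:

```python
import numpy as np
from collections import deque

def detect_objects(img, color, tol=30):
    """Toy classical detector for uniformly coloured CLEVR-style objects:
    colour thresholding, then BFS connected-component labelling."""
    mask = np.abs(img.astype(int) - color).sum(axis=-1) < tol
    seen = np.zeros_like(mask, dtype=bool)
    boxes = []
    H, W = mask.shape
    for y in range(H):
        for x in range(W):
            if mask[y, x] and not seen[y, x]:
                # flood-fill one connected component of matching pixels
                q = deque([(y, x)])
                seen[y, x] = True
                ys, xs = [], []
                while q:
                    cy, cx = q.popleft()
                    ys.append(cy); xs.append(cx)
                    for ny, nx in ((cy+1,cx),(cy-1,cx),(cy,cx+1),(cy,cx-1)):
                        if 0 <= ny < H and 0 <= nx < W and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                boxes.append((min(xs), min(ys), max(xs), max(ys)))  # x0, y0, x1, y1
    return boxes

# synthetic scene: two red squares on a grey background
img = np.full((60, 60, 3), 128, dtype=np.uint8)
img[5:15, 5:15] = (255, 0, 0)
img[30:45, 30:45] = (255, 0, 0)
boxes = detect_objects(img, np.array([255, 0, 0]))
```

No training is needed at all, which is the source of the speed-up over Faster R-CNN at test time.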
- This project aimed to illustrate the evolution of cooperation in our society by examining cooperative hunting in lions, a scenario biologists have studied extensively over the years. The objective was to simulate the random behaviour of the animals and show how their actions converge or diverge under different conditions.
- Contribution:
- Through Nash Q-Learning we taught multiple predators the desired behaviour and simulated three scenarios under different conditions: one in which the animals fight, one in which they cooperate, and one in which they mix both strategies.
- GitHub link
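The cooperate-or-hunt-alone trade-off can be sketched with a stag-hunt payoff matrix. The snippet below is a deliberately simplified stand-in, independent tabular Q-learning on a repeated two-player game rather than the full Nash Q-Learning update, but it shows how two agents can converge to the cooperative joint hunt; all payoffs and hyperparameters are invented for illustration:

```python
import numpy as np

# Stag-hunt payoffs: a joint hunt pays most, but only if both predators
# commit; hunting alone is safe but low-reward. Action 0 = cooperate, 1 = solo.
PAYOFF = {
    (0, 0): (4, 4), (0, 1): (0, 2),
    (1, 0): (2, 0), (1, 1): (2, 2),
}

def train(episodes=5000, alpha=0.1, eps=0.1, seed=0):
    rng = np.random.default_rng(seed)
    Q = np.zeros((2, 2))  # Q[agent, action] for the single-state repeated game
    for _ in range(episodes):
        # epsilon-greedy action for each agent
        acts = [a if rng.random() > eps else rng.integers(2)
                for a in Q.argmax(axis=1)]
        r = PAYOFF[(acts[0], acts[1])]
        for i in (0, 1):
            Q[i, acts[i]] += alpha * (r[i] - Q[i, acts[i]])
    return Q

Q = train()  # both agents end up preferring action 0 (cooperate)
```

Changing the payoff entries shifts the learned behaviour between fighting, cooperating, and mixed strategies, which mirrors the three conditions simulated in the project.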
- Project component of the Neural Networks and Fuzzy Logic course, in which we had to classify sports videos from the UCF dataset according to the action being performed.
- Contribution:
- Built a minimal model using autoencoders, ConvNets, and LSTMs; the bagged model we submitted was the smallest of all submissions.
- GitHub link
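The sequence-modelling half of such a classifier, an LSTM run over per-frame features produced by a convolutional or autoencoder front end, can be sketched in NumPy. Everything below (dimensions, random untrained weights) is illustrative, not the project's actual architecture:

```python
import numpy as np

def lstm_step(x, h, c, params):
    """One LSTM step over a per-frame feature vector x (e.g. the output
    of a ConvNet or autoencoder encoder). Gates are stacked in z as
    [input, forget, cell-candidate, output]."""
    Wx, Wh, b = params
    z = Wx @ x + Wh @ h + b
    H = h.size
    i, f, g, o = z[:H], z[H:2*H], z[2*H:3*H], z[3*H:]
    i, f, o = 1/(1+np.exp(-i)), 1/(1+np.exp(-f)), 1/(1+np.exp(-o))
    c = f * c + i * np.tanh(g)
    h = o * np.tanh(c)
    return h, c

def classify_clip(frame_feats, params, W_out):
    h = np.zeros(W_out.shape[1]); c = np.zeros_like(h)
    for x in frame_feats:       # run the LSTM across the frame sequence
        h, c = lstm_step(x, h, c, params)
    return int(np.argmax(W_out @ h))  # class scores from final hidden state

rng = np.random.default_rng(0)
D, H, n_classes, T = 16, 8, 5, 10   # feature dim, hidden size, classes, frames
params = (rng.normal(size=(4*H, D)), rng.normal(size=(4*H, H)), np.zeros(4*H))
W_out = rng.normal(size=(n_classes, H))
pred = classify_clip(rng.normal(size=(T, D)), params, W_out)
```

Compressing frames through an autoencoder before the LSTM is one way to keep the parameter count, and hence the model size, small.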