Date | Presenter | Topic
---|---|---
25-09-2024 | Aman Kumar Singh | Human activity imitation by a robotic manipulator in the ROS environment using YOLO and DBSCAN
Abstract: Modern automation increasingly relies on robots capable of precise and flexible manipulation in unknown environments. This work presents a collaborative pick-and-place system built around the Mycobot 280 Jetson manipulator integrated with ROS MoveIt, designed for vision-based handling of complex objects.
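The following is a minimal, hedged sketch of the kind of perception step such a system relies on (not the presented implementation): 3-D points belonging to detected objects are grouped with DBSCAN, and each cluster centroid becomes a candidate pick target. The point cloud, `eps`, and `min_samples` values are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Illustrative stand-in for points extracted from YOLO detections (two objects on a table).
points = np.vstack([
    np.random.normal([0.30, 0.10, 0.05], 0.01, (50, 3)),   # object A
    np.random.normal([0.45, -0.05, 0.05], 0.01, (50, 3)),  # object B
])

# Cluster the points; label -1 marks noise, every other label is one object instance.
labels = DBSCAN(eps=0.03, min_samples=5).fit_predict(points)
pick_targets = [points[labels == k].mean(axis=0) for k in set(labels) if k != -1]
print("candidate pick targets:", pick_targets)
```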
Date | Presenter | Topic
---|---|---
06-06-2024 | Atif Hassan | Spend less time tuning and more time chilling: Some tips and tricks to achieve good performance on any Deep Learning Task (Download ppt)
Abstract: In this talk, we'll discuss practical training methods that can help you achieve improved performance in no time.
|||
21-03-2024 | Siddharth Jaiswal | Fairness, Biasness and Everything in Between (Source Material)
Abstract: The talk covered bias and the many forms of it that plague the world today.
|||
08-03-2024 | Animesh Sachan | An Introduction to Graph Machine Learning - I (Download ppt)
Abstract: Deep learning has revolutionized many machine learning tasks in recent years. Traditional machine learning models predominantly handle data in Euclidean spaces, but the surge in complex, interconnected data types necessitates a paradigm shift towards non-Euclidean domains. This is where graph machine learning emerges as a pivotal innovation, providing robust frameworks for analyzing and interpreting data that inherently exists in the form of graphs or networks. Graph learning proves effective for many tasks, such as classification, link prediction, and matching. Generally, graph learning methods extract relevant features of graphs by taking advantage of machine learning algorithms. This talk aims to demystify the prominent architectures within graph machine learning, its wide-ranging applications, and the methodologies for employing Graph Neural Networks (GNNs) to address specific problems.
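As a hedged illustration of the message-passing idea at the core of GNNs, the sketch below performs one round of GCN-style neighborhood aggregation on a toy graph; the adjacency matrix, feature dimensions, and weights are invented for the example.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN-style layer: add self-loops, average neighbor features,
    apply a learned linear transform, then a ReLU nonlinearity."""
    A_hat = A + np.eye(A.shape[0])             # adjacency with self-loops
    D_inv = np.diag(1.0 / A_hat.sum(axis=1))   # row-normalize by node degree
    return np.maximum(D_inv @ A_hat @ H @ W, 0.0)

# Toy graph: 4 nodes, 3-dimensional input features, 2-dimensional output features.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
H = np.random.randn(4, 3)   # node features
W = np.random.randn(3, 2)   # learnable weights
print(gcn_layer(A, H, W).shape)  # (4, 2)
```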
|||
29-02-2024 | Ashraf Haroon Rashid | SSEnse: Stochastic Method For Discovering Sparse Ensembles of Machine Learning Models (Download References)
Abstract: Ensemble learning has proven highly effective at producing robust learners. However, modern ensemble learning methods can be computationally cumbersome during training and inference due to the presence of a large number of individual machine learning (ML) or deep learning (DL) models. To this end, many techniques have been proposed to reduce the computational overhead of deploying such large ensembles. These techniques belong to the broad domain of Static Ensemble Selection (SES), or Ensemble Pruning, in which a sparser subset of models is chosen as a representative ensemble. SES techniques produce a sparse ensemble by using specially designed algorithms and objective functions for a specific metric, such as classification accuracy or prediction diversity. These techniques sometimes also make hard assumptions on other properties of the objective, such as smoothness and convexity. However, with the advent of modern AI into myriad domains, numerous other metrics such as fairness, precision, confidence, and trustworthiness continue to emerge. As a result, these metrics need to be taken into consideration while selecting a subset of models from the ensemble. Existing SES techniques also ignore several aspects that may be important to the user while selecting a subset, such as the computational budget, the tolerance for performance degradation, and the amount of required sparsity. In this work, we propose Sparse Stochastic Ensembles (SSEnse), a framework that obtains a sparser ensemble while being sensitive to the aforementioned user settings. SSEnse is a generic framework that uses a novel Stochastic Process (SP) to select a sparser subset while optimizing any user-specified metric, as long as the metric lies in the interval [0, 1], without requiring a specially designed objective function or hard assumptions such as smoothness or convexity of the given metric.
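To make the problem setting concrete, here is a deliberately simple sketch of ensemble pruning by stochastic search; it is plain random subset search under a model budget, not the SSEnse algorithm, and all names and data are illustrative. Any metric returning a value in [0, 1] can be plugged in.

```python
import numpy as np

def select_sparse_ensemble(preds, y, metric, max_models, n_trials=500, seed=0):
    """Toy stochastic ensemble pruning (not SSEnse): sample random subsets of at
    most `max_models` members and keep the one whose majority vote scores best
    under the user-supplied `metric` (any function with values in [0, 1])."""
    rng = np.random.default_rng(seed)
    best_subset, best_score = None, -1.0
    for _ in range(n_trials):
        k = int(rng.integers(1, max_models + 1))
        subset = rng.choice(preds.shape[0], size=k, replace=False)
        vote = (preds[subset].mean(axis=0) >= 0.5).astype(int)  # majority vote
        score = metric(y, vote)
        if score > best_score:
            best_subset, best_score = subset, score
    return best_subset, best_score

# Toy data: 10 binary classifiers, each roughly 70% accurate on 200 samples.
rng = np.random.default_rng(1)
y = rng.integers(0, 2, 200)
correct = rng.random((10, 200)) < 0.7
preds = np.where(correct, y, 1 - y)

accuracy = lambda y_true, y_pred: float((y_true == y_pred).mean())
subset, score = select_sparse_ensemble(preds, y, accuracy, max_models=3)
print("chosen models:", subset, "score:", round(score, 3))
```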
|||
15-02-2024 | Ashraf Haroon Rashid | Ensemble Learning: The Dark Horse of Modern Machine Learning Landscape (Download References)
Background: Ensemble learning's history traces back to the late 20th century, with the early 1990s marking its inception. The term "ensemble learning" gained prominence in the mid-1990s, as researchers explored methods to combine diverse models for better generalization. Leo Breiman's pioneering work on bagging (bootstrap aggregating) in 1996 introduced the idea of training multiple models on bootstrapped subsets of the data, reducing overfitting and improving stability. AdaBoost, introduced in 1995, became a landmark algorithm, showcasing the power of boosting in enhancing weak learners, and further boosting algorithms proposed by Robert Schapire and Yoram Singer sequentially train models to correctly classify previously misclassified instances. In the early 2000s, Random Forests, also introduced by Leo Breiman, gained prominence as an ensemble technique that builds decision trees and decorrelates them for improved performance. As machine learning advanced, ensemble methods found applications in diverse fields, showcasing their robustness and adaptability, and ensemble learning continued to evolve, adapting to the rise of deep learning.

Details of the talk: This research talk delves into the rich history of ensemble learning, tracing its evolution as a pivotal concept shaping the landscape of modern machine learning. Beginning with its roots, we explore the foundations laid by early pioneers and unravel the journey of ensemble learning's integration into machine learning methodologies. A particular focus is placed on the transformative influence of bootstrapping and boosting techniques, elucidating their role in enhancing model robustness and performance. These techniques not only contribute to the diversity within ensembles but have also played a crucial role in the development of powerful algorithms. Furthermore, the talk investigates how ensemble learning serves as a fundamental building block for contemporary deep neural networks, such as residual networks (ResNets) and densely connected networks (DenseNets). Another intriguing aspect explored is the pervasive presence of ensemble learning in advanced facets of deep learning such as distributed learning and federated learning. Through this exploration, the talk aims to unveil the hidden influence of ensemble learning, providing a comprehensive understanding of its historical significance and its integral role in molding the landscape of modern machine learning.
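As a small, hedged illustration of the bagging idea described above (many models trained on bootstrapped subsets of the data, with predictions aggregated by voting), the following scikit-learn snippet bags decision trees on a synthetic dataset; the dataset and hyperparameters are arbitrary.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic binary classification problem, split into train and test sets.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# 50 decision trees, each fit on a bootstrap sample; predictions combined by voting.
bag = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50, random_state=0)
bag.fit(X_tr, y_tr)
print("bagged test accuracy:", bag.score(X_te, y_te))
```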
|||
08-02-2024 | Tejas Pote | Sparsity in Deep Neural Networks and the Lottery Ticket Hypothesis (paper, paper, paper) (Download ppt)
Abstract: Deep neural networks deliver impressive generalization performance across a wide variety of tasks such as image classification and language modeling. However, the growing size of these networks raises concerns about increasing computational and storage overheads. Moreover, parameter redundancy in modern deep networks is well established given their overparameterized nature. To this end, a number of pruning techniques are widely employed that drop redundant model parameters based on various criteria while delivering comparable model performance. In this talk, we discuss the Lottery Ticket Hypothesis, which empirically demonstrates the existence of subnetworks (winning tickets) within dense, randomly-initialized, feed-forward networks and outlines a training procedure that arrives at such winning subnetworks by training them from scratch. The method succeeds in reducing parameter counts by over 90% while converging faster than the dense network. The talk will revolve around an overview of the method along with the experimental details. We also plan to discuss the theoretical aspects of the technique and develop deeper insights into 'what makes the lottery tickets win'.
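A minimal sketch of the rewind-and-prune loop at the heart of the Lottery Ticket Hypothesis, with a simple stand-in for the actual training step; the layer shape, pruning fraction, and number of rounds are illustrative assumptions.

```python
import numpy as np

def lottery_ticket_round(w_trained, w_init, mask, prune_frac=0.2):
    """One round of iterative magnitude pruning: drop the smallest-magnitude
    surviving weights, then rewind the survivors to their original
    initialization before the next training run."""
    threshold = np.quantile(np.abs(w_trained[mask]), prune_frac)
    new_mask = mask & (np.abs(w_trained) > threshold)
    w_rewound = np.where(new_mask, w_init, 0.0)  # reset survivors, zero the pruned weights
    return w_rewound, new_mask

# Toy layer: repeated prune-and-rewind rounds shrink the surviving parameter count.
w_init = np.random.randn(256, 128)
mask = np.ones_like(w_init, dtype=bool)
w = w_init.copy()
for _ in range(3):
    w_trained = w + 0.1 * np.random.randn(*w.shape) * mask  # stand-in for a training run
    w, mask = lottery_ticket_round(w_trained, w_init, mask)
print("sparsity:", round(1 - mask.mean(), 3))
```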