The Foundations block comprises two courses where we get our hands dirty with Statistics and Code, head-on. These two courses set our foundations so that we sail through the rest of the journey with minimal hindrance.
Python for AI & ML
This course will get us comfortable with Python, the programming language most widely used for Artificial Intelligence and Machine Learning. We start with a high-level idea of Object-Oriented Programming and then learn the essential vocabulary (keywords), grammar (syntax), and sentence formation (usable code) of the language. The course takes you from an introduction to AI and ML through the core concepts, using one of the most popular and advanced programming languages, Python.
- Python Basics
Python is a widely used high-level programming language with a simple, easy-to-learn syntax that emphasises readability. This module will drive you through all the fundamentals of programming in Python, and at the end, you will execute your first Python program.
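As a taste of what that first program might look like, here is a minimal sketch (the function name and greeting text are illustrative only):

```python
# A first Python program: a variable, a function, and printed output.
def greet(name):
    """Return a greeting for the given name."""
    return f"Hello, {name}!"

message = greet("world")
print(message)  # Hello, world!
```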
- Jupyter notebook – Installation & function
You will learn to implement Python for AI and ML using Jupyter Notebook. This open-source web application allows us to create and share documents containing live code, equations, visualisations, and narrative text.
- Python functions, packages and routines
Functions and Packages are used for code reusability and program modularity, respectively. This module will help you understand and implement Functions and Packages in Python for AI.
- Pandas, NumPy, Matplotlib, Seaborn
This module will give you a deep understanding of exploring data sets using Pandas, NumPy, Matplotlib, and Seaborn. These are the most widely used Python libraries.
- Working with data structures, arrays, vectors & data frames
Data Structures are one of the most significant concepts in any programming language. They help in arranging leader-board games by ranking each player, and they are also used in speech and image processing for AI and ML. In this module, you will learn Data Structures like Arrays, Lists, and Tuples, and learn to implement Vectors and Data Frames in Python.
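A minimal sketch of these structures side by side, assuming NumPy and pandas are installed (the player names and scores are made up):

```python
# Core Python data structures and their NumPy/pandas counterparts.
import numpy as np
import pandas as pd

scores = [82, 91, 77]      # list: ordered and mutable
point = (3, 4)             # tuple: ordered and immutable
vector = np.array(scores)  # NumPy array: a fast numeric vector
frame = pd.DataFrame({"player": ["A", "B", "C"], "score": scores})

# Rank players for a leader board, highest score first.
leaderboard = frame.sort_values("score", ascending=False)
print(leaderboard)
```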
Here we learn the terms and concepts vital to Exploratory Data Analysis and Machine Learning in general. From the very basics of taking a simple average to the advanced process of finding statistical evidence to confirm or deny conjectures and speculations, we will learn a specific set of tools required to analyze and draw actionable insights from data.
- Descriptive Statistics
Descriptive Statistics is the study of data analysis by describing and summarising data sets – for example, a sample of a region's population or the marks achieved by 50 students. This module will help you understand Descriptive Statistics in Python for Machine Learning.
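For instance, the marks of ten hypothetical students can be summarised with the standard library's `statistics` module:

```python
# Describing a small sample of student marks.
import statistics

marks = [62, 75, 88, 75, 90, 54, 70, 85, 75, 66]
print(statistics.mean(marks))    # average mark
print(statistics.median(marks))  # middle value of the sorted marks
print(statistics.mode(marks))    # most frequent mark
print(statistics.stdev(marks))   # sample standard deviation
```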
- Inferential Statistics
This module will let you explore fundamental concepts of using data for estimation and assessing theories using Python.
- Probability & Conditional Probability
Probability is a mathematical tool used to study randomness, such as the likelihood of an event occurring in a random experiment. Conditional Probability is the probability of an event occurring given that another event has already occurred. In this module, you will learn about Probability and Conditional Probability in Python for Machine Learning.
- Probability Distributions – Types of distribution – Binomial, Poisson & Normal distribution
A statistical function reporting all the probable values that a random variable takes within a specific range is known as a Probability Distribution. This module will teach you about Probability Distributions and various types like Binomial, Poisson, and Normal Distribution in Python.
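All three distributions can be evaluated by hand with nothing but the standard library; a sketch (the chosen parameters are arbitrary):

```python
# Evaluating the Binomial and Poisson PMFs and the Normal PDF directly.
import math

def binomial_pmf(k, n, p):
    """P(X = k) for X ~ Binomial(n, p)."""
    return math.comb(n, k) * p**k * (1 - p) ** (n - k)

def poisson_pmf(k, lam):
    """P(X = k) for X ~ Poisson(lam)."""
    return lam**k * math.exp(-lam) / math.factorial(k)

def normal_pdf(x, mu, sigma):
    """Density of Normal(mu, sigma) at x."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))

print(binomial_pmf(3, 10, 0.5))   # chance of 3 heads in 10 fair coin tosses
print(poisson_pmf(2, 4.0))        # chance of 2 arrivals when 4 are expected
print(normal_pdf(0.0, 0.0, 1.0))  # standard normal density at its mean
```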
- Hypothesis Testing
This module will teach you about Hypothesis Testing in Machine Learning using Python. Hypothesis Testing is a necessary procedure in Applied Statistics for doing experiments based on the observed/surveyed data.
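As a sketch, assuming SciPy is available, a two-sample t-test on made-up exam scores might look like this:

```python
# Two-sample t-test: do two teaching methods produce different mean scores?
from scipy import stats

method_a = [78, 82, 85, 74, 80, 79, 83]
method_b = [71, 69, 75, 73, 68, 72, 70]

t_stat, p_value = stats.ttest_ind(method_a, method_b)
alpha = 0.05  # significance level
if p_value < alpha:
    print("Reject the null hypothesis: the means differ.")
else:
    print("Fail to reject the null hypothesis.")
```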
The next module is the Machine Learning online course, which will teach us Machine Learning techniques from scratch, along with the popular classical ML algorithms in each category.
In this course we learn about Supervised ML algorithms, how they work, and their scope of application – Regression and Classification.
- Multiple Variable Linear regression
Linear Regression is one of the most popular ML algorithms used for predictive analysis in Machine Learning. It is a technique that assumes a linear relationship between the independent variable(s) and the dependent variable.
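A minimal sketch with scikit-learn, fitting a model to synthetic data where the true relationship is known to be y = 2·x₁ + 3·x₂ + 1:

```python
# Multiple-variable linear regression on exactly linear synthetic data.
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.array([[1, 1], [2, 1], [2, 3], [4, 2], [3, 5]])
y = 2 * X[:, 0] + 3 * X[:, 1] + 1  # the relationship the model should recover

model = LinearRegression().fit(X, y)
print(model.coef_, model.intercept_)  # ≈ [2. 3.] and ≈ 1.0
print(model.predict([[5, 5]]))        # ≈ [26.]
```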
- Multiple regression
Multiple Regression is a supervised machine learning technique involving several data variables for analysis. It is used for predicting one dependent variable using multiple independent variables. This module will drive you through all the concepts of Multiple Regression used in Machine Learning.
- Logistic regression
Logistic Regression is one of the most popular ML algorithms, like Linear Regression. It is a simple classification algorithm to predict the categorical dependent variables with the assistance of independent variables. This module will drive you through all the concepts of Logistic Regression used in Machine Learning.
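A sketch with scikit-learn on a made-up, linearly separable dataset (hours studied vs. pass/fail):

```python
# Logistic regression: predicting pass (1) / fail (0) from hours studied.
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([[0.5], [1.0], [1.5], [2.0], [4.0], [4.5], [5.0], [5.5]])
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # clearly separated classes

clf = LogisticRegression().fit(X, y)
print(clf.predict([[1.0], [5.0]]))  # low hours -> fail, high hours -> pass
```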
- K-NN classification
k-NN Classification or k-Nearest Neighbours Classification is one of the most straightforward machine learning algorithms for solving regression and classification problems. You will learn about the usage of this algorithm through this module.
- Naive Bayes classifiers
The Naive Bayes Algorithm is used to solve classification problems using Bayes' Theorem. This module will teach you about the theorem and how to solve problems using it.
- Support vector machines
Support Vector Machine or SVM is also a popular ML algorithm used for regression and classification problems/challenges. You will learn how to implement this algorithm through this module.
We learn what Unsupervised Learning algorithms are, how they work, and their scope of application – Clustering and Dimensionality Reduction.
- K-means clustering
K-means clustering is a popular unsupervised learning algorithm to resolve the clustering problems in Machine Learning or Data Science. In this module, you will learn how the algorithm works and later implement it.
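A sketch with scikit-learn on two obviously separated groups of 2-D points (the coordinates are made up):

```python
# K-means partitioning six points into their two natural clusters.
import numpy as np
from sklearn.cluster import KMeans

points = np.array([[1, 1], [1.5, 2], [1, 0], [10, 10], [10.5, 11], [9, 10]])
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
print(km.labels_)  # first three points share a label; last three share the other
```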
- Hierarchical clustering
Hierarchical Clustering is an ML technique or algorithm to build a hierarchy or tree-like structure of clusters. For example, it is used to combine a list of unlabeled datasets into a cluster in the hierarchical structure. This module will teach you the working and implementation of this algorithm.
- High-dimensional clustering
High-dimensional Clustering is the clustering of datasets with hundreds or thousands of dimensions. It requires specialised techniques, since conventional distance measures become less meaningful as the number of dimensions grows.
- Dimension Reduction-PCA
Principal Component Analysis (PCA) for Dimension Reduction is a technique to reduce the complexity of a model, for example by reducing the number of input variables for a predictive model to avoid overfitting. Dimension Reduction with PCA is a well-known technique in Python for ML, and you will learn everything about this method in this module.
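A sketch with scikit-learn: synthetic 3-D data that really lies along a single direction is reduced to one principal component, which captures nearly all the variance:

```python
# PCA reducing 3-D data that is essentially 1-D down to one component.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
t = rng.normal(size=(100, 1))
# Three columns that are all multiples of t, plus a little noise.
X = np.hstack([t, 2 * t, -t]) + 0.01 * rng.normal(size=(100, 3))

pca = PCA(n_components=1)
X_reduced = pca.fit_transform(X)
print(pca.explained_variance_ratio_)  # first component explains almost everything
```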
In this Machine Learning online course, we discuss supervised standalone models’ shortcomings and learn a few techniques, such as Ensemble techniques to overcome these shortcomings.
- Decision Trees
Decision Tree is a Supervised Machine Learning algorithm used for both classification and regression problems. It is a hierarchical structure where internal nodes indicate the dataset features, branches represent the decision rules, and each leaf node indicates the result.
- Random Forests
Random Forest is a popular supervised learning algorithm in machine learning. As the name indicates, it builds several decision trees on different subsets of the provided dataset and then combines (averages) their predictions to enhance predictive accuracy.
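A sketch using the iris dataset that ships with scikit-learn:

```python
# A random forest classifier evaluated on a held-out test split.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(X_train, y_train)
print(forest.score(X_test, y_test))  # accuracy on unseen data
```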
- Bagging
Bagging, also known as Bootstrap Aggregation, is a meta-algorithm in machine learning used for enhancing the stability and accuracy of machine learning algorithms, which are used in statistical classification and regression.
- Boosting
As the name suggests, Boosting is a meta-algorithm in machine learning that builds a strong classifier from several weak classifiers. Boosting can be further classified into Gradient Boosting and AdaBoost (Adaptive Boosting).
Learn various concepts that will be useful in creating functional machine learning models like model selection and tuning, model performance measures, ways of regularisation, etc.
- Feature engineering
Feature engineering is the process of transforming data from its raw state into a state suitable for modelling. It converts data columns into features that better represent a given situation. How distinctly a feature represents an entity directly impacts the model's quality in predicting its behaviour. In this module, you will learn the steps involved in Feature Engineering.
- Model selection and tuning
This module will teach you how to choose the model that best suits the problem by evaluating each candidate model against the requirements.
- Model performance measures
In this module, you will learn how to optimise your machine learning model’s performance using model evaluation metrics.
- Regularising Linear models
In this module, you will learn the technique to avoid overfitting and increase model interpretability.
- ML pipeline
This module will teach you how to automate machine learning workflows using the ML Pipeline. A pipeline chains a series of data transformations together with a final model, so the whole workflow can be tested and evaluated as a single unit.
- Bootstrap sampling
Bootstrap Sampling is a machine learning technique to estimate statistics about a population by repeatedly resampling a dataset with replacement.
- Grid search CV
Grid search CV is the process of performing hyperparameter tuning to determine the optimal values for any machine learning model. The performance of a model significantly depends on the importance of hyperparameters. Doing this process manually is a tedious task. Hence, we use GridSearchCV to automate the tuning of hyperparameters.
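A sketch with scikit-learn, searching over a small (arbitrary) grid of k-NN neighbour counts on the bundled iris dataset:

```python
# GridSearchCV: exhaustively trying hyperparameter combinations with CV.
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
param_grid = {"n_neighbors": [1, 3, 5, 7]}  # candidate hyperparameter values

search = GridSearchCV(KNeighborsClassifier(), param_grid, cv=5)
search.fit(X, y)  # fits one model per (candidate, fold) pair
print(search.best_params_, search.best_score_)
```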
- Randomized search CV
Randomized search CV automates the tuning of hyperparameters, similar to Grid search CV. Instead of exhaustively trying every combination as a grid search does, a randomised search samples a fixed number of parameter settings from specified distributions.
- K fold cross-validation
K-fold cross-validation is a way in ML to improve the holdout method. This method guarantees that our model’s score does not depend on how we picked the train and test set. The data set is divided into k number of subsets, and the holdout method is repeated k number of times.
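A sketch of 5-fold cross-validation with scikit-learn on the bundled iris dataset:

```python
# 5-fold cross-validation: every sample serves in a test fold exactly once.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score

X, y = load_iris(return_X_y=True)
kf = KFold(n_splits=5, shuffle=True, random_state=0)

scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=kf)
print(scores, scores.mean())  # one accuracy per fold, then the average
```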
In this Machine Learning online course, we learn what recommendation systems are, their applications, critical approaches to building them – Popularity based systems, Collaborative filtering, Singular Value Decomposition, etc.
- Introduction to Recommendation systems
As the name suggests, recommendation systems help predict a user's future preference for products and recommend the best-suited items. In this module, you will learn how to use these systems to help people choose the best products.
- Content based recommendation system
First, we collect data from the user, explicitly or implicitly. We then create a user profile based on this data, which is later used to make suggestions to the user. As the user provides more information or takes more actions on the recommendations, the system's accuracy improves. This technique is known as a Content-based Recommendation System.
- Popularity based model
Popularity based model is a type of recommendation system that works based on popularity or anything that is currently trending.
- Collaborative filtering (User similarity & Item similarity)
Collaborative Filtering recommends items by identifying users similar to a given user (user similarity) or items similar to those the user has liked (item similarity); there are several algorithms for computing these similarities to produce the best recommendations.
- Hybrid models
A Hybrid Model is a combination of multiple classification models and clustering techniques. You will learn how to use a hybrid model in this module.
The next module is the Artificial Intelligence online course, which takes us from an introduction to Artificial Intelligence to beyond traditional ML, into the realm of Neural Networks. We move on from regular tabular data to training our models on Unstructured Data like Text and Images.
Introduction to Neural Networks and Deep Learning
In this Artificial Intelligence online course, we start with the motivation behind the term Neural Network and look at the individual constituents of a neural network. We install and build familiarity with the TensorFlow library, appreciate the simplicity of Keras, and build a deep neural network model for a classification problem using Keras. We also learn how to tune a Deep Neural Network.
- Gradient Descent
Gradient Descent is an iterative optimisation algorithm that finds a minimum of a function, i.e. the parameters or coefficients at which the function's value is smallest. However, the algorithm does not always guarantee finding a global minimum and can get stuck at a local minimum. In this module, you will learn everything you need to know about Gradient Descent.
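A bare-bones sketch, minimising f(x) = (x − 3)², whose gradient is 2(x − 3); the learning rate and step count are arbitrary:

```python
# Gradient descent on a one-dimensional quadratic with a known minimum at x = 3.
def gradient_descent(start, lr=0.1, steps=100):
    x = start
    for _ in range(steps):
        grad = 2 * (x - 3)  # derivative of (x - 3)^2
        x -= lr * grad      # step downhill, scaled by the learning rate
    return x

minimum = gradient_descent(start=0.0)
print(minimum)  # converges to 3.0
```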
- Introduction to Perceptron & Neural Networks
Perceptron is an artificial neuron, or merely a mathematical model of a biological neuron. A Neural Network is a computing system based on the biological neural network that makes up the human brain. In this module, you will learn all the neural networks’ applications and go much deeper into the perceptron.
- Batch Normalization
Normalisation is a technique to change the values of numeric columns in the dataset to a standard scale, without distorting differences in the ranges of values. In Deep Learning, rather than just performing normalisation once in the beginning, you’re doing it all over the network. This is called batch normalisation. The output from the activation function of a layer is normalised and passed as input to the next layer.
- Activation and Loss functions
An Activation Function defines the output of a neuron from its inputs. A Loss Function measures the prediction error of a neural network.
- Hyper parameter tuning
This module will drive you through all the concepts involved in hyperparameter tuning, the process of choosing the configuration settings that govern model training.
- Deep Neural Networks
An Artificial Neural Network (ANN) having several layers between the input and output layers is known as a Deep Neural Network (DNN). You will learn everything about deep neural networks in this module.
- Tensor Flow & Keras for Neural Networks & Deep Learning
TensorFlow, created by Google, is an open-source library for numerical computation and large-scale machine learning. Keras is a powerful, open-source API designed to develop and evaluate deep learning models. This module will teach you how to implement TensorFlow and Keras from scratch. These libraries are widely used in Python for AIML.
In this Computer Vision course, we will learn how to process and work with images for Image classification using Neural Networks. Going beyond plain Neural Networks, we will also learn a more advanced architecture – Convolutional Neural Networks.
- Introduction to Image data
This module will teach you how to process the image and extract all the data from it, which can be used for image recognition in deep learning.
- Introduction to Convolutional Neural Networks
Convolutional Neural Networks (CNN) are used for image processing, classification, segmentation, and many more applications. This module will help you learn everything about CNN.
- Famous CNN architectures
In this module, you will learn everything you need to know about several CNN architectures like AlexNet, GoogLeNet, VGGNet, etc.
- Transfer Learning
Transfer learning is a research problem in deep learning that focuses on storing knowledge gained while training a model on one problem and applying it to a different but related problem.
- Object detection
Object detection is a computer vision technique in which a software system can detect, locate, and trace objects from a given image or video. Face detection is one of the examples of object detection. You will learn how to detect any object using deep learning algorithms in this module.
- Semantic segmentation
The goal of semantic segmentation (also known as dense prediction) in computer vision is to label each pixel of the input image with the respective class representing a specific object/body.
- Instance Segmentation
Object Instance Segmentation takes semantic segmentation one step further, in the sense that it aims to distinguish multiple objects of a single class. It is considered a hybrid of the Object Detection and Semantic Segmentation tasks.
- Other variants of convolution
This module will drive you through several other essential variants of convolution used in Convolutional Neural Networks (CNNs).
- Metric Learning
Metric Learning is the task of learning a distance metric from supervised data. It is widely used in computer vision and pattern recognition.
- Siamese Networks
A Siamese neural network (sometimes called a twin neural network) is an artificial neural network that contains two or more identical subnetworks, meaning they have the same configuration with the same parameters and weights. This module will help you find the similarity of inputs by comparing the feature vectors produced by the subnetworks.
- Triplet Loss
Triplet loss, like metric learning, learns a projection in which the inputs can be distinguished. It is used to learn score vectors for images; for example, the score vectors of face descriptors can be used to verify faces in Euclidean space.
Natural Language Processing
Learn how to work with natural language processing in Python using traditional machine learning methods. Then, deep dive into the realm of Sequential Models and state-of-the-art language models.
- Introduction to NLP
Natural language processing applies computational linguistics to build real-world applications that work with languages comprising varying structures. We try to teach the computer to learn languages, and then expect it to understand it, with suitable, efficient algorithms. This module will drive you through the introduction to NLP and all the essential concepts you need to know.
- Preprocessing text data
Text preprocessing is the method to clean and prepare text data. This module will teach you all the steps involved in preprocessing a text like Text Cleansing, Tokenization, Stemming, etc.
- Bag of Words Model
Bag of words is a Natural Language Processing technique of text modelling. In technical terms, we can say that it is a method of feature extraction with text data. This approach is a flexible and straightforward way of extracting features from documents. In this module, you will learn how to keep track of words, disregard the grammatical details, word order, etc.
- TF-IDF
TF is the term frequency of a word in a document. There are several ways of calculating this frequency, the simplest being a raw count of the instances a word appears in a document. IDF is the inverse document frequency of the word across a set of documents, which suggests how common or rare the word is in the entire document set: the closer the IDF is to 0, the more common the word.
- N-grams
An N-gram is a contiguous sequence of N words. N-grams are broadly used in text mining and natural language processing tasks.
- Word2Vec
Word2Vec is a method to create word embeddings efficiently using a two-layer neural network. It was developed by Tomas Mikolov et al. at Google in 2013 to make neural-network-based training of embeddings more efficient, and it has since become a de facto standard for developing pre-trained word embeddings.
- GloVe
GloVe (Global Vectors for Word Representation) is an unsupervised learning algorithm and an alternative method to create word embeddings. It is based on matrix factorisation techniques applied to the word-context matrix.
- POS Tagging & Named Entity Recognition
We learned the differences between the various parts of speech, such as nouns, verbs, adjectives, and adverbs, in elementary school. Associating each word in a sentence with a proper POS (part of speech) is known as POS tagging or POS annotation. POS tags are also known as word classes, morphological classes, or lexical tags. NER, short for Named Entity Recognition, is a standard Natural Language Processing problem dealing with information extraction. The primary objective is to locate and classify named entities in text into predefined categories such as the names of persons, organisations, locations, events, expressions of time, quantities, monetary values, percentages, etc.
- Introduction to Sequential models
A sequence, as the name suggests, is an ordered collection of several items. In this module, you will learn how to predict what letter or word appears using the Sequential model in NLP.
- Need for memory in neural networks
This module will teach you why memory is critical in Neural Networks when working with sequential data.
- Types of sequential models – One to many, many to one, many to many
In this module, you will go through all the types of Sequential models like one-to-many, many-to-one, and many-to-many.
- Recurrent Neural networks (RNNs)
An artificial neural network that uses sequential data or time-series data is known as a Recurrent Neural Network. It can be used for language translation, natural language processing (NLP), speech recognition, and image captioning.
- Long Short Term Memory (LSTM)
LSTM is a type of Artificial Recurrent Neural Network that can learn order dependence in sequence prediction problems.
- Gated Recurrent Units (GRUs)
A Gated Recurrent Unit (GRU) is a gating mechanism in RNNs. You will learn all you need to know about this mechanism in this module.
- Applications of LSTMs
You will go through all the significant applications of LSTM in this module.
- Sentiment analysis using LSTM
An NLP technique to determine whether data is positive, negative, or neutral is known as Sentiment Analysis. A common example is analysing the sentiment of tweets.
- Time series analysis
Time-Series Analysis comprises methods for analysing time-series data to extract meaningful statistics and other relevant information. Time-Series Forecasting is used to predict future values based on previously observed values.
- Neural Machine Translation
Neural Machine Translation (NMT) is a task for machine translation that uses an artificial neural network, which automatically converts source text in one language to the text in another language.
- Advanced Language Models
This module will teach several other widely used and advanced language models used in NLP.
This block covers some additional modules in this Python for AIML online course.
This block of the Python for AIML online course will teach you all about Exploratory Data Analysis, including preprocessing, handling missing values, etc.
- Data, Data Types, and Variables
This module will drive you through some essential data types and variables.
- Central Tendency and Dispersion
Central tendency is expressed by the mean, median, and mode. Dispersion describes how the data is distributed around this central tendency and is represented by the range, deviation, variance, standard deviation, and standard error.
- 5 point summary and skewness of data
The 5 point summary is a set of five descriptive statistics (minimum, first quartile, median, third quartile, and maximum) that provides information about a dataset. Skewness characterises the degree of asymmetry of a distribution around its mean.
- Box-plot, covariance, and Coeff of Correlation
This module will teach you how to solve the problems of Box-plot, Covariance, and Coefficient of Correlation using Python.
- Univariate and Multivariate Analysis
Univariate Analysis examines a single variable at a time, while Multivariate Analysis examines the relationships between two or more variables; both are used for statistical comparisons.
- Encoding Categorical Data
You will learn how to encode and transform categorical data using Python in this module.
- Scaling and Normalization
In Scaling, you change the range of your data. In normalisation, you change the shape of the distribution of your data.
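A sketch contrasting the two with scikit-learn (the numbers are arbitrary):

```python
# Scaling vs. standardisation on a tiny two-column dataset.
import numpy as np
from sklearn.preprocessing import MinMaxScaler, StandardScaler

X = np.array([[1.0, 200.0], [2.0, 300.0], [3.0, 400.0]])

scaled = MinMaxScaler().fit_transform(X)          # each column mapped into [0, 1]
standardised = StandardScaler().fit_transform(X)  # each column: mean 0, std 1
print(scaled)
print(standardised.mean(axis=0))  # ≈ [0, 0]
```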
- What is Preprocessing?
The process of cleaning raw data so that it can be used for machine learning activities is known as data pre-processing. It is the first and foremost step in a machine learning project, and generally the most time-consuming phase as well. In this module, you will learn why preprocessing is required and all the steps involved in it.
- Imputing missing values
Missing values cause problems for machine learning algorithms. The process of identifying missing values and replacing them with substituted values is known as Data Imputation.
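A sketch with scikit-learn's `SimpleImputer`, filling each NaN with its column's mean:

```python
# Mean imputation: NaNs are replaced column by column.
import numpy as np
from sklearn.impute import SimpleImputer

X = np.array([[1.0, 2.0],
              [np.nan, 4.0],
              [7.0, np.nan]])

imputer = SimpleImputer(strategy="mean")
X_filled = imputer.fit_transform(X)  # col 0 mean = 4.0, col 1 mean = 3.0
print(X_filled)
```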
- Working with Outliers
An object deviating notably from the rest of the objects is known as an Outlier. Outliers are often caused by measurement or execution errors. This module will teach you how to work with Outliers.
- “pandas-profiling” Library
The pandas-profiling library generates a complete report for a dataset, which includes data type information, descriptive statistics, correlations, etc.
Time Series Forecasting
This block will teach you how to predict future values based on the previously experimented values using Python.
- Introduction to forecasting data
In this module, you will learn how to collect data and predict future values based on its underlying trends, a technique known as forecasting.
- Definition and properties of Time Series data
This module will introduce time-series data and cover all its key properties.
- Examples of Time Series data
You will learn some real-time examples of time series data in this module.
- Features of Time Series data
You will learn some essential features of time series data in this module.
- Essentials for Forecasting
In this module, you will go through all the essentials required to perform Forecasting of your data.
- Missing data and Exploratory analysis
Exploratory Data Analysis, or EDA, is essentially a type of storytelling for statisticians. It allows us to uncover patterns and insights, often with visual methods, within data. In this module, you will learn the basics of EDA with an example.
- Components of Time Series data
In this module, you will go through all the components required for Time-series data.
- Naive, Average and Moving Average Forecasting
Naive Forecasting is the most basic technique, simply carrying the last observed value forward as the forecast. Average and Moving Average Forecasting predict future values from the mean of past values – of the whole history, or of a sliding window of recent values, respectively.
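The three baselines side by side, sketched with pandas on a made-up price series:

```python
# Naive, average, and 3-step moving-average forecasts for the next value.
import pandas as pd

prices = pd.Series([10.0, 12.0, 11.0, 13.0, 14.0, 13.0])

naive_forecast = prices.iloc[-1]                         # last observed value
average_forecast = prices.mean()                         # mean of all history
moving_avg_forecast = prices.rolling(3).mean().iloc[-1]  # mean of last 3 values
print(naive_forecast, average_forecast, moving_avg_forecast)
```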
- Decomposition of Time Series into Trend, Seasonality and Residual
This module will teach you how to decompose the time series data into Trend, Seasonality and Residual.
- Validation set and Performance Measures for a Time Series model
In this module, you will learn how to evaluate machine learning models on time-series data by measuring their performance on a validation set.
- Exponential Smoothing method
A time series forecasting method used for univariate data is known as the Exponential Smoothing method, one of the most efficient forecasting methods.
- ARIMA Approach
ARIMA stands for AutoRegressive Integrated Moving Average and is used to forecast time series that follow a trend (and, in its seasonal extension, a seasonal pattern). It has three key aspects: Auto Regression (AR), Integration (I), and Moving Average (MA).
Pre Work for Deep Learning
This block will teach you all the prerequisites you need to know before learning Deep Learning.
- Mathematics for Deep Learning (Linear Algebra)
This module will drive you through the essential Linear Algebra concepts required for Deep Learning.
- Functions and Convex optimization
Convex Optimization lies at the heart of many ML algorithms. It studies the problem of minimising convex functions over convex sets. This module will teach you how to use functions and convex optimisation in your ML algorithms.
- Loss Function
A Loss Function measures the prediction error of a neural network.
- Introduction to Neural Networks and Deep Learning
This module will teach you everything you need to know about the introduction to Neural Networks and Deep Learning.
This block will teach you how to deploy your machine learning models using Docker, Kubernetes, etc.
- Model Serialization
Serialization is a technique to convert data structures or object state into a format like JSON, XML, which can later be stored or transmitted and reconstructed.
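A sketch with the standard library's `pickle` (the dict here is a stand-in for a real trained model object):

```python
# Serialising an object to bytes and reconstructing it.
import pickle

model_state = {"weights": [0.5, -1.2], "bias": 0.1}

blob = pickle.dumps(model_state)  # bytes: can be stored on disk or transmitted
restored = pickle.loads(blob)     # reconstruct the original object
print(restored == model_state)    # True
```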
- Updatable Classifiers
This module will teach you how to use updatable classifiers for machine learning models.
- Batch mode
Batch mode is a network roundtrip-reduction feature. It is used to batch up data-related operations to perform them in coarse-grained chunks.
- Real-time Productionalization (Flask)
In this module, you will learn how to serve your Machine Learning model in real time using Flask.
- Docker Containerization – Developmental environment
Docker is one of the most popular tools to create, deploy, and run applications with the help of containers. Using containers, you can package up an application with all the necessary parts like libraries and other dependencies, and ship it all together as one package.
- Docker Containerization – Productionalization
In this module, you will learn how to productionise your Machine Learning models using Docker containers.
- Kubernetes
Kubernetes is a container-orchestration tool, often used alongside Docker, to manage, scale, and handle containers. In this module, you will learn how to deploy your models using Kubernetes.
Visualization using TensorBoard
This block will teach you how TensorBoard provides the visualization and tooling required for machine learning experimentation.
- Callbacks
Callbacks are powerful tools that help customise a Keras model’s behaviour during training, evaluation, or inference.
- TensorBoard
TensorBoard is a free and open-source tool that provides measurements and visualizations required during the machine learning workflow. This module will teach you how to use the TensorBoard library using Python for Machine Learning.
- Graph Visualization and Visualizing weights, bias & gradients
In this module, you will learn everything you need to know about Graph Visualization and Visualizing weights, bias & gradients.
- Hyperparameter tuning
This module will drive you through all the concepts involved in hyperparameter tuning, the process of choosing the configuration settings that govern model training.
- Occlusion experiment
An Occlusion experiment is a method to determine which image patches contribute most to the output of a neural network.
- Saliency maps
A saliency map is an image, which displays each pixel’s unique quality. This module will cover how to use a saliency map in deep learning.
- Neural style transfer
Neural style transfer is an optimisation technique that takes two images, a content image and a style reference image, and blends them together so that the output image resembles the content image rendered in the style of the style reference image.
GANs (Generative Adversarial Networks)
This block will teach you how to implement GANs (Generative Adversarial Networks) in Machine Learning.
- Introduction to GANs
Generative adversarial networks, also known as GANs, are deep generative models. Like most generative models, they use a differentiable function represented by a neural network, known as the Generator network. GANs also consist of another neural network called the Discriminator network. This module covers everything you need for an introduction to GANs.
- Autoencoders
An Autoencoder is a type of neural network where the output layer has the same dimensionality as the input layer: the number of output units equals the number of input units. An autoencoder replicates the input to the output in an unsupervised manner and is sometimes referred to as a replicator neural network.
- Deep Convolutional GANs
Deep Convolutional GANs (DCGANs) use convolutional neural networks for both the Generator and the Discriminator. You will learn how to use Deep Convolutional GANs with an example.
- How to train and common challenges in GANs
In this module, you will learn how to train GANs and identify common challenges in GANs.
- Semi-supervised GANs
The Semi-Supervised GAN is used to address semi-supervised learning problems.
- Practical Application of GANs
In this module, you will learn all the essential and practical applications of GANs.
This block will cover all the essential aspects of Reinforcement Learning used in various Machine Learning applications.
- What is reinforcement learning?
We need technical assistance to simplify life, improve productivity, and make better business decisions, and to achieve this we need intelligent machines. While it is easy to write programs for simple tasks, we need a way to build machines that carry out complex tasks. One way to achieve this is to create machines that are capable of learning things by themselves. Reinforcement learning does exactly that.
- Reinforcement learning framework
You will learn some essential frameworks used for Reinforcement learning in this module.
- Value-based methods – Q-learning
The ‘Q’ in Q-learning stands for quality. It is an off-policy reinforcement learning algorithm that always tries to identify the best action to take given the current state.
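A self-contained sketch of tabular Q-learning on a toy 1-D corridor (states 0–4, with a reward only for reaching state 4; all constants are arbitrary):

```python
# Tabular Q-learning on a tiny corridor environment.
import random

N_STATES = 5
ACTIONS = [0, 1]  # 0 = move left, 1 = move right
alpha, gamma, epsilon = 0.5, 0.9, 0.3
Q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q[state][action]

random.seed(0)
for _ in range(300):  # episodes
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[s][act])
        s_next = max(0, s - 1) if a == 0 else min(N_STATES - 1, s + 1)
        r = 1.0 if s_next == N_STATES - 1 else 0.0
        # Off-policy update: bootstrap from the best action in the next state.
        Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
        s = s_next

policy = [max(ACTIONS, key=lambda act: Q[s][act]) for s in range(N_STATES)]
print(policy)  # greedy action per state (1 = right)
```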
- Exploration vs Exploitation
Here, you will discover all the key differences between Exploration and Exploitation used in Reinforcement learning.
- SARSA
SARSA stands for State-Action-Reward-State-Action. It is an on-policy reinforcement learning algorithm that updates its value estimates using the action actually taken in the next state.
- Q Learning vs SARSA
Here, you will discover all the key differences between Q Learning and SARSA used in Reinforcement learning.
Hands-on Projects : Classifying silhouettes of vehicles
Classify a given silhouette as one of three types of vehicles, using a set of features extracted from the silhouette. The vehicle may be viewed from one of many different angles, and the data contains features extracted from vehicle silhouettes at those angles. Four “Corgi” model vehicles were used for the experiment: a double-decker bus, a Chevrolet van, a Saab 9000, and an Opel Manta 400. This particular combination was chosen with the expectation that the bus, the van, and either one of the cars would be readily distinguishable, but that distinguishing between the two cars would be more challenging.
You will get your hands dirty with a real-time project under industry experts’ guidance, applying everything in this Python for AIML course, from the introduction to Python through to artificial intelligence and machine learning. Successful completion of the project will earn you a post-graduate certificate in artificial intelligence and machine learning.