[Thesis Tutorials II] Understanding Word2vec for Word Embedding II

Previously, we talked about the Word2vec model and its Skip-gram and Continuous Bag of Words (CBOW) neural networks.

Usually, when we train any of the Word2vec models, we need a huge amount of data. The number of words in the corpus could be in the millions. As you know, we want Word2vec to build vector representations of the words so that we can use them in NLP tasks and feed these vectors to any discriminative model. For example, in a Machine Translation task, we need a very large corpus to be able to cover the language's words and contexts.

Recalling the Skip-gram model
You can see that each output layer in the above neural network has V dimensions, where V is the number of unique words in the corpus vocabulary. If we have 1M unique words, then the size of each output layer would be 1M neuron units. As we said in the previous post, to get the predicted word, we apply the Softmax activation function to the output layer so that the vector is turned into probabilistic values; the index of the maximum probability corresponds to a 1-hot-encoded word vector.

To apply the Softmax activation function, we use the following formula:

Softmax(ui) = exp(ui) / Σj exp(uj)
The denominator is the summation of the exponentials of the output values. The complexity of this operation is O(V) (ignoring the cost of exp() itself), where V is the vocabulary size. So, this operation will have bad performance if the vocabulary size is too large. Also, in Skip-gram, we need to do this for multiple Softmax output layers. This would be very costly and would waste a lot of time when training Word2vec models.
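
To make that cost concrete, here is a minimal NumPy sketch (an illustration only, not the Word2vec implementation itself) of a naive Softmax over the whole output layer; every single prediction touches all V scores:

import numpy as np

def softmax(scores):
    """Naive Softmax over the whole output layer: O(V) work for every prediction."""
    exps = np.exp(scores - np.max(scores))   # subtract the max for numerical stability
    return exps / np.sum(exps)

V = 11                                       # our toy vocabulary size
scores = np.random.randn(V)                  # raw output-layer values for one context
probs = softmax(scores)
predicted_word_index = np.argmax(probs)      # index of the 1-hot-encoded predicted word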

There was a need for new techniques to reduce the Softmax complexity and make the training more efficient. There exist two different optimization techniques, Hierarchical Softmax and Negative Sampling. They are mentioned in the original papers of the Word2vec models.


Recalling our dataset from the previous post

D = {
“This battle will be my masterpiece”,
“The unseen blade is the deadliest”
};

V = {this, battle, will, be, my, masterpiece, the, unseen, blade, is, deadliest}, ||V|| = 11


Hierarchical Softmax

In general, the target of the Softmax layer is to maximize the probability of a word given its context, P(word | context). Assume that, from our dataset and using the CBOW model, we want to predict the word “unseen” given the context “The”, “blade” and “is”. So, our probability formula would be P(unseen | the, blade, is).

Usually, we use a flat Softmax layer to get that probability, by applying the Softmax formula and reading off the probabilities. But in Hierarchical Softmax we don’t use a flat layer; instead, we use a binary Huffman tree layer, like the following.

So, to get the probability of the word “unseen”, we need to calculate (from the root) the following probability multiplication: P(right) * P(left) * P(right) * P(right).

or P(left) * P(right) * P(right) * P(right)

The question now is, how do we get these probabilities?

The idea of getting these probabilities is the same idea as logistic regression. Recall a simple logistic regression, which uses the Sigmoid function in its output neuron.


In logistic regression, we get the probability of Y given the feature set X, P(Y | X). You get this probability by multiplying the feature vector by the weight matrix, then applying the Sigmoid function to get the probability: Sigmoid(X * W + b).

Well, we will do the same with our tree. Each tree node is the output of a logistic function, and each node of the tree has an associated weight vector, so we multiply the hidden layer values of the neural network by those weights to get the probability of each node of the tree.

The input to the tree nodes is the hidden layer values of the neural network (the layer before the Softmax). You can imagine it as if we now have a new neural network whose input is the hidden layer values and whose outputs are the nodes of the tree.

For each tree layer, we get the probabilities of its nodes, which lead us to the predicted word by choosing the node with the maximum probability value in each tree layer.

You can see that the complexity is reduced from O(V) to O(log(V)), which is a great performance improvement. Also, we don’t need to calculate the values of both child nodes of a subtree; you only need to get the probability of one of them and obtain the other by subtracting it from 1, since P(right) + P(left) = 1.
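
As a rough sketch of the idea (with made-up node ids, weights and path, not the actual Word2vec code), assume every internal node of the tree has its own weight vector; the probability of a word is then the product of the Sigmoid decisions taken along its root-to-leaf path:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def path_probability(hidden, path_nodes, directions, node_weights):
    """P(word) = product of the sigmoid probabilities along the root-to-leaf path.

    hidden       : hidden-layer vector of the network
    path_nodes   : ids of the internal nodes on the path to the word's leaf
    directions   : +1 to go right, -1 to go left at each node
    node_weights : dict mapping node id -> weight vector (one per internal node)
    """
    prob = 1.0
    for node, d in zip(path_nodes, directions):
        p_right = sigmoid(np.dot(node_weights[node], hidden))
        prob *= p_right if d == +1 else (1.0 - p_right)   # P(left) = 1 - P(right)
    return prob

# Toy usage: a word 4 levels deep, path right-left-right-right (like "unseen" above)
hidden_size = 8
rng = np.random.default_rng(0)
weights = {n: rng.standard_normal(hidden_size) for n in range(4)}
h = rng.standard_normal(hidden_size)
print(path_probability(h, [0, 1, 2, 3], [+1, -1, +1, +1], weights))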


Negative Sampling

Negative Sampling is another approach for training the neural network in a much more optimized way. Actually, Negative Sampling is the most popular approach when training Word2vec models, and it is the one implemented in TensorFlow’s Word2vec tutorial.

The idea of negative sampling is quite easy to understand and to implement. Recalling our Softmax layer, you will see that the number of neurons is the same as the number of the corpus’s words. After getting the error, we update the whole weight matrix of the neural network. If you think about it, we actually don’t have to do this. We know that all the corpus words appear in the training items, so the words’ weights will certainly be updated during the training phase.

Using this fact, we can instead update only some of the words’ weights for each training example. These words contain the positive word (the actual word that shall be predicted) and other words that are considered negative samples (incorrect words that shall not be predicted). If we assume that the number of negative samples is 10, then the Softmax layer has only 11 neuron units (10 negative + 1 positive) and the error will be used to update only 11 * hidden_layer_size weights instead of V * hidden_layer_size.

The question now is, how to select these negative samples?

Traditionally, choosing these negative samples depends on the words’ frequency in the given corpus. The higher the frequency of a word, the higher the chance of choosing it as a negative sample.

The following formula is used to determine the probability of selecting a word as a negative sample:

P(wi) = f(wi)^c / Σj f(wj)^c

Where f(w) is the word’s frequency and c is a constant that is selected by the model creator (3/4 in the released Word2vec implementation). After applying that, we sample N words according to these probabilities, where N is the number of negative samples.
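
A small sketch of that selection step, using hypothetical word frequencies (the counts below are made up for illustration):

import numpy as np

# Hypothetical word frequencies counted from a corpus
freq = {"the": 120, "is": 80, "battle": 5, "blade": 4, "unseen": 3}

c = 0.75                                   # the constant chosen by the model creator (3/4 in the released implementation)
words = list(freq.keys())
weights = np.array([freq[w] ** c for w in words], dtype=float)
probs = weights / weights.sum()            # P(wi) = f(wi)^c / Σj f(wj)^c

N = 3                                      # number of negative samples per training example
negatives = np.random.choice(words, size=N, replace=False, p=probs)
print(dict(zip(words, np.round(probs, 3))), negatives)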


References

– https://arxiv.org/pdf/1411.2738.pdf
– http://cs224d.stanford.edu/syllabus.html
– https://www.tensorflow.org/tutorials/word2vec
– http://mccormickml.com/2017/01/11/word2vec-tutorial-part-2-negative-sampling/

Drawings are created using www.draw.io .

Generative Adversarial Networks [2] – Camouflage your Predator!

You can imagine the situation as learning without any prior knowledge about something: you just make mistakes and you let others correct them for you. Continuing this process will lead you to learn how to do that thing, although you may still not know what you are doing?!


This was a comment from my professional friend Moustapha Saad on my recent post, which was an overview of Generative Adversarial Networks (GANs). I find his description very accurate; it captures the true nature of GANs, so I decided to use it as the title of this post.

This post is considered to be part 2 of the previous post. In the previous post, we talked about the general idea of GANs and about the discriminative and generative models that play against each other and improve themselves. In this post, we will focus on how generative models work in general and on some different types of generative models that could be used when building GANs.


Maximum Likelihood Generative Models

Assume that we have a dataset that contains m training samples. These samples follow an unknown probability distribution; it could be Gaussian, Bernoulli or any other probability distribution. Actually, we don’t care about knowing that distribution; our focus is to be able to generate similar data that has the same distribution as the training dataset. We can formulate the problem as a Maximum Likelihood Estimation problem.

The idea of Maximum Likelihood is to build a model with some parameters Θ that make it able to generate the same probability distribution, P(X; Θ). As we have m samples in the training data, we can generalize the Maximum Likelihood over all the data: Πi=1..m Pmodel(Xi; Θ).

Now, the target is to find the Θ that maximizes the likelihood, so our mathematical representation of this problem would be Θ := argmaxΘ Πi=1..m Pmodel(Xi; Θ).

To avoid any numerical problems, it is always better to transform to the log space; as you know, log(x * y) = log(x) + log(y), which turns the above product into a sum. So, our final equation to obtain the Maximum Likelihood of the training data would be Θ := argmaxΘ Σi=1..m log Pmodel(Xi; Θ).

We call the resulting distribution an Empirical Distribution, as we aren’t entirely certain about the distribution we get after estimating the parameter Θ. The question is, how do we know whether the obtained distribution is really the actual distribution of the training data?

Kullback–Leibler Divergence (KL-divergence) is a measurement that could be used to answer the above question, and it can be used for both discrete and continuous probability distributions. Given 2 probability distributions, P and Q, the KL-divergence is measured by DKL(P || Q) = Σi P(i) log (P(i) / Q(i)).

The target is to minimize the KL-divergence; the smaller the difference between the distributions P and Q, the closer they are to each other. So, recalling our Maximum Likelihood: after getting the empirical distribution, we measure the KL-divergence between the empirical distribution and the actual distribution. In other words, minimizing the KL-divergence means maximizing the Likelihood of the training dataset.
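
To make these two ideas concrete, here is a toy NumPy sketch (the Gaussian model choice is just an assumption for illustration): we fit Θ = (μ, σ) by Maximum Likelihood, then compare the empirical histogram with the fitted model using the KL-divergence formula above:

import numpy as np

# Toy training data drawn from an unknown distribution (here secretly Gaussian)
rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=1.5, size=1000)

# Maximum likelihood for a Gaussian model P(X; theta) with theta = (mu, sigma):
# argmax_theta sum_i log P_model(x_i; theta) has the closed-form solution below
mu_hat, sigma_hat = data.mean(), data.std()

def kl_divergence(p, q):
    """Discrete KL-divergence DKL(P || Q) = sum_i P(i) * log(P(i) / Q(i))."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / q[mask]))

# Compare the empirical histogram with the fitted model on the same bins
counts, edges = np.histogram(data, bins=30)
p_emp = counts / counts.sum()
centers = 0.5 * (edges[:-1] + edges[1:])
q_model = np.exp(-0.5 * ((centers - mu_hat) / sigma_hat) ** 2)
q_model /= q_model.sum()
print(mu_hat, sigma_hat, kl_divergence(p_emp, q_model))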


Explicit vs. Implicit Maximum Likelihood Models

After knowing the general idea of Maximum Likelihood models and their target, we need to be more specific when we talk about Maximum Likelihood generative models. Actually, there are 2 different types of Maximum Likelihood models, Explicit and Implicit models.

Explicit Models: Models that fully depend on the training data and try to learn the parameters of P(X; Θ) (for example, the mean and variance of a Gaussian) using regular gradient descent or any other optimization technique. To make the training efficient, we need to design a model that makes good use of the training samples; also, the collected training samples should be general enough to cover all variables of the probability distribution, which may be hard for some real-life data; finally, explicit models may require considerable computational power to learn the parameters.

Implicit Models: In these models, we don’t actually use the data to learn the true distribution parameters; we assume that the distribution parameters are hidden and we try to predict them using some observed variables from the data. There are no fixed rules for determining these observed variables; they are extracted by the researcher or by an expert who may have prior knowledge about the data. You can think of it as a stochastic inference process that could be done using Markov Chains: you use some observable data to be able to get the hidden variables (called latent variables); refer to this post to know more about Markov Chains. We use these latent variables (which form a latent space) to generate similar data based on the given training samples. Often, implicit models can be used for dimensionality reduction of the data without any prior knowledge about the data distribution.

The question now is, to which type does the generative model in GANs belong?

Recalling the GANs figure,

You find that a GAN uses the latent space to be able to generate data that is considered to have a similar distribution to the real samples. So, it is an implicit model, but it doesn’t use any observable data like Markov Chains do; it deals directly with the training samples. In other words, it treats the samples of the training data as the latent variables, and changes their distribution by adding some noise to the sample features. That’s why it is considered an implicit model: it doesn’t use the data distribution, and it doesn’t depend on any observable data, like we do with Markov Chains.

To clarify more, you can imagine the situation as learning without any prior knowledge about something: you just make mistakes and you let others correct them for you. Continuing this process will lead you to learn how to do that thing, although you may still not know what you are doing?!

Actually, there are many different types of Maximum Likelihood models in general; the following figure (taken from Ian Goodfellow’s tutorial, which you will find in the references) is a taxonomy of the different types of models.



The Generative Model’s Camouflage Process

In GANs, the generative model tries to take some real samples from the training data, without any prior knowledge about their distribution, and replace some actual features with other fake features (camouflage). As the generative model is a deep neural network, the fake features could be added in the hidden layers after feeding the input forward through the network. Alternatively, the noise could be added before the features are input to the neural network, letting the network produce the fake sample after feedforwarding the noisy input. We can represent the generative model by the mathematical function f(X, Noise) = Z.

The discriminator (the Predator) then receives 2 signals: the first is X, the real sample, and the second is Z, the fake sample. The role of the discriminator is to assign a probability to each of them, using a softmax for example. If it gives a probability close to 1 to the real sample, which means it perfectly distinguishes between the real and fake samples, then the error (calculated using cross entropy) is backpropagated to the generative model to fine-tune its parameters (neural network weights) and enhance its ability to produce a trickier sample. If it gives a probability close to 1 to the fake sample, which means it sadly doesn’t distinguish between the real and fake samples, then the error is backpropagated to the discriminator model to fine-tune its parameters and enhance its ability to reject the fake sample.
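
As a very reduced sketch of the discriminator’s side of this loop (an illustration only: here the discriminator is a plain logistic regression, the generator is a fixed noise-adding camouflage step, and in a real GAN both would be neural networks, with the generator also updated by backpropagating through the discriminator):

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_features = 4
w, b = np.zeros(n_features), 0.0      # discriminator parameters (a plain logistic regression)
learning_rate = 0.05

def generator(x_real):
    """Camouflage step f(X, Noise) = Z: perturb the real features with noise.
    The bias in the noise is only there so this toy discriminator has something it can detect."""
    return x_real + rng.normal(loc=1.5, scale=1.0, size=x_real.shape)

for step in range(2000):
    x_real = rng.normal(loc=2.0, scale=1.0, size=n_features)   # a "real" training sample X
    z_fake = generator(x_real)                                  # the generator's fake sample Z

    # Discriminator update: cross-entropy with labels real -> 1, fake -> 0
    for x, label in ((x_real, 1.0), (z_fake, 0.0)):
        d = sigmoid(np.dot(w, x) + b)    # probability the discriminator assigns to "real"
        grad = d - label                 # gradient of the cross-entropy w.r.t. the logit
        w -= learning_rate * grad * x
        b -= learning_rate * grad

print("D(real) =", sigmoid(np.dot(w, rng.normal(loc=2.0, size=n_features)) + b))
print("D(fake) =", sigmoid(np.dot(w, generator(rng.normal(loc=2.0, size=n_features))) + b))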


Existing GANs with Popular Generative Models

Commonly, Denoising Auto-Encoders are used nowadays when building GANs. There is an interesting project about face completion using a Generative Adversarial Denoising Autoencoder; you can find the project website here: http://www.cc.gatech.edu/~hays/7476/projects/Avery_Wenchen/. Also, there are other models called Adversarial Autoencoders (AAEs) https://arxiv.org/pdf/1511.05644.pdf and Deep Convolutional Generative Adversarial Networks https://github.com/carpedm20/DCGAN-tensorflow, but I don’t know much about them.

The idea of Adversarial Learning is dominating nowadays and is considered a hot topic in Deep Learning that could be used in multiple fields, such as Speech Recognition and Computer Vision.

 

References

– https://arxiv.org/pdf/1701.00160.pdf
– https://openreview.net/pdf?id=B16Jem9xe
– https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence
– http://www.kdnuggets.com/2017/01/generative-adversarial-networks-hot-topic-machine-learning.html
– http://www.theanalysisfactor.com/what-is-a-latent-variable/
– http://www.cc.gatech.edu/~hays/7476/projects/Avery_Wenchen/
– https://arxiv.org/pdf/1406.2661.pdf
– http://cs.stanford.edu/people/karpathy/gan/

Overview: Generative Adversarial Networks – When Deep Learning Meets Game Theory

Before going into the main topic of this article, which is about a new neural network model architecture called Generative Adversarial Networks (GANs), we need to illustrate some definitions and models in Machine Learning and Artificial Intelligence in general.

Discriminative Models: Models that predict a hidden observation (called the class) given some evidence (called the features). In other words, we have some features and observations about an entity, and we want to predict its class, category or label. You can imagine the model as a function that has the features as input and produces an output. The criterion used to produce the output depends on the model’s architecture and nature.

So, the discriminative model could be described by the mathematical formula f(x1, x2, x3, …, xn) = y, where n is the number of features, and the target of the function is to get the conditional probability P(y | x1, x2, x3, …, xn).

Support Vector Machines (SVMs) and Feedforward Neural Networks (FNNs) are examples of discriminative models that are used for classification.

Generative Models: Given some features, the model’s target is to learn how these features are produced; it tries to learn the distribution of the features. Assume that we have features x1, x2, x3, …, xn, where n is the number of features; the model targets learning the joint probability distribution of the features with the classes.

We can formulate this mathematically by the joint probability P(x1, x2, x3, …, xn, y). After learning this distribution, we can estimate the conditional probability of the discriminative models to get the probability of the class given these features with their distribution.

Restricted Boltzmann Machines (RBMs) and Hidden Markov Models (HMMs) are examples of generative models. Note that vanilla Auto-encoders (AEs) aren’t considered generative models; all they do is reconstruct the features. On the other hand, Variational Auto-encoders (VAEs) belong to the generative models family.
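
A tiny NumPy example of the difference (toy data with one binary feature, made up for illustration): the generative view estimates the joint P(x, y), from which the conditional P(y | x) that a discriminative model targets can be recovered:

import numpy as np

# Toy dataset: pairs (x, y) of a single binary feature and a binary class
data = np.array([(0, 0), (0, 0), (0, 1), (1, 1), (1, 1), (1, 0), (1, 1), (0, 0)])

# Generative view: estimate the joint distribution P(x, y) by counting
joint = np.zeros((2, 2))
for x, y in data:
    joint[x, y] += 1
joint /= joint.sum()

# Discriminative view: what a classifier ultimately needs is P(y | x),
# which we can recover from the joint: P(y | x) = P(x, y) / P(x)
p_x = joint.sum(axis=1, keepdims=True)
conditional = joint / p_x

print("P(x, y) =\n", joint)
print("P(y | x) =\n", conditional)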

Nash Equilibrium: A concept from Game Theory that describes a game situation in which each player is satisfied with the decision he/she made after the other players’ strategies are revealed, and no player has an incentive to change his/her strategy after knowing the others’, as those strategies didn’t affect the strategy he/she used to win the game.

For example, assume we have a game in which each player has 2 choices to choose between, and both choices are known to be right and to have the same effect on the game points or rewards. The first player’s strategy may be to choose the first choice, while the other player chooses the second one. After the strategies are revealed, each player is satisfied with the strategy he/she took, because the other’s choice didn’t affect him/her badly.

Minimax: An algorithm that belongs to Game Theory and Statistics. The algorithm is used in two-player games, where each player tries to win by minimizing the worst case provided by the other player’s move; in other words, the player minimizes the maximum move of the other player.

You can imagine the game of Chess, in which each player tries to win by making the best available move while the other player tries to minimize the value of that move, which is considered the best move from his side. Minimax is commonly used when making an AI-bot agent for Chess, Tic-tac-toe and Connect-4, and you can generalize it to decision-making rule-based games.
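
A minimal sketch of the algorithm on a hand-made two-ply game tree (the payoffs are arbitrary, just for illustration):

def minimax(node, maximizing):
    """Minimax over a game tree given as nested lists; leaves are payoffs for the maximizer."""
    if not isinstance(node, list):          # leaf node: return its payoff
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Tiny two-ply game: the maximizer moves first, the minimizer replies
game_tree = [[3, 5], [2, 9], [0, 7]]
print(minimax(game_tree, maximizing=True))  # -> 3: the best the maximizer can guarantee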


Generative Adversarial Networks (GANs)

GANs consist of 2 models, a discriminative model (D) and a generative model (G). These models are participants in the training phase, which looks like a game between them, and each model tries to be better than the other.

The target of the generative model is to generate samples that are considered fake but are supposed to have the same distribution as the original data samples; on the other hand, the discriminative model’s target is to enhance itself to be able to recognize the real samples among the fake samples generated by the generative model.

It looks like a game, in which each player (model) tries to be better than the other: the generative model tries to generate samples that deceive and trick the discriminative model, while the discriminative model tries to get better at recognizing the real data and rejecting the fake samples. As mentioned before, it is the same idea as the Minimax algorithm, in which each player targets to defeat the other and minimize the supposed loss.
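
In fact, this game can be written down as an explicit minimax objective. The value function below is the one given in Goodfellow’s GAN tutorial (the first reference at the end of this post); D(x) is the probability the discriminator assigns to a sample being real, and G(z) is the sample the generator produces from the noise/latent variable z:

minG maxD V(D, G) = E(x ∼ pdata)[log D(x)] + E(z ∼ pz)[log(1 − D(G(z)))]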

This game continues until we reach a state in which each model becomes an expert at what it is doing: the generative model increases its ability to capture the actual data distribution and produce data like it, and the discriminative model becomes an expert at identifying the real samples, which increases the system’s classification performance. In such a case, we know that we have reached a state in which each model is satisfied with its output (strategy), which is called a Nash Equilibrium in Game Theory.

During the training phase, the loss function that calculates the error is used to update the two models’ parameters (learning the weights). Also, neither model can change the other’s parameters; the parameters are locally updated in each model using the global error.


This was an overview of this newly arising model. I am still learning it and looking forward to using it in many applications, specifically in the Natural Language Processing field.

References

– https://arxiv.org/pdf/1701.00160v1.pdf
– http://www.kdnuggets.com/2017/01/generative-adversarial-networks-hot-topic-machine-learning.html
– https://en.wikipedia.org/wiki/Generative_adversarial_networks
– https://en.wikipedia.org/wiki/Minimax
– https://en.wikipedia.org/wiki/Nash_equilibrium
– Artificial Intelligence: A Modern Approach, book by Stuart Russell and Peter Norvig

Another LSTM Tutorial

The figures are taken from this great blog post by Christopher Olah


Recurrent Neural Networks

Recurrent Neural Networks (RNNs) are a type of Neural Network (NN) commonly used in problems that depend on sequential data. In sequential data, we should assume that the data points are dependent on each other. For example, if we have a sentence that contains some words, then to be able to predict any word of it, we need to memorize the previous words, because the sentence words are naturally related, in grammar or part-of-speech (POS), to each other.

Traditionally, in regular Multi-layer Perceptron (MLP) Neural Networks, we assume that the data points are independent of each other, which is wrong for data like text or sound.

RNNs have the capability to “memorize” the previous data, as they contain self-loops. The network saves its state and uses it with the new data. This helps the network take care of the dependencies between the data points and take them into consideration when predicting the output.

The following is a single unit of an RNN, A. It has an input X and output h.

  

As you may have noticed, it is very similar to the traditional neuron in an MLP, but it differs in the self-loop at unit A. The self-loop holds the previous state of the neuron and feeds it back in with the new input.

We can unroll (unfold) the model in the figure. You can see that it isn’t much different from the traditional MLP model.


Through time, the self-loop values store the experience from the previous data and use it with the new input to obtain the predicted output. This helps the network memorize the dependencies between the data points.

Long-term Dependencies Problems with Regular RNNs
As you may have noticed, there are no restrictions when saving the memory of the previous data; the network keeps saving the previous data without identifying whether this memory will be helpful in the next iterations or not. However, practically, this isn’t always right for real sequential data.

Assume that we have a sentence like this: “I live in France, I like playing football with my friends and going to the school, I speak french”.

Assume that we want to predict the word “french”. We don’t need to look at the previous two clauses, “I like playing football with my friends” and “going to the school”; we only need to know that “I live in France”, and ignore the unnecessary context that may confuse the network while training it.

Practically, regular RNNs can’t connect the related information and dependencies together, specifically if the information has some noise within it that could distract us from the actual target.

This problem is the main reason that pushed scientists to develop and invent a new variant of the RNN model, called Long Short-Term Memory (LSTM). LSTM can solve this problem because it controls the “memorizing” process within its units using something like “gates”.


What is LSTM?

LSTM is a kind of RNN that is widely used in many fields of computer science, such as Natural Language Processing and Speech Recognition, because of its capability of avoiding the problem of “long-term dependencies”.

Like any other RNN, LSTM has the same idea of self-loops. But LSTM differs from other RNNs in that each unit (neuron) contains some layers and gates that are specified by the user. Each of these layers and gates controls the output and the state of the neuron.

LSTM Effectiveness 

Usually, when humans read a paragraph or a topic, they can easily extract the dependencies between the sentences that make up the text. In stories and novels, you can match the events that happen in the story, extract the most important events and connect them together to be able to understand the story’s ending.

Human brains can identify and memorize the importance of, and dependencies between, the words of a sentence, and determine their POS tags. If you see a subject at the beginning of a sentence, then your brain most likely predicts that the next word has a great chance of being a verb (VP) or a noun phrase (NP) that describes the subject, because you memorize that the previous word is a subject, and you don’t need to look at the context before the subject, as you determined that the subject is the beginning of a new context for predicting the next word’s POS.

This is how LSTMs work; they simulate the same ability of our brains to connect the important or related objects together, and forget the unnecessary objects in the context.

LSTM Unit Structure


This is a standard structure of LSTM unit. It contains:-

  • 2 input (previous state C_t-1 and previous output h_t-1)
  • 4 layers (3 sigmoid and 1 tanh activations)
  • 5 point operators (3 multiplications, 1 addition and 1 tanh operators)
  • 2 output (current state C_t and current output h_t)

The most important thing in LSTM is the state. The state represents the information stored in the unit since the training began. We control the memorizing process by updating this state. If we want to forget the previous state, then we make it 0; if we want to control the amount of memorized information, then we update its values during the training process. Next, we will discuss how the output and the update are done.

The user can change this structure according to the problem they want to solve. This is just an example of a standard LSTM unit.

Detailed Explanation of LSTM Processing
We can divide the internal processing within the unit into 3 groups. Each of these groups performs layer and operator processes to produce the current state and the output.


 

Group 1.1: Input with the previous output for forgetting


This sigmoid activation layer is called the “forget gate layer”, because it decides whether to forget the previous state, in which case the activation output would be 0 for each element of the state vector, or to use the previous state, in which case the elements’ values would be greater than 0 and less than or equal to 1.

First, we concatenate the input with the previous output; then, we apply the activation to the weighted sum of the concatenation.

 

Group 1.2: Forget gate layer with the previous state


 

To know what we need to forget from the previous state, we multiply the output of the “forget gate layer” with the previous state (element-by-element multiplication). If it produces a vector that is full of zeros, it means we want to forget all of the previous memories and start a new memory from the current input. This goes as follows.


 

Group 2.1: Input with previous output for the new information



First, we need to add the new input that will be used to update the state. We call this sigmoid activation layer the “input gate layer”, and it decides which values of the state vector will be updated.

Second, we need to generate a new state, called the candidate state, that will be added to the previous state (which could be full of 0s or other values, depending on group 1.1’s output).

Finally, we add them together to generate the current state of the unit which holds the concluded information.

 

Group 2.2: Scaling the new state with the input gate output


We multiply the newly generated candidate state by the input gate layer’s output. This is like a scaling process, because we adjust the values of the newly generated state by the needed update factors to get the new information, which will be added to the previous state to form the whole state.

Adding groups 1 and 2 to get the current new state


To generate the new current state, we add the newly generated state to the previous state. This state will be fed to the next step (the current unit with the next input).


Group 3.1: Getting the unit output


 

When we produce the output, we need to use the state and the input to help the next unit decide what it will use to produce its state.

First, we take the weighted sum of the concatenated previous output and input, then apply the sigmoid function. This decides which parts of the state we need to output.

Second, we use the tanh operator to make sure that the values of the current state are within -1 and 1.

Finally, we get the unit output that has some parts of the state and input.

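Putting the three groups together, here is a minimal NumPy sketch of one forward step of the standard unit described above (toy sizes and random weights, just for illustration; a real implementation would also include the backward pass):

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One forward step of the standard LSTM unit described above.

    x_t, h_prev, c_prev : current input, previous output, previous state
    W, b                : weight matrices / bias vectors, one per gate,
                          each acting on the concatenated [h_prev, x_t]
    """
    z = np.concatenate([h_prev, x_t])            # concatenate input with previous output
    f = sigmoid(W["f"] @ z + b["f"])             # forget gate layer   (group 1)
    i = sigmoid(W["i"] @ z + b["i"])             # input gate layer    (group 2)
    c_tilde = np.tanh(W["c"] @ z + b["c"])       # candidate state     (group 2)
    c_t = f * c_prev + i * c_tilde               # new state = kept old + scaled new
    o = sigmoid(W["o"] @ z + b["o"])             # output gate layer   (group 3)
    h_t = o * np.tanh(c_t)                       # unit output
    return h_t, c_t

# Toy sizes: 3-dimensional input, 4-dimensional state
rng = np.random.default_rng(0)
n_in, n_hid = 3, 4
W = {g: rng.standard_normal((n_hid, n_hid + n_in)) for g in "fico"}
b = {g: np.zeros(n_hid) for g in "fico"}
h, c = np.zeros(n_hid), np.zeros(n_hid)
h, c = lstm_step(rng.standard_normal(n_in), h, c, W, b)
print(h, c)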

Conclusion

Nowadays, LSTM is used in a very wide range of fields in computer science, and in Machine Learning specifically. It has practically proved itself in some hot research topics such as Language Modeling, Sentiment Analysis, Speech Recognition, Text Summarization and Question Answering.

References

– http://colah.github.io/posts/2015-08-Understanding-LSTMs
– https://en.wikipedia.org/wiki/Long_short-term_memory
– http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.248.4448&rep=rep1&type=pdf
– http://www.wildml.com/2015/09/recurrent-neural-networks-tutorial-part-1-introduction-to-rnns/

[Java] Hopfield Network Implementation

Why Java?

Well, actually I have a deep love for this language; I enjoy my time writing Java code. Also, it is good for me to write some OOP projects so I don’t forget its concepts. Getting used to Python and other scripting languages may badly affect my design and OOP skills.

So, from time to time, I like to write some Java code. And here we go!

This is a light simple Java implementation of Hopfield Recurrent Neural Network.

Supported Features (cont. updated)

  • Data Normalization
  • Network Visualization using GraphStream

An example of a Hopfield network visualization with a pattern of size 5 (recognizing patterns of 5 elements).

– Here you can see the repository of the implementation.

– I am working on a brief tutorial that explains the Hopfield network in the README file.

 

[Python] U-V Decomposition using Swarm Optimization

Recently, I implemented the U-V decomposition technique for Recommendation Systems using Particle Swarm Optimization.

U-V decomposition is an optimization problem over a matrix. Here, our matrix elements represent users’ reviews of movies.


We have 5 users and 5 movies. Each row represents a user’s review for each movie. There are some missing values in the matrix, which means that the user hasn’t given his/her review of that movie.

You can check the code here: https://github.com/AhmedHani/CS624-ML/tree/master/Implementations/U-V%20Decomposition . Later, I will write another post to explain my code.

Input: a 5×5 matrix with unknown reviews

Intermediate output: matrices U and V whose multiplication gives a new matrix without unknown reviews

Output: a 5×5 matrix without unknown values.
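
My repository uses Particle Swarm Optimization to search for U and V; as a simpler illustration of the same objective (minimizing the squared error on the known entries only), here is a plain gradient-descent sketch with made-up ratings, not the PSO code itself:

import numpy as np

# Hypothetical 5x5 ratings; np.nan marks the unknown reviews (values made up for illustration)
M = np.array([
    [5, 2, 4, 4, 3],
    [3, 1, 2, 4, 1],
    [2, np.nan, 3, 1, 4],
    [2, 5, 4, 3, 5],
    [4, 4, 5, 4, np.nan],
], dtype=float)

known = ~np.isnan(M)            # mask of the reviews we actually have
d = 2                           # number of latent factors
rng = np.random.default_rng(0)
U, V = rng.random((5, d)), rng.random((d, 5))

learning_rate = 0.005
for _ in range(20000):
    E = np.where(known, M - U @ V, 0.0)   # error is computed only on the known entries
    U += learning_rate * E @ V.T          # gradient step on U
    V += learning_rate * U.T @ E          # gradient step on V

print(np.round(U @ V, 2))                 # the full matrix; the unknown entries are now filled in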

 

References 

– The matrix example is taken from the book http://infolab.stanford.edu/~ullman/mmds/book.pdf, chapter 9

 

Introduction to Ant Colony Optimization (ACO)

Ants are awesome; their cooperation and coordination are really impressive. These behaviors pushed scientists to study ants more and discover how these creatures communicate with each other to perform the behaviors that help them accomplish their tasks and missions.

Stigmergy is the indirect communication mechanism that ants use to communicate with each other. You can imagine it as a way for an ant to affect the environment (the ground) in order to tell other ants to do a specific task or move; this helps the ants achieve self-organization.

Biologists found something interesting: some ant species use something called a trail of pheromones. Ants spread pheromones on the path they walk on; other ants sense this trail and are influenced to follow the same path. By using this trail of pheromones, ants can follow each other to reach a source of food, then return back to their colony. The amount of pheromone is directly proportional to the intensity of the ants, which means that the more pheromone there is on a path, the more ants walk on that path.

Scientists decided to do an experiment to fully understand the trail of pheromones. They put some ants in a dynamic environment in which new paths are added at run time, to observe the ants’ behavior; see the following figure.

The figure shows the setup before and after adding a new, shorter path from the nest to the food source. Initially, the ants were at the nest. Before adding the shorter path, there was only one path to the food source, so all ants walked on it; after each walk they left a trail of pheromones, and that helped the ants follow each other. After adding the new shorter path, it was expected that the ants would change their path immediately, but they continued to use the longer path; only after many trips from the nest to the food source did the ants switch to the shorter path.

Scientists explained that ants always follow the trail of pheromones. The longer path had much more pheromone than the shorter path, so the ants continued to follow it. After many iterations, they changed to the shorter path because some ants explored the new path and found it shorter, so more and more ants decided to use it, and on each trip they left their trail of pheromones, which made all the ants follow the new path. Regarding the pheromone on the longer path, the scientists found that pheromones always evaporate after several minutes, and that made all the ants ignore the longer path.

Well, it is time to simulate this from a mathematical perspective by creating a probabilistic (stochastic) model. Assume that the probability that an ant at a decision point i ∈ {p1, p2} chooses a path a, where a ∈ {s, l}, is pia(t); s denotes the short path and l the long path, and the amount of pheromone on that path at time t is denoted φia(t).

The probability that an ant reaches the food source from the nest using the short or the long path satisfies pis(t) + pil(t) = 1. According to the experiment, they concluded that the amount of pheromone is proportional to the intensity of the ants, so the probability that an ant chooses the short path is:

pis(t) = φis(t)^α / (φis(t)^α + φil(t)^α)

where α = 2, which was concluded from the experiment. As the amount of pheromone changes and evolves over time, this evolution can be described using differential equations:

dφis / dt = Ψ pjs(t − ts) + Ψ pis(t)            ======> (i = a, j = b; i = b, j = a)
dφil / dt = Ψ pjl(t − r · ts) + Ψ pil(t)          ======> (i = a, j = b; i = b, j = a)

Ψ: is a constant
r: is the ratio between the long and short paths

Note that this model assumes that ants spread pheromones on both trips (from nest to food, and from food to nest).

These equations inspired scientists to use ants’ behavior to find a minimum cost path in a simple graph.
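
Before moving on, here is a small simulation sketch of the choice rule above (the deposit rule 1/length and the evaporation rate are assumptions made just for this illustration, with α = 2 as in the experiment):

import random

alpha = 2.0                              # the α concluded from the experiment
pheromone = {"short": 1.0, "long": 1.0}  # initial pheromone on each path
length = {"short": 1.0, "long": 2.0}     # assume the long path is twice the short one (r = 2)
evaporation = 0.01                       # assumed evaporation rate per trip

def choose_path():
    ps = pheromone["short"] ** alpha
    pl = pheromone["long"] ** alpha
    p_short = ps / (ps + pl)             # pis(t) from the formula above
    return "short" if random.random() < p_short else "long"

random.seed(0)
for trip in range(2000):
    path = choose_path()
    pheromone[path] += 1.0 / length[path]      # assumed deposit rule: shorter trips reinforce faster
    for p in pheromone:
        pheromone[p] *= (1.0 - evaporation)    # pheromone slowly evaporates on both paths

print(pheromone)    # the short path ends up holding most of the pheromone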

Build A Simple Ant Colony

Problems appeared when they tried to use the ants’ behavior (described above); we can summarize them in two points.

[1] Loop generation
– On their trips, ants spread pheromones forward (from nest to food) and backward (from food to nest). This may cause a loop that traps the ants: due to the forward pheromone trail, the increasing amount of pheromone on the loop’s paths attracts all ants to follow only the loop’s paths.

[2] Ignoring short paths
– If an ant escapes from the loop, the pheromone trails on the short paths will have evaporated, which makes the ant ignore them.

The solution is to design some behaviors that help the ants find the minimum cost path in a simple graph without depending only on pheromone trails. These behaviors are:

[1]Probabilistic solution constructed by pheromone trails
[2]Deterministic backward path with pheromone updating (loops are eliminated)
[3]Evaluation for a generated solution in each iteration

Next, we will discuss each of these behaviors.

[1]Probabilistic solution constructed by pheromone trails
– On the forward trip, an ant probabilistically chooses a path according to the amount of pheromone on it, which has been spread by other ants; ants on the forward trip don’t spread any pheromones.

[2]Deterministic backward path with pheromone updating (loops are eliminated)
– When an ant is at the destination node (food source), before it starts its backward trip, it memorizes the path it took and eliminates all loops from it; ants on the backward trip spread pheromone trails. Together with the probabilistic solution construction, this eliminates the loop generation problem.

[3]Evaluation for a generated solution in each iteration
– On their trips, ants memorize the paths they take and the cost from the start node to the end node. Based on that, they spread pheromones according to the quality of the solution: the better the solution, the higher the amount of pheromone.
– Controlling the pheromone evaporation rate helps avoid poor solutions.

Implementing all of these behaviors leads to a simple ant colony optimization algorithm (S-ACO) that finds the minimum cost path in a simple graph. The next post will be about this algorithm.

References: Ant Colony Optimization, book by Marco Dorigo and Thomas Stützle

Steering Behaviors

Well, what are Steering Behaviors?
They are a collection of techniques that help autonomous characters move in a realistic manner, by using forces that are combined together to produce natural movement.
This technique contains many behaviors, but I will explain the following:-

  • Seek.
  • Flee And Arrival.
  • Wandering.
  • Pursuit and Evade.

Seek

The direction of the velocity controls where the plane is heading, while its length controls how far it moves every frame; so, the greater the length, the faster the plane moves.
The desired velocity vector that helps the plane reach its target is calculated as follows:

Vector3 Normal = Vector3.Normalize(TargetPosition - GameObject.Position); // direction from the plane towards the target
Vector3 DesiredVelocity = Normal * MaxVelocity;

This only calculates the velocity that the enemy plane should use to reach the player, but without steering the plane will travel in straight routes, which won’t look realistic. So we should add a “steering force” that makes the plane smoothly adjust its velocity, avoiding sudden path changes: when the player’s plane changes its direction, the enemy “gradually” changes its velocity vector to reach the player’s new position.


Those forces are calculated as follows:-

Vector3  Normal = Vector3.Normalize(TargetPosition - GameObject.Position);
Vector3 DesiredVelocity = Normal * MaxVelocity;
return (DesiredVelocity - GameObject.LinearVelocity);

After adding all these forces, there will be a path that leads the enemy to the player’s position every time.

Flee

This behavior uses the same forces that help the enemy plane seek the player, but it differs in that it helps the enemy run away from the player.


The desired velocity that helps the enemy run away is calculated by subtracting the player’s position from the enemy’s position, which produces a new vector that points from the player towards the enemy.

Vector3  Normal = Vector3.Normalize(GameObject.Position - TargetPosition);
Vector3 DesiredVelocity = Normal * MaxVelocity;
return (DesiredVelocity - GameObject.LinearVelocity);

It is obvious that Flee Steering  =  -Seek Steering.

Now we should add these forces to create a “Fleeing Path” …

Arrival

This technique prevents the enemy from moving through the player, by slowing down its velocity when it reaches a specific area until it stops at the player.

This behavior consists of 2 phases or cases:

  • When the enemy is outside the player’s area
  • When it is inside that area

The first phase is like the seeking behavior: move to the player’s position with a suitable velocity; the second phase starts when the enemy reaches the player’s area.


When the enemy enters the slowing area, its velocity is decreased linearly to zero; this is achieved by adding the “Arrival Steering” force, which reduces the enemy’s velocity until it stops.
But we should make sure that the enemy’s velocity doesn’t turn to zero immediately; it should be decreased gradually.
To calculate the vector to the player and the distance to it:-

Vector3 ToTarget = TargetPosition - GameObject.Position; // vector pointing from the enemy to the player
float Distance = ToTarget.Length();

Then we should check whether the enemy is far from the player (Distance > 0); otherwise we return a new Vector3():-

if (Distance > 0)
{
     float Speed = Distance / DecelerationVelocity * .3f;   // speed shrinks as the enemy gets closer
     Speed = TruncateFloat(Speed, MaxVelocity);             // never exceed the maximum velocity
     Vector3 DesiredVelocity = ToTarget * Speed / Distance; // scale the direction to the target by the reduced speed
     return (DesiredVelocity - GameObject.LinearVelocity);
}
else
{
     return new Vector3();
}

P.S.: the enemy’s original velocity doesn’t change, but the added vector affects it and brings it to zero; when the enemy is outside the slowing area, it uses its original velocity.

Wandering

The wandering steering behavior helps the enemy move in a realistic way, which will make the player aware that there is an enemy around that should be taken care of.

The idea of implementing the wandering behavior is to produce a small random displacement and apply it to the enemy’s current direction vector every frame; this small displacement prevents the enemy’s plane from changing its direction and route abruptly, by only slightly changing its angle each frame as the plane goes right or left.

Now, we can use a circle in front of the enemy’s plane, like this:-


The displacement force will interfere with the enemy’s plane route to calculate the wander force.
First of all, we should determine the position of the circle, which will be in front of the plane. That can be done by making a copy of the velocity vector, which means that the center of the circle will be in the same direction as the velocity vector; it is then normalized and multiplied by a scalar value.
Second, we should determine the displacement force. We can use a 2D vector instead of a 3D one, because wandering is about moving the plane right or left; it is known that the greater the circle radius, the stronger the wandering steering.
By implementing this behavior, the plane can move randomly in the space.

WanderTarget = new Vector2(RandomClamped() * WanderJitter, RandomClamped() * WanderJitter); // RandomClamped() is assumed to return a random value in [-1, 1]
WanderTarget.Normalize();
WanderTarget *= WanderRadius;
Vector2 DesiredLocalTarget = WanderTarget + new Vector2(WanderDistance, 0);

Pursuit and Evade

Pursuit is the process of following a target in order to catch it, but it differs from seeking in that pursuit predicts where the target will be in the future; by implementing this technique, it is possible to adjust the current trajectory and avoid unnecessary paths.


Pursuit is like the seek behavior; the only difference is that the pursuer won’t seek the target itself, but its future position, so the implementation won’t be very different. The following figure shows what we want to do:-


To know the future position of the target, we will use “Euler Integration”:

Position = Position + Velocity;

But we want to know the position in the future, so we give the equation a factor of the number of game updates (N). So this is the final equation:

Position = Position + Velocity * N;

If the value of N is high, the prediction looks further into the future; if N is close to zero, the pursuer enemy will barely predict the player’s future position.
After that, we call the seeking technique to get the path to the future position.

return Seek(TargetPosition + Target.LinearVelocity * LookAheadTime);

LookAheadTime is a variable that holds the number of game updates.
The full implementation is as follows:-


Vector3 targetPos = aTarget.Position;
// Check if face to face; if so, just seek the target's current position
Vector3 DistanceToVectorTarget = targetPos - GameObject.Position;
double MyHeading = Vector3.Dot(Heading, aTarget.Heading);
double TargetHeading = Vector3.Dot(aTarget.Heading, Heading);
if ((TargetHeading > 0) && (MyHeading < -0.95))
{
    return Seek(aTarget.Position);
}
float Speed = Magnitude(GameObject.LinearVelocity);
// Predict the future position of the target and head to it
float LookAheadTime = DistanceToVectorTarget.Length() / (aTarget.SteeringBehaviors.MaxVelocityFunction + Speed);
LookAheadTime += TurnAroundTheTime(targetPos);

return Seek(targetPos + aTarget.LinearVelocity * LookAheadTime);

Evade

Evade is the opposite of the pursuit behavior; the enemy should flee away from the future position of the player.


 

To implement this behavior, we will do the same as the pursuit behavior, but this time we will call the Flee function.
The implementation is as follows:-


Vector3 targetPos = aTarget.Position;
// Check if face to face;
Vector3 DistanceToVectorTarget = targetPos - GameObject.Position;
double MyHeading = Vector3.Dot(Heading, aTarget.Heading);
double TargetHeading = Vector3.Dot(aTarget.Heading, Heading);
if ((TargetHeading > 0) && (MyHeading < -0.95))
{
    return Seek(aTarget.Position);
}
float Speed = Magnitude(GameObject.LinearVelocity);
// Predict the future position of the target and flee from it
float LookAheadTime = DistanceToVectorTarget.Length() / (aTarget.SteeringBehaviors.MaxVelocityFunction + Speed);
LookAheadTime += TurnAroundTheTime(targetPos);

return Flee(targetPos + aTarget.LinearVelocity * LookAheadTime);

There are other steering behavior techniques, but implementing the behaviors that I explained in this topic will be enough to generate good, realistic movement for the opponent units.

References :

http://www.red3d.com/cwr/steer/gdc99/