Python for NLP: Word Embeddings for Deep Learning in Keras

This is the 16th article in my series of articles on Python for NLP. In my previous article I explained how the N-Grams technique can be used to develop a simple automatic text filler in Python. The N-Gram model is essentially a way to convert text data into numeric form so that it can be used by statistical algorithms.

Before N-Grams, I explained the bag of words and TF-IDF approaches, which can also be used to generate numeric feature vectors from text data. So far we have been using machine learning approaches to perform different NLP tasks such as text classification, topic modeling, sentiment analysis, text summarization, etc. In this article we will start our discussion of deep learning techniques for NLP.

Deep learning approaches consist of different types of densely connected neural networks. These approaches have proven effective at solving several complex tasks such as self-driving cars, image generation, and image segmentation. Deep learning approaches have also proven quite effective for NLP tasks.

In this article, we will study word embeddings for NLP tasks that involve deep learning. We will see how word embeddings can be used to perform a simple classification task using a deep neural network in Python's Keras library.

Problems with One-Hot Encoded Feature Vector Approaches

A potential drawback of one-hot encoded feature vector approaches such as N-Grams, bag of words and TF-IDF is that the feature vector for each document can be huge. For instance, if you have half a million unique words in your corpus and you want to represent a sentence that contains 10 words, your feature vector will be a half-million-dimensional one-hot encoded vector where only 10 indexes contain a 1. This wastes space and drastically increases algorithmic complexity, resulting in the curse of dimensionality.
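As a rough illustration (a minimal sketch with made-up numbers, not taken from the tutorial), compare the memory footprint of such a sparse one-hot style vector against a small dense representation of the same 10-word sentence:

import numpy as np

vocab_size = 500000    # hypothetical half-million word vocabulary
embedding_dim = 100    # hypothetical dense vector size

# One-hot style document vector: half a million entries, only 10 of them non-zero
one_hot_doc = np.zeros(vocab_size, dtype=np.float32)
one_hot_doc[np.random.choice(vocab_size, 10, replace=False)] = 1.0

# Dense representation of the same 10-word sentence: 10 vectors of 100 dimensions
dense_doc = np.random.rand(10, embedding_dim).astype(np.float32)

print(one_hot_doc.nbytes)   # 2,000,000 bytes
print(dense_doc.nbytes)     # 4,000 bytes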

Word Embeddings

In word embeddings, every word is represented as an n-dimensional dense vector. Words that are similar have similar vectors. Word embedding techniques such as GloVe and Word2Vec have proven to be extremely efficient at converting words into corresponding dense vectors. The vector size is small and none of the indexes in the vector is actually empty.
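The following toy example (the four-dimensional vectors are made up purely for illustration; they are not real GloVe or Word2Vec values) shows the idea that similar words end up with similar vectors, measured here with cosine similarity:

import numpy as np

# Toy 4-dimensional embeddings with made-up values
embeddings = {
    'good':      np.array([0.8, 0.1, 0.6, 0.2]),
    'excellent': np.array([0.7, 0.2, 0.5, 0.3]),
    'horrible':  np.array([-0.6, 0.9, -0.4, 0.1]),
}

def cosine_similarity(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

print(cosine_similarity(embeddings['good'], embeddings['excellent']))  # high, close to 1
print(cosine_similarity(embeddings['good'], embeddings['horrible']))   # much lower (negative)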

Implementing Word Embeddings with Keras Sequential Models

The Keras library is one of the most famous and commonly used deep learning libraries for Python that is built on top of TensorFlow.

Keras supports two types of APIs: Sequential and Functional. In this section we will see how word embeddings are used with the Keras Sequential API. In the next section, I will explain how to implement the same model via the Keras Functional API.

To implement word embeddings, the Keras library contains a layer called Embedding(). The embedding layer is implemented in the form of a class in Keras and is normally used as the first layer in a sequential model for NLP tasks.

The embedding layer can be used to perform three tasks in Keras:

  • It can be used to learn word embeddings and save the resulting model
  • It can be used to learn word embeddings in addition to performing NLP tasks such as text classification, sentiment analysis, etc.
  • It can be used to load pre-trained word embeddings and use them in a new model

In this article, we will see the second and third use cases of the Embedding layer. The first use case is a subset of the second.
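For reference, the first use case boils down to extracting and saving the learned weights once a model containing an Embedding layer has been trained. A minimal sketch might look like the following (the model, parameter values, and file name here are illustrative, not part of this tutorial's example):

import numpy as np
from keras.models import Sequential
from keras.layers.embeddings import Embedding

# A tiny model just to show where the learned weights live; in practice
# you would train the model first, then extract the embedding matrix
model = Sequential()
model.add(Embedding(50, 20, input_length=7))

learned_embeddings = model.layers[0].get_weights()[0]   # shape: (50, 20)
np.save('learned_embeddings.npy', learned_embeddings)   # save for reuse elsewhere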

Let's see how the embedding layer looks:

embedding_layer = Embedding(200, 32, input_length=50)

The first parameter in the embedding layer is the size of the vocabulary or the total number of unique words in a corpus. The second parameter is the number of the dimensions for each word vector. For instance, if you want each word vector to have 32 dimensions, you will specify 32 as the second parameter. And finally, the third parameter is the length of the input sentence.

The output of the word embedding layer for a single sentence is 2D: words are represented in rows, and their corresponding embedding dimensions in columns. Finally, if you wish to directly connect your word embedding layer to a densely connected layer, you first have to flatten the 2D word embeddings into 1D. These concepts will become more understandable once we see word embeddings in action.
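To make the shapes concrete, here is a minimal sketch (using the same illustrative parameters as above, not the tutorial's actual model) showing how the embedding output is flattened before a dense layer:

from keras.models import Sequential
from keras.layers import Dense, Flatten
from keras.layers.embeddings import Embedding

model = Sequential()
model.add(Embedding(200, 32, input_length=50))   # output shape: (None, 50, 32)
model.add(Flatten())                             # output shape: (None, 1600)
model.add(Dense(1, activation='sigmoid'))
print(model.summary())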

Custom Word Embeddings

As I said earlier, Keras can be used to either learn custom word embeddings or load pre-trained word embeddings. In this section, we will see how the Keras Embedding layer can be used to learn custom word embeddings.

We will perform a simple text classification task that uses word embeddings. Execute the following script to import the required libraries:

from numpy import array
from keras.preprocessing.text import one_hot
from keras.preprocessing.sequence import pad_sequences
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Flatten
from keras.layers.embeddings import Embedding

Next, we need to define our dataset. We will be using a very simple custom dataset that contains reviews about movies. The following script creates our dataset:

corpus = [
    # Positive Reviews

    'This is an excellent movie',
    'The move was fantastic I like it',
    'You should watch it is brilliant',
    'Exceptionally good',
    'Wonderfully directed and executed I like it',
    "It's a fantastic series",
    'Never watched such a brilliant movie',
    'It is a Wonderful movie',

    # Negative Reviews

    "horrible acting",
    'waste of money',
    'pathetic picture',
    'It was very boring',
    'I did not like the movie',
    'The movie was horrible',
    'I will not recommend',
    'The acting is pathetic'
]

Our corpus has 8 positive reviews and 8 negative reviews. The next step is to create a label set for our data.

sentiments = array([1,1,1,1,1,1,1,1,0,0,0,0,0,0,0,0])

You can see that the first 8 items in the sentiments array contain 1, which corresponds to positive sentiment. The last 8 items are 0, which corresponds to negative sentiment.

Earlier we said that the first parameter to the Embedding() layer is the vocabulary size, i.e. the number of unique words in the corpus. Let's first find the total number of words in our corpus:

from nltk.tokenize import word_tokenize

all_words = []
for sent in corpus:
    tokenize_word = word_tokenize(sent)
    for word in tokenize_word:
        all_words.append(word)

In the script above, we simply iterate through each sentence in our corpus and tokenize the sentence into words. Next, we iterate through the tokenized words and append each word to the all_words list. Once you execute the above script, the all_words list contains all the words in our corpus. However, we do not want duplicate words.

We can retrieve all the unique words from a list by passing the list into the set function, as shown below.

unique_words = set(all_words)
print(len(unique_words))

In the output you will see "45", which is the number of unique words in our corpus. We will add a buffer of 5 to our vocabulary size and will set the value of vocab_length to 50.
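The following line sets that value; vocab_length is used by the scripts that follow:

vocab_length = 50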

The Embedding layer expects the words to be in numeric form. Therefore, we need to convert the sentences in our corpus to numbers. One way to convert text to numbers is by using the one_hot function from the keras.preprocessing.text library. The function takes a sentence and the total length of the vocabulary and returns the sentence in numeric form.

embedded_sentences = [one_hot(sent, vocab_length) for sent in corpus]
print(embedded_sentences)

In the script above, we convert all the sentences in our corpus to their numeric form and display them on the console. The output looks like this:

[[31, 12, 31, 14, 9], [20, 3, 20, 16, 18, 45, 14], [16, 26, 29, 14, 12, 1], [16, 23], [32, 41, 13, 20, 18, 45, 14], [15, 28, 16, 43], [7, 9, 31, 28, 31, 9], [14, 12, 28, 46, 9], [4, 22], [5, 4, 9], [23, 46], [14, 20, 32, 14], [18, 1, 26, 45, 20, 9], [20, 9, 20, 4], [18, 8, 26, 34], [20, 22, 12, 23]]

You can see that our first sentence contained five words, therefore we have five integers in the first list item. Also, notice that the last word of the first sentence was "movie", and the digit 9 appears at the fifth position of the first list item, which means that "movie" has been encoded as 9, and so on.

The embedding layer expects sentences to be of equal size. However, our encoded sentences have different sizes. One way to make all the sentences uniform in size is to pad them to the length of the longest sentence. Let's first find the longest sentence in our corpus, and then we will pad all the other sentences to that length. To do so, execute the following script:

word_count = lambda sentence: len(word_tokenize(sentence))
longest_sentence = max(corpus, key=word_count)
length_long_sentence = len(word_tokenize(longest_sentence))

In the script above, we use a lambda expression to find the length of each sentence. We then use the max function to return the longest sentence. Finally, the longest sentence is tokenized into words and the number of words is counted using the len function.

Next, to make all the sentences of equal size, we will add zeros to the empty indexes that are created as a result of increasing the sentence length. To append zeros at the end of the sentences, we can use the pad_sequences method. The first parameter is the list of encoded sentences of unequal sizes, the second parameter is the length of the longest sentence (the maximum length to pad to), and the last parameter is padding, where you specify post to add the padding at the end of the sentences.

Execute the following script:

padded_sentences = pad_sequences(embedded_sentences, length_long_sentence, padding='post')
print(padded_sentences)

In the output, you should see sentences with padding.

[[31 12 31 14  9  0  0]
 [20  3 20 16 18 45 14]
 [16 26 29 14 12  1  0]
 [16 23  0  0  0  0  0]
 [32 41 13 20 18 45 14]
 [15 28 16 43  0  0  0]
 [ 7  9 31 28 31  9  0]
 [14 12 28 46  9  0  0]
 [ 4 22  0  0  0  0  0]
 [ 5  4  9  0  0  0  0]
 [23 46  0  0  0  0  0]
 [14 20 32 14  0  0  0]
 [18  1 26 45 20  9  0]
 [20  9 20  4  0  0  0]
 [18  8 26 34  0  0  0]
 [20 22 12 23  0  0  0]]

You can see zeros at the end of the padded sentences.

Now we have everything that we need to create a sentiment classification model using word embeddings.

We will create a very simple text classification model with an embedding layer and no hidden layers. Look at the following script:

model = Sequential()
model.add(Embedding(vocab_length, 20, input_length=length_long_sentence))
model.add(Flatten())
model.add(Dense(1, activation='sigmoid'))

In the script above, we create a Sequential model and add the Embedding layer as the first layer of the model. The vocabulary size is specified via the vocab_length variable. The dimension of each word vector will be 20 and the input_length will be the length of the longest sentence, which is 7. Next, the Embedding layer is flattened so that it can be directly connected to the densely connected layer. Since it is a binary classification problem, we use the sigmoid activation function at the dense layer.

Next, we will compile the model and print the summary of our model, as shown below:

model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['acc'])
print(model.summary())

The summary of the model is as follows:

Layer (type)                 Output Shape              Param #
=================================================================
embedding_1 (Embedding)      (None, 7, 20)             1000
_________________________________________________________________
flatten_1 (Flatten)          (None, 140)               0
_________________________________________________________________
dense_1 (Dense)              (None, 1)                 141
=================================================================
Total params: 1,141
Trainable params: 1,141
Non-trainable params: 0

You can see that the first layer has 1000 trainable parameters. This is because our vocabulary size is 50 and each word is represented as a 20-dimensional vector, hence 50 x 20 = 1000 trainable parameters. Similarly, the output from the embedding layer is a sentence of 7 words where each word is represented by a 20-dimensional vector. When this 2D output is flattened, we get a 140-dimensional vector (7 x 20). The flattened vector is directly connected to the dense layer, which contains 1 neuron.
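As a quick sanity check on that arithmetic (a small illustrative calculation, not part of the original script):

vocab_length = 50
embedding_dim = 20
sentence_length = 7

embedding_params = vocab_length * embedding_dim     # 50 * 20 = 1000
flattened_size = sentence_length * embedding_dim    # 7 * 20 = 140
dense_params = flattened_size * 1 + 1               # 140 weights + 1 bias = 141

print(embedding_params + dense_params)              # 1141 total parameters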

Now let's train the model on our data using the fit method, as shown below:

model.fit(padded_sentences, sentiments, epochs=100, verbose=1)

The model will be trained for 100 epochs.

We will train and test the model using the same corpus. Execute the following script to evaluate the model performance on our corpus:

loss, accuracy = model.evaluate(padded_sentences, sentiments, verbose=0)
print('Accuracy: %f' % (accuracy*100))

In the output, you will see that model accuracy is 1.00 i.e. 100 percent.

Note: In real-world applications, the train and test sets should be different. We will see an example of that when we perform text classification on some real-world data in an upcoming article.
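For reference, holding out a test set could look roughly like the following sketch, which uses scikit-learn's train_test_split on the variables defined above (the split ratio and random_state are arbitrary choices, not part of the tutorial):

from sklearn.model_selection import train_test_split

# Hold out 25% of the padded sentences for evaluation
X_train, X_test, y_train, y_test = train_test_split(
    padded_sentences, sentiments, test_size=0.25, random_state=42)

model.fit(X_train, y_train, epochs=100, verbose=0)
loss, accuracy = model.evaluate(X_test, y_test, verbose=0)
print('Test accuracy: %f' % (accuracy*100))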

Loading Pre-Trained Word Embeddings

In the previous section we trained custom word embeddings. However, we can also use pre-trained word embeddings.

Several types of pre-trained word embeddings exist; we will be using the GloVe word embeddings from Stanford NLP since they are among the most well-known and commonly used. The word embeddings can be downloaded from this link.


The smallest file is named "glove.6B.zip". The size of the file is 822 MB. It contains 50, 100, 200, and 300 dimensional word vectors for 400k words. We will be using the 100-dimensional vectors.

The process is quite similar. First we have to import the required libraries:

from numpy import array
from keras.preprocessing.text import one_hot
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Flatten
from keras.layers.embeddings import Embedding

Next, we have to create our corpus followed by the labels.

corpus = [
    # Positive Reviews

    'This is an excellent movie',
    'The move was fantastic I like it',
    'You should watch it is brilliant',
    'Exceptionally good',
    'Wonderfully directed and executed I like it',
    "It's a fantastic series",
    'Never watched such a brilliant movie',
    'It is a Wonderful movie',

    # Negative Reviews

    "horrible acting",
    'waste of money',
    'pathetic picture',
    'It was very boring',
    'I did not like the movie',
    'The movie was horrible',
    'I will not recommend',
    'The acting is pathetic'
]
sentiments = array([1,1,1,1,1,1,1,1,0,0,0,0,0,0,0,0])

In the last section, we used the one_hot function to convert text to vectors. Another approach is to use the Tokenizer class from the keras.preprocessing.text library.

You simply have to pass your corpus to the Tokenizer's fit_on_texts method.

word_tokenizer = Tokenizer()
word_tokenizer.fit_on_texts(corpus)

To get the number of unique words in the text, you can simply count the length of the word_index dictionary of the word_tokenizer object. Remember to add 1 to the vocabulary size: the tokenizer's word indices start at 1, so the extra slot accounts for index 0, which is used for padding.

vocab_length = len(word_tokenizer.word_index) + 1

Finally, to convert sentences to their numeric counterparts, call the texts_to_sequences method and pass it the whole corpus.

embedded_sentences = word_tokenizer.texts_to_sequences(corpus)
print(embedded_sentences)

In the output, you will see the sentences in their numeric form:

[[14, 3, 15, 16, 1], [4, 17, 6, 9, 5, 7, 2], [18, 19, 20, 2, 3, 21], [22, 23], [24, 25, 26, 27, 5, 7, 2], [28, 8, 9, 29], [30, 31, 32, 8, 33, 1], [2, 3, 8, 34, 1], [10, 11], [35, 36, 37], [12, 38], [2, 6, 39, 40], [5, 41, 13, 7, 4, 1], [4, 1, 6, 10], [5, 42, 13, 43], [4, 11, 3, 12]]

The next step is to find the number of words in the longest sentence and then to apply padding to the sentences having shorter lengths than the length of the longest sentence.

from nltk.tokenize import word_tokenize

word_count = lambda sentence: len(word_tokenize(sentence))
longest_sentence = max(corpus, key=word_count)
length_long_sentence = len(word_tokenize(longest_sentence))

padded_sentences = pad_sequences(embedded_sentences, length_long_sentence, padding='post')

print(padded_sentences)

The padded sentences look like this:

[[14  3 15 16  1  0  0]
 [ 4 17  6  9  5  7  2]
 [18 19 20  2  3 21  0]
 [22 23  0  0  0  0  0]
 [24 25 26 27  5  7  2]
 [28  8  9 29  0  0  0]
 [30 31 32  8 33  1  0]
 [ 2  3  8 34  1  0  0]
 [10 11  0  0  0  0  0]
 [35 36 37  0  0  0  0]
 [12 38  0  0  0  0  0]
 [ 2  6 39 40  0  0  0]
 [ 5 41 13  7  4  1  0]
 [ 4  1  6 10  0  0  0]
 [ 5 42 13 43  0  0  0]
 [ 4 11  3 12  0  0  0]]

We have converted our sentences into padded sequences of numbers. The next step is to load the GloVe word embeddings and then create our embedding matrix that contains the words in our corpus and their corresponding values from GloVe embeddings. Run the following script:

from numpy import array
from numpy import asarray
from numpy import zeros

embeddings_dictionary = dict()
glove_file = open('E:/Datasets/Word Embeddings/glove.6B.100d.txt', encoding="utf8")

In the script above, in addition to loading the GloVe embeddings, we also imported a few functions from NumPy. We will see the use of these functions in the scripts that follow. Here, notice that we loaded the glove.6B.100d.txt file. This file contains 100-dimensional word embeddings. We also created an empty dictionary that will store our word embeddings.

If you open the file, you will see a word at the beginning of each line followed by a set of 100 numbers. The numbers form the 100 dimensional vector for the word at the beginning of each line.

We will create a dictionary that will contain words as keys and the corresponding 100 dimensional vectors as values, in the form of an array. Execute the following script:

for line in glove_file:
    records = line.split()
    word = records[0]
    vector_dimensions = asarray(records[1:], dtype='float32')
    embeddings_dictionary[word] = vector_dimensions

glove_file.close()

The embeddings_dictionary dictionary now maps every word in the GloVe file to its corresponding embedding vector.

We only want the word embeddings for the words that are present in our corpus. We will create a two-dimensional NumPy array with 44 rows (the size of our vocabulary) and 100 columns. The array will initially contain zeros, and it will be named embedding_matrix.

Next, we will iterate through each word in our corpus by traversing the word_tokenizer.word_index dictionary that contains our words and their corresponding index.

Each word will be passed as a key to the embedding_dictionary to retrieve the corresponding 100 dimensional vector for the word. The 100 dimensional vector will then be stored at the corresponding index of the word in the embedding_matrix. Look at the following script:

embedding_matrix = zeros((vocab_length, 100))
for word, index in word_tokenizer.word_index.items():
    embedding_vector = embeddings_dictionary.get(word)
    if embedding_vector is not None:
        embedding_matrix[index] = embedding_vector

Our embedding_matrix now contains pre-trained word embeddings for the words in our corpus.

Now we are ready to create our sequential model. Look at the following script:

model = Sequential()
embedding_layer = Embedding(vocab_length, 100, weights=[embedding_matrix], input_length=length_long_sentence, trainable=False)
model.add(embedding_layer)
model.add(Flatten())
model.add(Dense(1, activation='sigmoid'))

The script remains the same, except for the embedding layer. Here in the embedding layer, the first parameter is the size of the vocabulary. The second parameter is the dimension of the output word vectors. Since we are using pre-trained word embeddings that contain 100-dimensional vectors, we set the vector dimension to 100.

Another very important attribute of the Embedding() layer that we did not use in the last section is weights. You can pass your pre-trained embedding matrix as the initial weights via the weights parameter. And since we are not training the embedding layer, the trainable attribute has been set to False.

Let's compile our model and see the summary of our model:

model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['acc'])
print(model.summary())

We are again using adam as the optimizer to minimize the loss. The loss function being used is binary_crossentropy. And we want to see the results in the form of accuracy so acc has been passed as the value for the metrics attribute.

The model summary is as follows:

Layer (type)                 Output Shape              Param #
=================================================================
embedding_1 (Embedding)      (None, 7, 100)            4400
_________________________________________________________________
flatten_1 (Flatten)          (None, 700)               0
_________________________________________________________________
dense_1 (Dense)              (None, 1)                 701
=================================================================
Total params: 5,101
Trainable params: 701
Non-trainable params: 4,400
_________________________________________________________________

You can see that since our vocabulary size is 44 and each word is represented as a 100-dimensional vector, the number of parameters in the embedding layer is 44 x 100 = 4400. The output from the embedding layer is a 2D matrix with 7 rows (one for each word in the sentence) and 100 columns. The output from the embedding layer is flattened so that it can be used with the dense layer. Finally, the dense layer is used to make predictions.
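The split between trainable and non-trainable parameters in the summary follows directly from these shapes; a small illustrative calculation (not part of the original script):

vocab_length = 44
embedding_dim = 100
sentence_length = 7

non_trainable = vocab_length * embedding_dim       # 44 * 100 = 4400 frozen embedding weights
trainable = sentence_length * embedding_dim + 1    # 700 weights + 1 bias = 701 dense-layer params

print(non_trainable + trainable)                   # 5101 total parameters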

Execute the following script to train the model:

model.fit(padded_sentences, sentiments, epochs=100, verbose=1)

Once the model is trained, run the following script to evaluate its performance.

loss, accuracy = model.evaluate(padded_sentences, sentiments, verbose=0)
print('Accuracy: %f' % (accuracy*100))

In the output, you should see that accuracy is 1.000 i.e. 100%.

Word Embeddings with Keras Functional API

In the last section, we saw how word embeddings can be used with the Keras Sequential API. While the Sequential API is a good starting point for beginners, as it allows you to quickly create deep learning models, it is extremely important to know how the Keras Functional API works. Most advanced deep learning models involving multiple inputs and outputs use the Functional API.

In this section, we will see how we can implement an embedding layer with the Keras Functional API.

The rest of the script remains similar to the last section. The only change is in the construction of the deep learning model. Let's implement the same deep learning model that we implemented in the last section with the Keras Functional API.

from keras.models import Model
from keras.layers import Input

deep_inputs = Input(shape=(length_long_sentence,))
embedding = Embedding(vocab_length, 100, weights=[embedding_matrix], input_length=length_long_sentence, trainable=False)(deep_inputs) # line A
flatten = Flatten()(embedding)
hidden = Dense(1, activation='sigmoid')(flatten)
model = Model(inputs=deep_inputs, outputs=hidden)

In the Keras Functional API, you have to define the input layer separately, before the embedding layer. In the input layer you simply pass the length of the input vector. To specify the previous layer as the input to the next layer, the previous layer is passed as a parameter inside the parentheses that follow the next layer.

For instance, in the above script, you can see that deep_inputs is passed as a parameter at the end of the embedding layer. Similarly, embedding is passed as input at the end of the Flatten() layer and so on.

Finally, in Model(), you have to pass the input layer and the final output layer.

Let's now compile the model and take a look at the summary of the model.

model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['acc'])
print(model.summary())

The output looks like this:

Layer (type)                 Output Shape              Param #
=================================================================
input_1 (InputLayer)         (None, 7)                 0
_________________________________________________________________
embedding_1 (Embedding)      (None, 7, 100)            4400
_________________________________________________________________
flatten_1 (Flatten)          (None, 700)               0
_________________________________________________________________
dense_1 (Dense)              (None, 1)                 701
=================================================================
Total params: 5,101
Trainable params: 701
Non-trainable params: 4,400

In the model summary, you can see the input layer as a separate layer before the embedding layer. The rest of the model remains the same.

Finally, the process to fit and evaluate the model is the same as the one used with the Sequential API:

model.fit(padded_sentences, sentiments, epochs=100, verbose=1)
loss, accuracy = model.evaluate(padded_sentences, sentiments, verbose=0)

print('Accuracy: %f' % (accuracy*100))

In the output, you will see an accuracy of 1.000 i.e. 100 percent.

Conclusion

To use text data as input to a deep learning model, we need to convert text to numbers. However, unlike with classical machine learning models, passing huge sparse vectors to deep learning models can greatly hurt their performance. Therefore, we need to convert our text into small dense vectors, and word embeddings help us do exactly that.

In this article we saw how word embeddings can be implemented with the Keras deep learning library. We implemented custom word embeddings as well as pre-trained word embeddings to solve a simple classification task. Finally, we also saw how to implement word embeddings with the Keras Functional API.
