A Simple Question Answering Model for Tamil and Hindi

kaushal talapady
8 min read · Sep 15, 2021


Overview

Question answering is an important NLP problem. Kaggle recently launched a competition (chaii) that provides a SQuAD 2.0-style dataset in which the questions and answers are in Hindi and Tamil.

In this blog I am going to explain how to build a simple BERT-style model to solve this problem.

I have used the Hugging Face transformers library with XLM-RoBERTa as the base model, since it works with multiple languages, including Tamil and Hindi.

EDA

The first step is to look at the data, which follows the SQuAD 2.0 format: each example has a context, a question, an answer text, and an answer start position.

The training data has the following columns:

id: unique ID for each data point

context: the passage from which the answer is drawn

question: the question to be answered from the context

answer_text: the answer to the question

answer_start: character index where the answer starts in the context

language: language of the context and question (Tamil or Hindi)
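
A minimal sketch for loading and inspecting the training data (the file name train.csv is an assumption about where the competition CSV lives):

import pandas as pd

# Assumed path: the chaii competition training file.
train = pd.read_csv("train.csv")

print(train.shape)
print(train.columns.tolist())
train.head()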

Context length vs. language:

The distribution of context lengths per language shows that the average context length for Tamil is clearly greater than for Hindi.

Word count vs. language:

The word counts of the contexts, however, look similar for Tamil and Hindi.

Question length vs. language:

The question lengths for the two languages are quite similar.

Question word count vs. language:

There is, however, a clear difference between the number of words in Tamil and Hindi questions.
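
The statistics behind these comparisons can be reproduced with a short pandas sketch like the one below (the plotting style is an assumption; any per-language comparison works):

import matplotlib.pyplot as plt

# Character lengths and word counts of contexts and questions.
train["context_len"] = train["context"].str.len()
train["context_words"] = train["context"].str.split().str.len()
train["question_len"] = train["question"].str.len()
train["question_words"] = train["question"].str.split().str.len()

# Compare the distributions per language.
print(train.groupby("language")[["context_len", "context_words",
                                 "question_len", "question_words"]].mean())

train.boxplot(column="context_len", by="language")
plt.show()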

Model training

The first step is to load the libraries needed for training the model.

import transformers 
import torch
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

Then we load the pre-trained tokenizer

squad_v2 = False
model_checkpoint = "deepset/xlm-roberta-large-squad2"
batch_size = 4
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)

The next step is to prepare the data for training by tokenizing the questions and contexts.

First we strip leading whitespace from the questions, then tokenize each question together with its context.

Since some contexts are long, they are truncated into overlapping chunks so they fit in the model, which means one example can produce several features; we therefore need to map each generated feature back to the example it came from.

We also need to map the start and end positions of the answer inside the context. We do this using the offset mapping, which gives, for every token, its character span in the original text.

Finally, we label each feature with the token indices of the answer (or with the CLS token when the answer does not fall inside that chunk).
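
The function below uses max_length, doc_stride and pad_on_right; I set them up with the usual values from the Hugging Face question answering examples (these exact numbers are a reasonable assumption, not something prescribed by the competition):

# Maximum length of a feature (question + context) and the overlap between
# consecutive chunks when a long context is split. These values are assumptions.
max_length = 384
doc_stride = 128

# XLM-RoBERTa pads on the right, so the context goes in the second position.
pad_on_right = tokenizer.padding_side == "right"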

def prepare_train_features(examples):
    # Some of the questions have lots of whitespace on the left, which is not useful and will make the
    # truncation of the context fail (the tokenized question will take a lots of space). So we remove that
    # left whitespace
    examples["question"] = [q.lstrip() for q in examples["question"]]

    # Tokenize our examples with truncation and padding, but keep the overflows using a stride. This results
    # in one example possible giving several features when a context is long, each of those features having a
    # context that overlaps a bit the context of the previous feature.
    tokenized_examples = tokenizer(
        examples["question" if pad_on_right else "context"],
        examples["context" if pad_on_right else "question"],
        truncation="only_second" if pad_on_right else "only_first",
        max_length=max_length,
        stride=doc_stride,
        return_overflowing_tokens=True,
        return_offsets_mapping=True,
        padding="max_length",
    )

    # Since one example might give us several features if it has a long context, we need a map from a feature to
    # its corresponding example. This key gives us just that.
    sample_mapping = tokenized_examples.pop("overflow_to_sample_mapping")
    # The offset mappings will give us a map from token to character position in the original context. This will
    # help us compute the start_positions and end_positions.
    offset_mapping = tokenized_examples.pop("offset_mapping")

    # Let's label those examples!
    tokenized_examples["start_positions"] = []
    tokenized_examples["end_positions"] = []

    for i, offsets in enumerate(offset_mapping):
        # We will label impossible answers with the index of the CLS token.
        input_ids = tokenized_examples["input_ids"][i]
        cls_index = input_ids.index(tokenizer.cls_token_id)

        # Grab the sequence corresponding to that example (to know what is the context and what is the question).
        sequence_ids = tokenized_examples.sequence_ids(i)

        # One example can give several spans, this is the index of the example containing this span of text.
        sample_index = sample_mapping[i]
        answers = examples["answers"][sample_index]
        # If no answers are given, set the cls_index as answer.
        if len(answers["answer_start"]) == 0:
            tokenized_examples["start_positions"].append(cls_index)
            tokenized_examples["end_positions"].append(cls_index)
        else:
            # Start/end character index of the answer in the text.
            start_char = answers["answer_start"][0]
            end_char = start_char + len(answers["text"][0])

            # Start token index of the current span in the text.
            token_start_index = 0
            while sequence_ids[token_start_index] != (1 if pad_on_right else 0):
                token_start_index += 1

            # End token index of the current span in the text.
            token_end_index = len(input_ids) - 1
            while sequence_ids[token_end_index] != (1 if pad_on_right else 0):
                token_end_index -= 1

            # Detect if the answer is out of the span (in which case this feature is labeled with the CLS index).
            if not (offsets[token_start_index][0] <= start_char and offsets[token_end_index][1] >= end_char):
                tokenized_examples["start_positions"].append(cls_index)
                tokenized_examples["end_positions"].append(cls_index)
            else:
                # Otherwise move the token_start_index and token_end_index to the two ends of the answer.
                # Note: we could go after the last offset if the answer is the last word (edge case).
                while token_start_index < len(offsets) and offsets[token_start_index][0] <= start_char:
                    token_start_index += 1
                tokenized_examples["start_positions"].append(token_start_index - 1)
                while offsets[token_end_index][1] >= end_char:
                    token_end_index -= 1
                tokenized_examples["end_positions"].append(token_end_index + 1)

    return tokenized_examples
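
The Trainer below expects tokenized_train_ds and tokenized_valid_ds. A minimal sketch of building them, assuming train_df and valid_df are the training and validation splits of the competition CSV and that the flat answer_text / answer_start columns need to be folded into a SQuAD-style answers dict:

from datasets import Dataset

# Assumed helper: build the "answers" dict that prepare_train_features expects
# from the flat answer_text / answer_start columns.
def to_squad_format(df):
    df = df.copy()
    df["answers"] = df.apply(
        lambda r: {"text": [r["answer_text"]], "answer_start": [r["answer_start"]]},
        axis=1,
    )
    return Dataset.from_pandas(df[["id", "context", "question", "answers"]])

train_ds = to_squad_format(train_df)   # train_df / valid_df are assumed splits
valid_ds = to_squad_format(valid_df)

# Apply the feature-preparation function in batches and drop the original
# columns so only the tokenized features remain.
tokenized_train_ds = train_ds.map(
    prepare_train_features, batched=True, remove_columns=train_ds.column_names
)
tokenized_valid_ds = valid_ds.map(
    prepare_train_features, batched=True, remove_columns=valid_ds.column_names
)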

Next, load the pre-trained model:

from transformers import AutoModelForQuestionAnswering, TrainingArguments, Trainer

model = AutoModelForQuestionAnswering.from_pretrained(model_checkpoint)

Set the arguments for training:

args = TrainingArguments(
    f"chaii-qa",
    evaluation_strategy = "epoch",
    save_strategy = "epoch",
    learning_rate=3e-5,
    warmup_ratio=0.1,
    gradient_accumulation_steps=8,
    per_device_train_batch_size=batch_size,
    per_device_eval_batch_size=batch_size,
    num_train_epochs=1,
    weight_decay=0.01,
)
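
The Trainer below also needs a data_collator. Since every feature is already padded to max_length, the default collator that simply batches the tensors is enough (a small assumed setup):

from transformers import default_data_collator

# Features are already padded to max_length, so no dynamic padding is needed.
data_collator = default_data_collator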

Set up the Trainer object and train the model:

trainer = Trainer(
    model,
    args,
    train_dataset=tokenized_train_ds,
    eval_dataset=tokenized_valid_ds,
    data_collator=data_collator,
    tokenizer=tokenizer,
)
trainer.train()

Inference

Then we need to prepare the test data the same way we prepared the training data, except that we do not compute answer start and end positions.

def prepare_validation_features(examples):
    # Some of the questions have lots of whitespace on the left, which is not useful and will make the
    # truncation of the context fail (the tokenized question will take a lots of space). So we remove that
    # left whitespace
    examples["question"] = [q.lstrip() for q in examples["question"]]

    # Tokenize our examples with truncation and maybe padding, but keep the overflows using a stride. This results
    # in one example possible giving several features when a context is long, each of those features having a
    # context that overlaps a bit the context of the previous feature.
    tokenized_examples = tokenizer(
        examples["question" if pad_on_right else "context"],
        examples["context" if pad_on_right else "question"],
        truncation="only_second" if pad_on_right else "only_first",
        max_length=max_length,
        stride=doc_stride,
        return_overflowing_tokens=True,
        return_offsets_mapping=True,
        padding="max_length",
    )

    # Since one example might give us several features if it has a long context, we need a map from a feature to
    # its corresponding example. This key gives us just that.
    sample_mapping = tokenized_examples.pop("overflow_to_sample_mapping")

    # We keep the example_id that gave us this feature and we will store the offset mappings.
    tokenized_examples["example_id"] = []

    for i in range(len(tokenized_examples["input_ids"])):
        # Grab the sequence corresponding to that example (to know what is the context and what is the question).
        sequence_ids = tokenized_examples.sequence_ids(i)
        context_index = 1 if pad_on_right else 0

        # One example can give several spans, this is the index of the example containing this span of text.
        sample_index = sample_mapping[i]
        tokenized_examples["example_id"].append(examples["id"][sample_index])

        # Set to None the offset_mapping that are not part of the context so it's easy to determine if a token
        # position is part of the context or not.
        tokenized_examples["offset_mapping"][i] = [
            (o if sequence_ids[k] == context_index else None)
            for k, o in enumerate(tokenized_examples["offset_mapping"][i])
        ]

    return tokenized_examples
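
A minimal sketch of how the test features used below can be built, assuming test_df is the loaded test CSV (test_feats_small is an assumed name for the features with the columns the model does not accept removed):

from datasets import Dataset

# Assumed: test_df is the competition test CSV loaded with pandas.
test_dataset = Dataset.from_pandas(test_df)

# Tokenize the test examples into (possibly several) features per example.
test_features = test_dataset.map(
    prepare_validation_features,
    batched=True,
    remove_columns=test_dataset.column_names,
)

# Drop the columns the model does not expect before calling trainer.predict;
# the full test_features (with example_id and offset_mapping) is kept for
# post-processing.
test_feats_small = test_features.remove_columns(["example_id", "offset_mapping"])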

Then we predict the output using the trained model:

test_predictions = trainer.predict(test_feats_small)
test_features.set_format(type=test_features.format["type"], columns=list(test_features.features.keys()))

Next we post-process the raw output:

import collections
from tqdm.auto import tqdm

def postprocess_qa_predictions(examples, features, raw_predictions, n_best_size = 20, max_answer_length = 30):
    all_start_logits, all_end_logits = raw_predictions
    # Build a map example to its corresponding features.
    example_id_to_index = {k: i for i, k in enumerate(examples["id"])}
    features_per_example = collections.defaultdict(list)
    for i, feature in enumerate(features):
        features_per_example[example_id_to_index[feature["example_id"]]].append(i)

    # The dictionaries we have to fill.
    predictions = collections.OrderedDict()

    # Logging.
    print(f"Post-processing {len(examples)} example predictions split into {len(features)} features.")

    # Let's loop over all the examples!
    for example_index, example in enumerate(tqdm(examples)):
        # Those are the indices of the features associated to the current example.
        feature_indices = features_per_example[example_index]

        min_null_score = None # Only used if squad_v2 is True.
        valid_answers = []

        context = example["context"]
        # Looping through all the features associated to the current example.
        for feature_index in feature_indices:
            # We grab the predictions of the model for this feature.
            start_logits = all_start_logits[feature_index]
            end_logits = all_end_logits[feature_index]
            # This is what will allow us to map some the positions in our logits to span of texts in the original
            # context.
            offset_mapping = features[feature_index]["offset_mapping"]

            # Update minimum null prediction.
            cls_index = features[feature_index]["input_ids"].index(tokenizer.cls_token_id)
            feature_null_score = start_logits[cls_index] + end_logits[cls_index]
            if min_null_score is None or min_null_score < feature_null_score:
                min_null_score = feature_null_score

            # Go through all possibilities for the `n_best_size` greater start and end logits.
            start_indexes = np.argsort(start_logits)[-1 : -n_best_size - 1 : -1].tolist()
            end_indexes = np.argsort(end_logits)[-1 : -n_best_size - 1 : -1].tolist()
            for start_index in start_indexes:
                for end_index in end_indexes:
                    # Don't consider out-of-scope answers, either because the indices are out of bounds or correspond
                    # to part of the input_ids that are not in the context.
                    if (
                        start_index >= len(offset_mapping)
                        or end_index >= len(offset_mapping)
                        or offset_mapping[start_index] is None
                        or offset_mapping[end_index] is None
                    ):
                        continue
                    # Don't consider answers with a length that is either < 0 or > max_answer_length.
                    if end_index < start_index or end_index - start_index + 1 > max_answer_length:
                        continue

                    start_char = offset_mapping[start_index][0]
                    end_char = offset_mapping[end_index][1]
                    valid_answers.append(
                        {
                            "score": start_logits[start_index] + end_logits[end_index],
                            "text": context[start_char: end_char]
                        }
                    )

        if len(valid_answers) > 0:
            best_answer = sorted(valid_answers, key=lambda x: x["score"], reverse=True)[0]
        else:
            # In the very rare edge case we have not a single non-null prediction, we create a fake prediction to avoid
            # failure.
            best_answer = {"text": "", "score": 0.0}

        # Let's pick our final answer: the best one or the null answer (only for squad_v2)
        predictions[example["id"]] = best_answer["text"]

    return predictions

The raw predictions are start and end logits over the tokens of each feature. Post-processing picks the best-scoring start/end pair, uses the offset mapping to convert it into a character span in the original context, and returns that text as the final answer.
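
The sub dataframe used below is the competition's sample submission; a minimal sketch of loading it (the exact path is an assumption):

# Assumed: the sample submission file shipped with the competition data.
sub = pd.read_csv("sample_submission.csv")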

final_test_predictions = postprocess_qa_predictions(test_dataset, test_features, test_predictions.predictions)
sub['PredictionString'] = sub['id'].apply(lambda r: final_test_predictions[r])
sub.head()

That's it; we have built a simple Q&A model.

You can check my GitHub for more details about the code.
