RAM crashes while applying .split() to a pandas DataFrame


I want to tokenize the text in the “BULLET_POINTS” column by splitting on whitespace, and for that I used the code below. But it leads to huge RAM usage and the session crashes. I’m using Google Colab. Here “df_Train” is a pandas DataFrame.

def tokenization(text):
    # split the text on whitespace into a list of tokens
    tokens = text.split()
    return tokens

# applying the function to the column
df_Train.loc[:, 'BULLET_POINTS'] = df_Train.loc[:, 'BULLET_POINTS'].apply(tokenization)
df_Train.head()
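
One thing worth trying (a sketch, not part of the original question) is pandas' built-in vectorized Series.str.split(), which avoids the per-row Python lambda call of apply() and leaves missing values as NaN instead of raising:

import pandas as pd

# Vectorized whitespace split over the whole column; equivalent result to
# applying str.split row by row, but without the Python-level function calls.
df_Train['BULLET_POINTS'] = df_Train['BULLET_POINTS'].str.split()

Note that the resulting lists of tokens inherently take more memory than the original strings, so if the DataFrame itself is very large the growth may be unavoidable. In that case, splitting the original string column one slice at a time keeps fewer intermediate objects alive at once and lowers peak memory (chunk_size below is only an illustrative value, not from the original post):

chunk_size = 100_000  # illustrative slice size, tune to your data
parts = []
for start in range(0, len(df_Train), chunk_size):
    # split only this slice of the raw strings
    parts.append(df_Train['BULLET_POINTS'].iloc[start:start + chunk_size].str.split())
df_Train['BULLET_POINTS'] = pd.concat(parts)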