Configuring output data for Multi Input – Multi Output LSTM with Keras

I am developing a model to predict target X and Y coordinates from feature columns that include another vehicle’s X and Y, the range from the vehicle to the target, and the target’s bearing. This can be visualised as a Pythagorean triangle. My training dataset consists of 1075 CSV files, where each CSV covers roughly 20 time steps as the vehicle moves closer to the target. Every CSV contains the input features (x1, y1, range and bearing) as well as the output features I would like to predict (x2, y2). I load the CSV files into Python and separate inputs from outputs as shown, while also standardising each dataframe to exactly 21 time steps:

import os

import pandas as pd

path, dirs, files = next(os.walk("./log files/"))
file_count = len(files)

inputsList = []
USVpositionsList = []

for i in range(file_count):
    temp_df = pd.read_csv("./log files/" + files[i])
    # keep only the four input feature columns
    temp_df.drop(temp_df.columns.difference(['act_auv_x', 'act_auv_y', 'range_report', 'bearing']), axis=1, inplace=True)
    train_y = pd.read_csv("./log files/" + files[i])
    # keep only the two target columns
    train_y.drop(train_y.columns.difference(['act_usv_x', 'act_usv_y']), axis=1, inplace=True)
    inputsList.append(temp_df)
    USVpositionsList.append(train_y)
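
The step that standardises every run to 21 time steps is not shown above; below is a minimal sketch of the idea (`pad_to_length` is a hypothetical helper name, and padding by repeating the last row is just one possible choice):

```python
import numpy as np

def pad_to_length(arr, n_steps=21):
    """Pad or truncate a (timesteps, features) array to exactly n_steps rows.
    Shorter runs are padded by repeating their last row (an arbitrary choice);
    longer runs are simply cut off."""
    arr = np.asarray(arr)
    if len(arr) >= n_steps:
        return arr[:n_steps]
    pad = np.repeat(arr[-1:], n_steps - len(arr), axis=0)
    return np.vstack([arr, pad])
```

Each dataframe's values would go through something like this before being appended, so every sample ends up with the same (21, features) shape.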


I then combine the lists into single arrays with pd.concat, and normalise the features:

scaler = MinMaxScaler(feature_range=(0, 1))

concatInputs = pd.concat(inputsList)
concatInputs = scaler.fit_transform(concatInputs)

concatUSV = pd.concat(USVpositionsList).values
concatUSV = scaler.fit_transform(concatUSV)
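
One thing worth flagging in my own snippet: calling fit_transform a second time on concatUSV re-fits the same scaler and discards the parameters learned from concatInputs, which matters later when calling inverse_transform. A sketch with one scaler per array (hypothetical names in_scaler/out_scaler, dummy data standing in for my arrays):

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(0)
inputs = rng.random((100, 4)) * 50    # stand-in for concatInputs
outputs = rng.random((100, 2)) * 50   # stand-in for concatUSV

# one scaler per array, so each remembers its own per-column min/max
in_scaler = MinMaxScaler(feature_range=(0, 1))
out_scaler = MinMaxScaler(feature_range=(0, 1))

inputs_scaled = in_scaler.fit_transform(inputs)
outputs_scaled = out_scaler.fit_transform(outputs)

# each scaler can now invert only the array it was fitted on
recovered = out_scaler.inverse_transform(outputs_scaled)
```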

From here, I reshape my input data into (samples, timesteps, features), which takes the form (1075, 21, 4). Here is where my problem begins. I would like the sequences to be fed in as individual samples of (21, 4) and the outputs to be of the form (21, 2). That is, for each timestep in a given sample, I am trying to use 4 features to predict 2 output features. I also use the train_test_split function with a fixed random_state to keep the pairing between inputs and outputs:

reshaped = concatInputs.reshape(1075, 21, 4)

X_train, X_test= train_test_split(reshaped, test_size=0.33, random_state=42)
Y_train, Y_test = train_test_split(concatUSV, test_size=0.33, random_state=42)
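
For what it's worth, I understand train_test_split also accepts both arrays in a single call and applies the same shuffle to each, which guarantees the input/output pairing (toy arrays below, not my real data):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# toy stand-ins: 10 samples of (21, 4) inputs and (21, 2) outputs
X = np.arange(10 * 21 * 4).reshape(10, 21, 4)
Y = np.arange(10 * 21 * 2).reshape(10, 21, 2)

# one call splits both arrays with the same shuffle, so sample i stays paired
X_train, X_test, Y_train, Y_test = train_test_split(
    X, Y, test_size=0.3, random_state=42)
```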

I then build the LSTM layers and feed the inputs in. I should note that these parameters are not optimised and I still have a way to go in understanding LSTM theory.

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM

model = Sequential()
model.add(LSTM(4, input_shape=X_train.shape[1:], activation='relu'))

model.compile(loss="mean_squared_error", optimizer="adam")
model.fit(X_train, Y_train, epochs=100, batch_size=21, verbose=2)

# make predictions
trainPredict = model.predict(X_train)
testPredict = model.predict(X_test)
# invert the scaling (Y_train/Y_test are already 2-D, so no extra list wrapping)
trainPredict = scaler.inverse_transform(trainPredict)
Y_train = scaler.inverse_transform(Y_train)
testPredict = scaler.inverse_transform(testPredict)
Y_test = scaler.inverse_transform(Y_test)

Running this model, I get the error:

Data cardinality is ambiguous:
x sizes: 720
y sizes: 15125

Please provide data which shares the same first dimension.

Do I have to make my outputs 3-D as well, i.e. (720, 21, 2) after the train/test split? I am not entirely sure how to ensure that each input sample corresponds to its desired output sample if my outputs array concatUSV remains 2-D. I set batch_size = 21 to try to match the desired length of the output sequences. To eliminate any confusion: given a test sequence of shape (21, 4), I would ideally like an output of shape (21, 2). Any help on this, as well as on how to configure the dense layers, is appreciated. Thank you.
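
To spell out the shapes concretely (dummy zero arrays with hypothetical names, just to illustrate the pairing I am after):

```python
import numpy as np

# 720 training samples after the split; each input sample is (21, 4)
X_train_like = np.zeros((720, 21, 4))
# if the outputs were reshaped to 3-D, each target sample would be (21, 2)
# and the first dimensions would match, which is what Keras complains about
Y_train_like = np.zeros((720, 21, 2))
```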