Is the answer correct?

def run_bilstm(self, input_sequences, is_training):
    input_embeddings, sequence_lengths = self.get_input_embeddings(input_sequences)
    dropout_keep_prob = 0.5 if is_training else 1.0
    cell = self.make_lstm_cell(dropout_keep_prob)
    rnn = tf.keras.layers.RNN(
        cell, return_sequences=True, go_backwards=True, return_state=True)
    Bi_rnn = tf.keras.layers.Bidirectional(rnn)
    input_embeddings = tf.compat.v1.placeholder(
        tf.float32, shape=(None, 10, 12))
    outputs = Bi_rnn(input_embeddings)
    return outputs
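For reference, here is a minimal self-contained sketch of a bidirectional recurrent layer in TF 2.x eager mode. It uses a stock tf.keras.layers.LSTM rather than the course's custom make_lstm_cell, and the shapes (batch of 4, 10 time steps, 12-dimensional embeddings) are just illustrative:

```python
import numpy as np
import tensorflow as tf

# Wrap an LSTM in Bidirectional: the layer runs the sequence forward
# and backward and concatenates the two outputs along the last axis.
layer = tf.keras.layers.Bidirectional(
    tf.keras.layers.LSTM(8, return_sequences=True))

batch = np.zeros((4, 10, 12), dtype=np.float32)
out = layer(batch)
print(out.shape)  # (4, 10, 16): 8 forward units + 8 backward units
```

In eager mode the layer is called on concrete arrays directly, so no placeholder or session is needed.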

Why do we need this line?
input_embeddings = tf.compat.v1.placeholder(
tf.float32, shape=(None, 10, 12))

It defines the fixed shape of the embedding tensor that will always be fed to Bi_rnn. To supply actual values, we pass them through feed_dict when running the session; a placeholder is a tensor that acts as a handle for feeding a value, not one that is evaluated directly.
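To make the placeholder/feed_dict mechanics concrete, here is a minimal sketch using the TF 1.x compatibility API (in TF 2.x, eager execution must be disabled first; the op built on the placeholder is arbitrary and just for illustration):

```python
import numpy as np
import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # placeholders only exist in graph mode

# Same shape as the course code: variable batch size, 10 steps, 12-dim embeddings.
x = tf.compat.v1.placeholder(tf.float32, shape=(None, 10, 12))
doubled = x * 2.0  # any op built on top of the placeholder

with tf.compat.v1.Session() as sess:
    batch = np.ones((4, 10, 12), dtype=np.float32)
    # The concrete values are supplied here, at run time, via feed_dict.
    result = sess.run(doubled, feed_dict={x: batch})
    print(result.shape)  # (4, 10, 12)
```

Evaluating `x` without feeding it would raise an error, which is exactly the sense in which a placeholder "can't be evaluated directly".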

Isn’t it overriding the first input_embeddings variable we have?

Hi @Ali_Alsawad, thanks for reaching out to us.
Yes, the placeholder assignment rebinds the local input_embeddings name, and it is that version we pass as the required input argument when calling the Bi_rnn object.

Hope it will help!
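The rebinding inside run_bilstm can be sketched with a tiny pure-Python example (the string values are illustrative stand-ins for the two tensors):

```python
def demo():
    input_embeddings = "from get_input_embeddings"  # first binding
    input_embeddings = "from the placeholder"       # rebinds the same local name
    # Only the most recent binding is visible from here on,
    # so this is the value that would reach Bi_rnn(...).
    return input_embeddings

print(demo())  # from the placeholder
```

Python names are rebound, not "overwritten in place": the second assignment simply makes the local name point at the new object.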


Hi @Ali_Alsawad, a placeholder is a variable in TensorFlow to which data will be assigned later on. It lets us define operations and build the graph without requiring any data up front. The data is fed into the placeholder when the session starts and is run. This is how we feed external data into TensorFlow graphs.

The input_embeddings variable is assigned its inputs via a TensorFlow placeholder.

Hope it will help!