In this Python article we are going to discuss the error InvalidArgumentError: required broadcastable shapes at loc(unknown). So let's get started.
InvalidArgumentError: required broadcastable shapes at loc(unknown)
- How to solve InvalidArgumentError: required broadcastable shapes at loc(unknown)
Solution 1
I faced this problem when the number of class labels did not match the output layer's output shape.
For example, if there are 10 class labels and we have defined the output layer as:
output = tf.keras.layers.Conv2D(5, (1, 1), activation = "softmax")(c9)
then, because the number of class labels (10) is not equal to the output shape (5), we get this error.
Ensure that the number of class labels matches the output layer's output shape.
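As a minimal sketch of the fix (assuming a U-Net-style decoder tensor c9, stubbed out here, and 10 class labels), give the final 1x1 convolution exactly as many filters as there are classes:

import tensorflow as tf

n_classes = 10  # number of class labels in the dataset

# Hypothetical decoder feature map standing in for `c9` from the answer above.
inputs = tf.keras.Input(shape=(128, 128, 3))
c9 = tf.keras.layers.Conv2D(16, (3, 3), padding="same", activation="relu")(inputs)

# The number of filters in the final 1x1 convolution must equal n_classes,
# otherwise the loss cannot broadcast the predictions against the labels.
output = tf.keras.layers.Conv2D(n_classes, (1, 1), activation="softmax")(c9)

model = tf.keras.Model(inputs, output)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")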
Original author of this content: Pulkit Ratna Ganjeer
Solution 2
I found several issues here. The model was intended to be used for semantic segmentation with several classes (which is why I had changed the output layer activation to "softmax" and set the "sparse_categorical_crossentropy" loss). Hence, in the ImageDataGenerators, class_mode has to be set to None and classes are not to be provided. Instead, I needed to insert the manually classified images as y. I guess beginners make a lot of beginner mistakes.
Original author of this content: Pulkit Ratna Ganjeer
Solution 3
I got the same issue because the n_classes I used in the model (for the output layer) was different from the actual number of classes in the labels/masks array. I see you have a similar issue here: you have 13 classes, but your output layer is given only 1. The best way is to avoid hard-coding the number of classes and instead pass a variable (like n_classes) to the model, then declare this variable before calling the model, for instance n_classes = y_Train.shape[-1] or n_classes = len(np.unique(y_Train)).
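A short sketch of deriving n_classes from the labels instead of hard-coding it (the y_Train array here is a hypothetical integer-encoded mask array, just for illustration):

import numpy as np

# Hypothetical integer-encoded masks, shape (num_samples, height, width),
# with label values 0..12, i.e. 13 classes.
y_Train = np.random.randint(0, 13, size=(8, 128, 128))

# For integer-encoded masks, count the distinct label values;
# for one-hot encoded masks, the last dimension already holds the count.
n_classes = len(np.unique(y_Train))     # integer-encoded masks
# n_classes = y_Train.shape[-1]         # one-hot encoded masks

print(n_classes)  # 13 here; this is the value the output layer should use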
Original author of this content: Maurice
Solution 4
Check whether the inputs to your ks.layers.concatenate layers are of equal dimension. For example, for ks.layers.concatenate([u7, c3]), check that the u7 and c3 tensors have the same shape in every dimension except the one given by the axis argument of ks.layers.concatenate. The default is axis=-1, i.e. the last dimension. To illustrate: if you call ks.layers.concatenate([u7, c3], axis=0), then every axis of u7 and c3 except the first must match exactly, for example u7.shape = [3, 4, 5] and c3.shape = [6, 4, 5].
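A small runnable sketch of that rule (u7 and c3 are stand-in tensors, not the actual U-Net layers from the question):

import tensorflow as tf

ks = tf.keras  # the answer refers to keras as `ks`

# Hypothetical tensors standing in for u7 and c3 from a skip connection.
u7 = tf.zeros([3, 4, 5])
c3 = tf.zeros([6, 4, 5])

# With axis=0, every dimension except the first must match: (4, 5) == (4, 5).
merged = ks.layers.concatenate([u7, c3], axis=0)
print(merged.shape)  # (9, 4, 5)

# With the default axis=-1, all dimensions except the last must match instead;
# here 3 != 6 in the first dimension, so this call would raise a shape error:
# ks.layers.concatenate([u7, c3])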
Original author of this content: muru
Conclusion
That is all for this tutorial. I hope it helped you. Thank you.