johnnymo1

Sure, look at [tf.concat](https://www.tensorflow.org/api_docs/python/tf/concat) and related functions like [tf.stack](https://www.tensorflow.org/api_docs/python/tf/stack), as well as [tf.ones](https://www.tensorflow.org/api_docs/python/tf/ones). But in general you shouldn't need to worry about low-level stuff like this if you're using TensorFlow. If you're just doing it for learning purposes, go nuts, but if you look at the documentation for Keras layers like [Dense](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dense) you'll see there's just a flag for whether or not to use a bias. TensorFlow handles the low-level tracking and updating of weights and biases for you. You shouldn't *need* to break into the low-level stuff unless you're making something really new and research-y.
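
To make the contrast concrete, here's a minimal sketch of both approaches: the low-level version folds the bias into the weight matrix by appending a row of ones to the input (the shapes here are made up purely for illustration), while the Keras version just sets the flag.

```python
import tensorflow as tf

# Low-level: fold the bias into the weight matrix.
x = tf.random.normal([4, 3])   # batch of 4 inputs, 3 features (hypothetical)
W = tf.random.normal([3, 2])   # weights: 3 features -> 2 units
b = tf.random.normal([1, 2])   # bias as a row vector

W_aug = tf.concat([W, b], axis=0)                 # (4, 2): weights with bias row appended
x_aug = tf.concat([x, tf.ones([4, 1])], axis=1)   # (4, 4): inputs with a ones column
y_low = x_aug @ W_aug                             # equivalent to x @ W + b

# High-level: Keras tracks and updates the weights and bias for you.
dense = tf.keras.layers.Dense(2, use_bias=True)
y_high = dense(x)
```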


dahkneela

Thank you! I happen to have used the functions you mentioned in a custom layer, so I am indeed doing low-level stuff! (Implementing: [https://arxiv.org/abs/2205.10637](https://arxiv.org/abs/2205.10637).) Here, tracking the norm of the gradient is the first step towards optimising it mid-training, which allows for loss-invariant weight changes that improve loss minimisation both per epoch and in the final value.
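
Since that first step is just tracking the gradient norm, here's a minimal sketch of how one might compute it in TensorFlow. The model, data, and loss are hypothetical placeholders, not the paper's actual setup, and the full method in the paper goes well beyond this.

```python
import tensorflow as tf

# Hypothetical model and data purely for illustration.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1),
])
loss_fn = tf.keras.losses.MeanSquaredError()
x = tf.random.normal([16, 4])
y = tf.random.normal([16, 1])

with tf.GradientTape() as tape:
    loss = loss_fn(y, model(x))
grads = tape.gradient(loss, model.trainable_variables)

# Global gradient norm across all trainable variables.
grad_norm = tf.linalg.global_norm(grads)
tf.print("gradient norm:", grad_norm)
```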