educative.io

Removing the Bias

Hello,

In the part where the bias is re-introduced to handle multiple variables, the author states (I'm using a tilde instead of y-hat):
"So far, we have implemented this prediction formula:

~y = x1*w1 + x2*w2 + x3*w3

Now we want to add the bias back to the system, like this:

~y = x1*w1 + x2*w2 + x3*w3 + b"

They then say to add a dummy row of all 1's, represented by x0, to our input matrix, and provide the following solution.

"We can rewrite the formula like this:

~y = x1*w1 + x2*w2 + x3*w3 + x0 + b

Now there’s no difference between bias and weights."

However, there is still a clear difference between the bias and the weights: the bias is simply added, while the other inputs are multiplied by weights. It also reads, at least to me, as if the bias is added to only a single input variable rather than to the matrix as a whole (unless NumPy's broadcasting is applied here as well). Lastly, if x0 really is added rather than multiplied, an entire row of 1's would inflate each prediction by 1.

With that being said, can anyone clarify:
1. Is the bias added to a single row, or is it multiplied?
2. Does it apply to the entire matrix or just that one row?

Thanks!

I think that the proposed solution was meant to be

~y = x1*w1 + x2*w2 + x3*w3 + x0*b

where x0 is a dummy input variable whose value is always 1, so

x1*w1 + x2*w2 + x3*w3 + x0*b = x1*w1 + x2*w2 + x3*w3 + b = ~y

and b can be thought of as w0.
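The trick above can be sketched in NumPy. The matrix, weight values, and variable names below are made up for illustration, not taken from the course's code; the point is that prepending a column of ones turns the bias into an ordinary weight, so a single matrix multiplication handles everything:

```python
import numpy as np

# Three input variables (columns) for four examples (rows) -- made-up data
X = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0],
              [1.5, 2.5, 3.5]])

# Prepend a column of ones to every example; this is the dummy x0
X_with_bias = np.insert(X, 0, 1, axis=1)

# The bias is now just another weight, w0 = b
w = np.array([0.5, 0.1, 0.2, 0.3])   # [b, w1, w2, w3]

# One matrix multiplication applies weights and bias to the whole matrix
y_hat = X_with_bias @ w

# Same result as multiplying by the weights and adding b separately
# (here broadcasting adds the scalar b to every prediction)
y_hat_explicit = X @ w[1:] + w[0]
assert np.allclose(y_hat, y_hat_explicit)
```

So x0 is multiplied by b, not added, and because x0 = 1 for every row of the matrix, the product x0*b contributes exactly b to every prediction.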


Course: Fundamentals of Machine Learning for Software Engineers - Learn Interactively
Lesson: Put It All Together - Fundamentals of Machine Learning for Software Engineers