educative.io

Is loading the model inside a DoFn scalable?

Hi, in the implementation of this course, is the model cached? How does loading the model as a side input compare to calling a model served behind an API endpoint via HTTP requests?


Course: https://www.educative.io/collection/10370001/6068402050301952
Lesson: https://www.educative.io/collection/page/10370001/6068402050301952/6179603266666496

Hi @Y_C,

Yes, you are right that we cache the model in this course. In the Model Training lesson, we train and save the model, and in the next lesson we load and use the saved model.

I hope this answers your query; please let me know if anything is still unclear.

Thank You :blush:

Where is the cache actually implemented? Do you mean that loading the model from the storage bucket every time counts as caching? That is not scalable if every worker has to re-load the file from storage each time a predict request comes in.
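For context on the scalability concern raised above: in Apache Beam, a common pattern is to load the model once per worker in `DoFn.setup()` (which runs once per DoFn instance) rather than once per element in `process()`. The sketch below is plain Python with no Beam dependency, so it only mimics that lifecycle; `load_model`, `PredictDoFn`, and the lambda "model" are hypothetical stand-ins for an expensive load from a storage bucket, not the course's implementation.

```python
# Sketch of the "load once per worker" pattern that Beam's DoFn.setup()
# enables. Plain Python; names here are illustrative, not Beam APIs.

load_count = 0  # counts how many times the expensive load actually runs

def load_model():
    """Hypothetical expensive load (e.g. fetch + deserialize from a bucket)."""
    global load_count
    load_count += 1
    return lambda x: x * 2  # stand-in "model": doubles its input

class PredictDoFn:
    """Mimics beam.DoFn's lifecycle: setup() runs once per worker instance,
    process() runs once per element, so the model is cached in memory and
    not re-fetched for every prediction."""

    def setup(self):
        # Cached on the instance: subsequent process() calls reuse it.
        self.model = load_model()

    def process(self, element):
        yield self.model(element)

# Simulate one worker handling several elements.
fn = PredictDoFn()
fn.setup()
results = [out for e in [1, 2, 3] for out in fn.process(e)]
print(results)     # → [2, 4, 6]
print(load_count)  # → 1  (loaded once per worker, not once per element)
```

Compared with calling a model behind an HTTP endpoint, this in-process approach avoids network latency per element but ties model memory to every worker; an endpoint centralizes the model and scales it independently at the cost of a network round trip per request (or per batch).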