I thought that for a cache to be useful in speeding up lookups, it should live in the Application Servers' RAM, that is, each Application Server would hold its own local URL cache. In the solution to this question, however, the URL cache is placed on a dedicated group of servers instead, which, as I see it, defeats the purpose of fast lookup, since we now add a network round trip (arguably worse than a local disk lookup). So my question is: why would a lookup against the cache servers be any better or faster than fetching the entries directly from the DB servers themselves?
Course: https://www.educative.io/collection/5668639101419520/5649050225344512
Lesson: https://www.educative.io/collection/page/5668639101419520/5649050225344512/5668600916475904