What's the point of cache servers?

I thought that for a cache to be useful in speeding up lookups, it should be placed in the application servers' RAM, i.e., each application server holds its own local URL cache store. In the solution to this question, however, the URL cache is placed on a dedicated group of servers instead, which, as I see it, defeats the purpose of fast lookup, since we now add a network round trip (which can be even worse than a local disk lookup). So my question is: why would a lookup against cache servers be any better/faster than just fetching the entries directly from the DB servers themselves?
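For context, the per-server setup I have in mind is something like this minimal sketch (the `DATABASE` dict and key names are just stand-ins I made up, not anything from the lesson): each application server keeps its own in-process cache, so repeated lookups never leave the server's RAM.

```python
from functools import lru_cache

# Stand-in for the backing database (hypothetical data).
DATABASE = {"abc123": "https://example.com/long/original/url"}

@lru_cache(maxsize=10_000)
def resolve_short_url(short_key: str) -> str:
    # A cache miss falls through to the "DB"; a cache hit is served
    # straight from this server's own memory, no network round trip.
    return DATABASE[short_key]

resolve_short_url("abc123")            # first call: miss, reads the "DB"
resolve_short_url("abc123")            # second call: hit, served from local RAM
print(resolve_short_url.cache_info())  # shows hits=1, misses=1
```

The obvious downside, which I assume is why the solution avoids it, is that each server caches independently, so the same hot URLs get duplicated across every server's memory.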




Hi @shishio,

The design suggests that the cache should be placed in the application layer, and you are right: putting the cache on separate servers would defeat the fast-lookup purpose, but that is not the case discussed in this design. Please find below the link to the respective section of the lesson:

If you need any further help, please let us know :slight_smile:

In the solution, the component diagram after Part 10 shows the cache as a separate service behind an additional LB, so it is definitely not part of the application layer here.
So how beneficial is it? Do we expect a lookup in the cache to be much faster than a lookup in the database by primary key, given that we incur network latency in both cases?
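To make the comparison concrete, here is a toy cache-aside sketch. All the names and latency numbers are illustrative assumptions, not figures from the lesson; the point is that both the remote cache and the DB pay a network round trip, but a cache hit is answered from RAM, while a DB read may also pay disk I/O, query parsing, and locking, and every request the cache absorbs is load taken off the DB.

```python
# Illustrative latencies (made-up numbers for the sketch).
NETWORK_RTT_MS = 0.5
CACHE_RAM_READ_MS = 0.1
DB_QUERY_MS = 5.0  # assumed average for an indexed primary-key read

remote_cache: dict = {}                                  # stand-in for e.g. Memcached/Redis
database = {"abc123": "https://example.com/long/url"}    # stand-in for the DB
db_reads = 0

def lookup(short_key: str):
    """Cache-aside read: return (url, simulated latency in ms)."""
    global db_reads
    if short_key in remote_cache:                        # cache hit: RTT + RAM read
        return remote_cache[short_key], NETWORK_RTT_MS + CACHE_RAM_READ_MS
    db_reads += 1                                        # cache miss: fall through to DB
    url = database[short_key]
    remote_cache[short_key] = url                        # populate cache for next reader
    return url, NETWORK_RTT_MS + DB_QUERY_MS

_, miss_latency = lookup("abc123")   # first read: miss, goes to the DB
_, hit_latency = lookup("abc123")    # second read: served from cache RAM
print(miss_latency, hit_latency, db_reads)
```

Note also that in the diagram the cache is shared, so a URL cached after one server's miss is a hit for every other server, unlike per-server local caches where each server warms its own copy.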