In Part 1, we discussed the horizontal scaling of the application layer – scaling out servers to handle the PHP code concurrently via load balancers and other means. We covered sharing local data between instances, touched on potential optimizations, and explained the basics of load balancers.
The application layer, however, isn't the only thing that needs scaling. As demand for our app grows, so does the demand for read/write operations on the database. In this part, we'll look at scaling the database, explain replication, and cover some common pitfalls you might want to avoid.
As in Part 1, we need to mention optimization first. Indexing your database properly, keeping each table limited to its essential columns and moving secondary information into separate tables (users_basic + users_additional + users_alt_contacts, etc. – a form of vertical partitioning, often discussed alongside database sharding – a complex topic warranting its own article for sure), and issuing small, atomic queries instead of large on-the-fly calculations – all these methods can help you speed up your databases and avoid bottlenecks. There's one aspect that can help even more, though – the query cache.
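As an illustrative sketch (the table and column names here are hypothetical, not from any real schema), the vertical split and indexing described above might look like this in MySQL:

```sql
-- Core data queried on almost every request stays in a small, "hot" table
CREATE TABLE users_basic (
    id            INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    email         VARCHAR(255) NOT NULL,
    password_hash CHAR(60)     NOT NULL,
    UNIQUE KEY idx_email (email)   -- index the column used for lookups
);

-- Rarely needed details move to a secondary table, joined only when required
CREATE TABLE users_additional (
    user_id           INT UNSIGNED NOT NULL PRIMARY KEY,
    biography         TEXT,
    newsletter_opt_in TINYINT(1)   NOT NULL DEFAULT 0,
    FOREIGN KEY (user_id) REFERENCES users_basic (id)
);

-- A small, atomic query touches only the hot table and its index
SELECT id, password_hash FROM users_basic WHERE email = 'user@example.com';
```

The payoff is that the common login/lookup path scans a narrow, well-indexed table, while the wide, rarely read columns stay out of its way.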
Database servers typically cache the results of recently executed SELECT queries. This allows a client (our app) to receive the data from the cache instead of waiting for the query to execute again. Such an approach saves CPU cycles, produces results faster, and frees the servers from doing unnecessary work. But the query cache is limited in size, and some data sets change more often than others. Caching all of our data would be wasteful, since fast-changing information would be invalidated almost as soon as it was cached. While the cache can be fine-tuned, depending on the database vendor, there's an approach that lends itself quite nicely to the query cache solution – contextual grouping of servers.
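In MySQL 5.x, for example (the query cache was deprecated in 5.7.20 and removed entirely in 8.0), the cache is sized in the server configuration, and individual queries can opt in or out – a sketch, with the sizes below chosen arbitrarily for illustration:

```ini
# my.cnf – query cache settings (MySQL 5.x only)
[mysqld]
query_cache_type  = 1     # 0 = off, 1 = cache all cacheable SELECTs, 2 = on demand only
query_cache_size  = 64M   # total memory reserved for cached result sets
query_cache_limit = 1M    # result sets larger than this are never cached
```

```sql
-- Frequently read, rarely changed data benefits from the cache
SELECT SQL_CACHE name, price FROM products WHERE category_id = 3;

-- Fast-changing data should skip it to avoid constant invalidation
SELECT SQL_NO_CACHE balance FROM accounts WHERE user_id = 42;
```

Note that MySQL invalidates every cached result for a table whenever that table is written to, which is exactly why caching volatile data buys nothing.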