The biggest challenges of today's web development are performance, scalability, rapid development and maintainability of the source code. Developers usually settle quickly on abstraction models such as MVC, which cover these requirements poorly. Since my main concern is scalable and secure code, I've developed my own model, which meets these needs better. Below I will explain the model and refer to my SQL class that implements it.
Scaling model
When I devoted myself to the topic of scalability about 4 years ago, I found only sparse information about it. The only useful information came from Brad Fitzpatrick, who wrote in a PDF that his system scales from one machine up to thousands of servers. It took me a while to develop a working system from this idea. From today's perspective, where the term sharding is generally known, it is perhaps hard to imagine how difficult this conclusion was back when I needed to scale my systems.
Today there are several shard architectures, and my solution is only one of many, but I haven't had any case where this approach didn't work. I'm going to limit myself to scaling a user database in the examples below, but in theory you can use this approach for almost every entity, such as documents, pages, albums, videos or something else.
Basically, I wanted to be able to move users between databases, yet have most lookups performed with simple calculations instead of additional database requests. I'm using a simple trick which works without hashes: users are spread over a pre-determined cluster size by a modulo calculation on their integer ID. If the modulo calculation yields a result greater than zero, we use this value as the cluster-ID. Otherwise a check is performed whether the cluster-ID of the current user is being asked for. If this is also not the case, a lookup is made on a global management database that sits hierarchically above the cluster and stores which user is located on which cluster node. Thus, one ID per range is reserved as a lookup-ID so that users can be moved to other systems, and the rest can be calculated directly. The only thing you have to keep in mind is to skip these reserved IDs at the end of every range, i.e. create gaps in the registration process (with a cluster size of 1024, for example, the IDs 1024, 2048, 3072 and so on are never assigned to users; ID 5000 then maps directly to cluster 5000 % 1024 = 904). The cluster lookup of an ID boils down to this little function:
function getCluster($uid) {
    // IDs that are not a multiple of CLUSTER_SIZE map directly to a cluster
    if ($uid % CLUSTER_SIZE > 0) {
        return $uid % CLUSTER_SIZE;
    }
    // The cluster of the currently logged-in user is cached in the session
    if ($uid == $_SESSION['user_id']) {
        return $_SESSION['user_cluster'];
    }
    // Otherwise ask the global lookup node (cluster 0, see the examples below)
    DB::setCluster(0);
    return DB::value('SELECT cluster FROM user WHERE uid=' . (int) $uid . ' LIMIT 1');
}
At the beginning, you can keep the management database and all cluster databases on a single server and switch between the cluster nodes using mysqli_select_db(). But make sure your table cache is big enough, so that the server doesn't spend most of its time reopening tables. If your project is successful, each node can be turned into a "mini-cluster" which manages itself. The lookup node, which could become a SPOF, can also be enlarged and operated, for example, as a MySQL Cluster setup. Thus, self-managing database systems with several thousands of servers are relatively easy to deploy - except for the many little things which can break this theory.
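To illustrate the single-server start, here is a minimal sketch of the database switching; note that the naming scheme (a "lookup" database plus cluster_1, cluster_2, ...) is my assumption for this example, not something the class prescribes:
$mysqli = new mysqli('127.0.0.1', DB_USER, DB_PASS);
function selectClusterDB(mysqli $mysqli, $cluster) {
    // Cluster 0 is the global lookup node, everything else is a shard
    $mysqli->select_db($cluster == 0 ? 'lookup' : 'cluster_' . $cluster);
}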
Security model
SQL injections should be no problem anymore, because a lot of clarification has been done by many sites and books. Unfortunately, out of laziness, beginners' snippets still find their way into popular open-source software, which in turn fills mailing lists with security vulnerabilities. A look into the source of one or the other software reveals that security concepts were mostly implemented ill-conceived and sometimes inconsistently. Thus, there are scripts full of htmlspecialchars(), mysqli_real_escape_string(), addslashes() and so on. Certainly, all of them have their right to exist, and one cannot blame a language for insecure applications, but such a confusing mix of functions quickly becomes unmanageable.
That's why I've chosen a manageable security concept in which I send everything the user is allowed to enter unfiltered to the database. There are no filters applied to request variables, no magic quotes and other rubbish. The only things that are tested are well-known formats such as e-mail addresses or phone numbers; everything else is only checked right before it reaches the database, so that there is no chance to introduce any malicious code. Looked at that way, parameter bindings are the best and safest solution for this problem, but they are not nice to use and somewhat inflexible. Under the heading "Transparency and abstraction" below, I will discuss a more elegant solution and how I approached the whole thing.
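For comparison, this is what classic parameter binding looks like with plain mysqli (variable names are made up for the example; get_result() requires mysqlnd):
$stmt = $mysqli->prepare('SELECT * FROM user WHERE UID=? OR UName=? LIMIT 1');
$stmt->bind_param('is', $user_id, $user_name);
$stmt->execute();
$res = $stmt->get_result();
It works, but the boilerplate for every single query is exactly what I wanted to avoid.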
One more thing about the approach just discussed - that everything the user sends is handed to the database, provided that it is impossible to inject SQL from outside: increasingly, it is not SQL vulnerabilities that are exploited but XSS, which would still be easily possible with this approach. You could use htmlspecialchars() to defuse the input, but that is inflexible, since it often happens that you want to quickly modify a string just before it is sent to the user, which re-opens a potential security hole. This article is not about XSS, but I wanted to mention that you should take care of this in your application, too. I'll focus on this topic in another post, when it comes to writing an easy-to-use and performant "view".
Transparency and abstraction
Neither scaling nor hardening an application is rocket science anymore. To make both points as simple as possible for developers and to increase the speed of development, I've introduced a transparent layer that resolves both concerns in the background. The goal was to be able to write queries in the following form:
DB::query('SELECT * FROM user WHERE UID=#u OR UName=#s LIMIT 1', $user_id);
In order to make this SQL-API possible (by the way, I also implemented a memcached cluster management in this way), I internally use the function xsprintf() from my infusion extension. If you don't want to use the extension, you can start with the slower PHP implementation, or even more easily with the following preg_replace_callback() substitution:
preg_replace_callback('/#([0-9]*[a-z])/i', $cb, $str);
But what does the query above actually say? I use hash-codes to implement type hints and to be able to perform the simplifications necessary for scaling out. In detail, the query means that the user info of the currently logged-in user, or of the user whose name is given as parameter, should be returned. The hash-codes are implemented in the specified callback function; a minimal sketch of such a callback follows after the list. In principle, it is the same as parameter binding with a slightly nicer syntax and the possibility to do some extra things. I would like to briefly document the hash-codes I've implemented in my SQL class:
- #u - Inserting the user-ID of the currently logged-in user session with automatic cluster selection
- #U - Inserting the user-ID of the currently logged-in user session without cluster selection
- #s - Inserting a string and securing it
- #i - Inserting an integer
- #f - Inserting a float
- #o - Inserting a user-ID and automatically selecting the cluster
- #x - Inserting binary data as hexdump
- #X - Inserting a hex string to store it as blob or binary
- #n - Same as #s with the exception that an empty value is stored as NULL
- #h - Inserting an htmlspecialchars()-encoded string
- #r - Inserting a remote address as unsigned integer
- #d - Inserting a DATE value from unix timestamp
- #t - Inserting a DATETIME value from unix timestamp
- #v - Inserting a storage-friendly boolean by using NULL and the empty string as values
- #a - Inserting a CSV value of array elements
- #j - Inserting a JSON value of array elements or objects
- #J - Inserting the required query part for doing the subquery elimination trick I've already described
Most of the hash-codes are only for formatting data, but #u and #o are probably the most important codes for transparent scaling.
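As promised, here is a minimal sketch of such a substitution callback. It only handles #i, #s and #u and consumes its arguments from an array; the real xsprintf() implementation covers all of the codes above (the function name bindQuery() is made up, $_SESSION['UID'] is the session key listed in the requirements at the end):
function bindQuery(mysqli $db, $query, array $args) {
    $i = 0;
    return preg_replace_callback('/#([0-9]*[a-z])/i', function ($m) use ($db, $args, &$i) {
        switch ($m[1]) {
            case 'u': // current user, consumes no argument
                return (int) $_SESSION['UID'];
            case 'i': // integer
                return (int) $args[$i++];
            case 's': // escaped and quoted string
                return "'" . $db->real_escape_string($args[$i++]) . "'";
        }
        return $m[0]; // unknown code, leave untouched
    }, $query);
}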
In the query example above, I mentioned the query() method, which I actually never implemented; instead I make a read/write split at this point to separate master-writes from slave-reads. A master can also be read from, but not via the write() method, as reads and writes differ fundamentally:
reads
If a read query is executed, the list of open database connections is checked for a qualified connection. If one exists, the first is used for the read access. If there is no such connection, a new one is established to a server which we know has the data:
$res = DB::read('SELECT * FROM album WHERE U_ID=#u');
The result is a mysqli result, which can then be used with the normal mysqli API. Furthermore, I implemented three more methods, get(), gets() and value(), to return only one row, all rows or only the value of a single column.
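For illustration, the wrappers are used like this (the second and third queries are made up for the example):
$row  = DB::get('SELECT * FROM user WHERE UID=#u LIMIT 1');  // one row
$rows = DB::gets('SELECT UID, UName FROM user LIMIT 10');    // all rows
$name = DB::value('SELECT UName FROM user WHERE UID=#u');    // a single value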
As described on my PHP hacking page, I patched PHP 5.2 to have native type casting, and now PHP 5.3 with mysqlnd ships this feature natively (maybe because I nagged long enough to get this feature implemented).
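With mysqlnd, native type casting can be enabled via the MYSQLI_OPT_INT_AND_FLOAT_NATIVE option; a small illustration (database name is made up):
$mysqli = mysqli_init();
$mysqli->options(MYSQLI_OPT_INT_AND_FLOAT_NATIVE, 1);
$mysqli->real_connect('127.0.0.1', DB_USER, DB_PASS, 'cluster_1');
// integer and float columns now arrive as PHP ints/floats instead of strings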
writes
If a write query is executed, all required connections are established, if they do not already exist, and the query is then executed on each affected master. For example, to store user links, the following simple query suffices to write everything:
DB::write('INSERT INTO friend SET UID_Applicant=#o, UID_Accepter=#o, CreationTime=#t', $uid1, $uid2, time());
In the case that both users are in the same database, everything is handled with one single connection, otherwise two connections to the corresponding servers are required.
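As a rough sketch (not the actual implementation of the class), the fan-out boils down to executing the bound query once per distinct affected master, using the cluster calculation from above:
$clusters = array_unique(array(getCluster($uid1), getCluster($uid2)));
foreach ($clusters as $cluster) {
    DB::setCluster($cluster);
    // ... execute the bound INSERT on this master ...
}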
Transparent pagination optimization
A common problem is pagination. Common problem? Why is pagination a problem? Even if you cover your ORDER BY clause with an index, the LIMIT is the problem. Normally, you would implement a pagination like this:
$res = DB::read('SELECT * FROM search ORDER BY rank DESC LIMIT #i,#i', ($cur_page - 1) * $num_page, $num_page);
Everything is fine until you jump to, say, page 2000. In that case, the database server reads 20,000 rows and returns only the last 10. Another problem with the query above is that the column "rank" is normally calculated on the fly and does not have an index. I worked out a solution, which works for up to 1 million rows, by creating a temporary table, adding an index on it and using an optimized SELECT statement to finally return the data. All these steps are done transparently, and you can rewrite your query like this:
DB::setOptions(array(
    'sort' => 'rank',
    'dir' => 'desc',
    'offset' => ($cur_page - 1) * $num_page,
    'limit' => $num_page
));
$res = DB::read('SELECT * FROM search');
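Behind the scenes this could expand to roughly the following steps. This is only a sketch on a single plain mysqli connection $db (temporary tables are connection-local, and the temporary table name is made up); the class handles all of this transparently:
$db->query('CREATE TEMPORARY TABLE tmp_search (INDEX(rank)) SELECT * FROM search');
$res = $db->query(sprintf('SELECT * FROM tmp_search ORDER BY rank DESC LIMIT %d,%d',
    ($cur_page - 1) * $num_page, $num_page));
$db->query('DROP TEMPORARY TABLE tmp_search');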
This solution is not perfect either, because there is still a big copy process in the background, which limits the speed of this approach. If I find a better way of doing this, I'll add it to the SQL class to improve the whole thing. But as with everything that involves a huge amount of data, you'll eventually need to implement your own specialized solution.
Examples
<?php
include 'db.php';
// Write to the super cluster
DB::super('UPDATE lookup SET UUpdate=NOW() WHERE UID=#U');
// Auto-selection of the cluster with a constant user-id
$data = DB::value('SELECT /* User: #o */ INET_NTOA(#r)', 15, $_SERVER['REMOTE_ADDR']);
// -> 192.168.138.1
// Directly set the cluster via the API
DB::setCluster(4);
$data = DB::value('SELECT #h', '<body>');
// -> &lt;body&gt;
// Force the API to get the cluster mathematically or by lookup
DB::setCluster(6443, true);
$data = DB::value('SELECT #J FROM (SELECT 1 A UNION SELECT 2 UNION SELECT 3)X', array('A'));
// -> [{"A":'1'},{"A":'2'},{"A":'3'}]
// It's also possible to select the super cluster directly
DB::setCluster(0);
$data = DB::value('SELECT #s', "It's a great day!");
// -> It's a great day!
// Using the two background tricks JSON and array imploding
DB::setCluster(14);
$data = DB::value('SELECT #J FROM (SELECT #i wtf UNION SELECT 2 UNION SELECT 3)X WHERE wtf IN(#a)', array('wtf'), 1, array(2, 3));
// -> [{"wtf":'2'},{"wtf":'3'}]
// Getting the user information of the currently logged in user
$data = DB::get('SELECT * FROM user WHERE UID=#u LIMIT 1');
To set up a one-node cluster, you can use the following:
<?php
apc_store('database-0', '127.0.0.1');
for ($i = 1; $i <= CLUSTER_SIZE; $i++) {
    apc_store('database-' . $i, array(
        'master' => '127.0.0.1',
        'slaves' => array(
            '127.0.0.1',
            '127.0.0.1',
            '127.0.0.1'
        )
    ));
}
?>
Closing words
In general, this SQL class is a connection manager. I thought about implementing it as a PHP extension, as a layer on top of mysqlnd, as a mysqlnd plugin, as a patch for the parameter bindings and so on. All of these ideas remove the flexibility of the current implementation, because in its current form it is simple and can remain a pure PHP implementation without modifications to the PHP core. Also, this layer sits directly on top of the PHP mysqli API and should not impact the performance too much - maybe even the contrary, performance could be better in combination with the "view" I'll publish in a different post.
To be able to use the SQL class, the following requirements must be met:
- PHP 5.3 for mysqlnd with native type casting (or patch your PHP 5.2 installation)
- APC for the cluster config (could simply be replaced by arrays, but I like this more than stupid config arrays)
- Definition of $_SESSION['UID'] and $_SESSION['SRV'] to get #u working
- Definition of some constants like username, password, database name and cluster size
- After initially dumping debug messages via console.log(), I switched to FirePHP to send them to Firebug
- And sure, the class itself: Download
One last word: the class is only ONE possible implementation, written for demonstration purposes. If you like the idea and want to use it, you may have to make a few changes to the source. This class is definitely not a copy-paste class which can simply be installed and will solve all of your problems (but maybe some of them ;-) !