Every few weeks, someone publishes an article claiming that it’s faster to use single quotes rather than double quotes, or that you should use echo instead of print(). Most of these tips are bunk; that is, the time we spend talking about them far exceeds the CPU time saved by implementing them.
Micro optimization doesn’t work. So why, then, is this post called “micro optimizations that matter”? The optimizations below could be described as micro – not in the small amount of performance gained, but in the very minor (if any) changes required to your code to make use of them. All of them are standard optimizations you should consider, and all of them can offer considerable performance improvements.
The fastest, easiest, and least painful way to get a performance boost is to enable caching on your server. There are a number of caches available to you.
First, if your database’s query cache is turned off, turn it on. For MySQL, that means consulting the documentation and configuring the query cache to cache all queries. This helps because it saves the database from rerunning identical queries. You should also have an opcode cache installed; I like APC. APC automatically caches the opcodes produced when your scripts are compiled. There are ways to boost the performance of APC that you can investigate as well.
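As a rough sketch of what that looks like in my.cnf (the sizes here are illustrative, not recommendations – tune them for your workload and watch the cache hit rate):

```ini
# my.cnf -- enable the MySQL query cache (MySQL 5.x)
[mysqld]
query_cache_type  = 1      # 1 = cache all cacheable SELECT statements
query_cache_size  = 64M    # total memory reserved for cached result sets
query_cache_limit = 1M     # skip caching result sets larger than this
```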
Neither of those two suggestions requires any code changes on your part, but both will yield improvements in performance, sometimes as great as 300% (where APC is concerned). These “micro optimizations” are crucial. There are also some things you can do with caching that do require code changes, but they will make your application better for it.
The first is to use either APC or Memcached to cache objects and data points. For example, there’s no reason you should be asking the database (regardless of whether the query cache is enabled) to generate your blog post list every time someone visits your blog. Put that list in the cache. You can even store your sessions in memcached to eliminate disk IO. These changes require some code, but they’re worth it.
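A minimal sketch of the blog-post-list case, assuming the pecl Memcached extension and a memcached daemon on localhost; fetchPostsFromDatabase() is a hypothetical stand-in for your real query code:

```php
<?php
// Cache the blog post list in memcached so the database is only
// queried on a cache miss, at most once per five minutes.
function getRecentPosts(Memcached $cache): array
{
    $posts = $cache->get('recent_posts');
    if ($posts === false) {                        // cache miss
        $posts = fetchPostsFromDatabase();         // hypothetical query helper
        $cache->set('recent_posts', $posts, 300);  // keep for 5 minutes
    }
    return $posts;
}

$cache = new Memcached();
$cache->addServer('127.0.0.1', 11211);
$posts = getRecentPosts($cache);
```

The same pattern works for any data point you read far more often than you write.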
Eliminate Any Sort Of Logged Errors
Disk IO is one of the things that can kill your application’s performance. Disks are usually very slow; people pay attention to how large a disk is, but not how fast it is, and memory is always faster. Unfortunately, one of the ways people kill their apps is by logging unnecessary warnings, errors and notices. I say “unnecessary” because they’re things that should have been resolved before the application went into production. Get rid of the notices that are avoidable, and only have errors raised when something truly does go wrong.
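A common production setup (a sketch – adjust the path and levels to taste) keeps logging enabled for genuine failures while you fix the avoidable noise at its source:

```ini
; php.ini -- production error handling (illustrative values)
display_errors  = Off      ; never show errors to visitors
log_errors      = On       ; do keep logging real failures
error_reporting = E_ALL    ; surface every notice in development,
                           ; then fix them so nothing is written in production
error_log       = /var/log/php_errors.log
```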
Enable Output Buffering For Everything
It is possible to set an INI directive that enables output buffering on all of your pages. This is a good thing, because it means that Apache gets the rendered output of your PHP application in a chunk, rather than piecemeal, improving performance and reducing system calls (see this PDF for more).
You can turn on output buffering in the php.ini file with the following directive:
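The directive in question is output_buffering; a typical setting looks like this (4096 is a common chunk size, or use On to buffer the whole page):

```ini
; php.ini
output_buffering = 4096   ; buffer output in 4 KB chunks before sending
```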
Will output buffering solve your problems if your application is resource-intensive or badly written? No. But it will help improve the performance of a well-written application.
Make Use Of A Content-Delivery Network
One of the fastest and easiest ways to reduce the total load time of a page and the load on your server is by moving things like images and videos to a dedicated server or content delivery network. For example, you can make use of Amazon’s S3 service. This low-cost service will allow you to have another server provide images, reducing the load on your own server quickly, cheaply, and without too much code modification. Less load on Apache means that you have the ability to serve more pages.
Determine What Data Doesn’t Need To Be Real-Time
If you are still pinging the database on every request, you’re doing it wrong. No, seriously. Chances are good that if you’re like 99% of other people, you have some data that can be stale. By “stale” I mean “not updated on every request.” For example, the comments count on your blog might be something that can be stale. Or the actual comments themselves. Or maybe the popularity of posts. Or perhaps a counter or some other dynamic content. See what you can make stale versus real time, and use that to reduce load.
Allowing data to go stale reduces load because, instead of having to ping the database, the server can just serve up what it already knows. Combine that with memcached or APC, and you can avoid even a disk read to get your data. That can result in significant performance improvements.
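The comment-count example might look like this with APC – a sketch, assuming the APC extension is loaded; countCommentsInDatabase() is a hypothetical stand-in for your real query:

```php
<?php
// Let a comment count go stale for up to 60 seconds. Only one request
// per minute per post actually touches the database.
function getCommentCount(int $postId): int
{
    $key   = "comment_count_$postId";
    $count = apc_fetch($key, $success);
    if (!$success) {                              // cache miss or expired
        $count = countCommentsInDatabase($postId); // hypothetical query helper
        apc_store($key, $count, 60);               // stale for up to one minute
    }
    return $count;
}
```

The TTL is the knob: the more staleness your data can tolerate, the less load your database carries.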
Consider Using An Autoloader
Autoloaders are something we’ve discussed before. The SPL autoloader lets you register a stack of autoload functions that are consulted whenever you reference a class that doesn’t exist in the current scope, each one given a chance to load that class.
Autoloaders can result in performance improvements. Here’s why: each time you include a file, that file has to be compiled. If you include six files, that’s seven times the compiler has to be run (one for each included file, and once for the file you’re executing). APC improves this, but only to a degree; you’re still running the stat() calls and other junk that goes along with it. If you’re using all seven files, fine. But if you’re not, you’re wasting CPU time compiling code that’s sitting there.
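A minimal autoloader looks like this. The naming scheme is an assumption – it maps PEAR-style underscores to directories under lib/ (e.g. My_Blog_Post becomes lib/My/Blog/Post.php) – so adjust it to your own layout:

```php
<?php
// Register an autoloader: a class file is only located, stat()ed and
// compiled the first time the class is actually used.
function myAutoload($class)
{
    $path = dirname(__FILE__) . '/lib/' . str_replace('_', '/', $class) . '.php';
    if (is_file($path)) {
        require $path;
    }
}
spl_autoload_register('myAutoload');

$post = new My_Blog_Post();  // triggers myAutoload() on first use
```

Classes you never instantiate on a given request never cost you a compile.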
This, perhaps, takes the most work, because it requires you to reorganize your application into classes and objects, using OOP. This certainly is not for everyone. However, you can gain significant performance improvements through this method.
Some Additional Notes
There are some things that definitely do not make your code faster, even though people sometimes argue that they do. For example, less code does not necessarily equate to more speed. This might seem like a paradox, but it’s true: reducing lines of code does not necessarily have an impact on the speed of your application.
It’s good to refactor and reduce the lines of code being used, but this should not be part of your performance strategy, or your optimization strategy. It should be part of your code cleanliness strategy.
It’s always a good idea to architect first and optimize later. Often, you can get better performance just by improving the architecture of your application, rather than implementing these or any other strategies. You should always establish the business case for optimizations, too. Make sure you know why you’re doing them and what you’ll gain, and make sure you go through a process before you just start mucking around with your code. Good luck!
Brandon Savage is the author of Mastering Object Oriented PHP and Practical Design Patterns in PHP. Posted on 10/16/2009 at 1:00 am
Martins wrote at 10/16/2009 4:14 am:
I completely agree. Just wanted to add – be suspicious about external resources. One slow SQL query is worth a hundred single-quote optimizations. The same goes for external and uncached data sources, APIs and files.
Adam wrote at 10/16/2009 4:48 am:
Rasmus Lerdorf claims that autoloading is slow (http://pooteeweet.org/blog/538/). I ran a test with 1000 classes, and autoloading gave results similar to require_once. It seems that PHP’s stat cache helps in this scenario.
On the other hand, when APC comes into consideration, I’d believe that autoload slows things down because of its nondeterministic nature – unless you compile all core code into one file. It might be the reason why DooPHP is blazingly fast in the hello world test (http://doophp.com/benchmark).
Brandon Savage (@brandonsavage) wrote at 10/16/2009 7:21 am:
Adam, I’ve seen this blog post, but I have to disagree with Rasmus. I’ve seen performance enhancements from autoload, and I can also tell you the time it takes to develop using something like autoload is greatly reduced (because you have fewer bugs).
Simon wrote at 10/16/2009 9:02 am:
Thanks Brandon, such articles are always useful !
Adam, even if autoload had been slower during your tests, that wouldn’t have meant anything. Autoload isn’t an alternative to require/include. Without autoload you might be including 100 files; with autoload, only 10 might be loaded (for example). That’s the performance gain: including fewer files, not the speed of each include ;)
Tomaž Muraus (@KamiSLO) wrote at 10/16/2009 10:21 am:
Another good post.
Anyway, enabling MySQL query cache for all queries is not always a good idea, so remember to analyze the queries and query cache efficiency after enabling it or changing its size.
In some cases it can make execution even slower (e.g. if the data in tables changes a lot).
I know this post is not about hardware optimizations, but SSD drives can offer a big performance boost in heavy-read database applications.
Matthew Weier O'Phinney (@mwop) wrote at 10/16/2009 11:22 am:
@Adam — Rasmus’ post was written pre-5.2.0. A lot of work was done for the 5.2 series to optimize the realpath cache in PHP. Once those optimizations were in place, autoloading became not only a viable option, but a more performant option — particularly if the files being loaded do not have require_once calls within them.
When I tested ZF with and without autoloading, I found that the results were phenomenal — ranging between 20% and 300% improvements.
So, the lesson learned? Always check the date on articles you quote, and, if they are old, check to see if anything has changed since then.
Oscar (@omerida) wrote at 10/16/2009 11:35 am:
Related to “Determine What Data Doesn’t Need To Be Real-Time”, but more specifically, I’d recommend you move any code that depends on a network call to a cron job. The classic example of this is using curl/include to get an RSS feed and parse it. This should be done outside the script that handles the HTTP request. Store the results in a cache/db and retrieve them quickly for display on your pages.
Adam wrote at 10/16/2009 12:27 pm:
@Matthew: I’ve read about those optimizations and IMHO that’s why in my test scenario autoload was as fast as require_once while loading all 1000 classes. It’s amazing how well it performs.
@Brandon and @Simon: I know the benefits of autoload – performance gain and ease-of-use.
My doubts are about the APC opcode cache and autoloading. In this case code is loaded at runtime, on demand, so does the opcode cache know how to deal with it? Maybe it caches all the code and uses cached blocks (representing classes) when needed?
Gyorgy (@feketegy) wrote at 10/16/2009 4:35 pm:
Thumbs up for this post!
Jay (@docmonk) wrote at 10/16/2009 5:57 pm:
A small clarification: S3 is not a content-delivery network. It’s just file storage. Just hosting your files on S3 won’t speed things up much, if at all.
CloudFront is the name of Amazon’s CDN. It requires S3, but it’s a different product.
Brandon Savage (@brandonsavage) wrote at 10/16/2009 8:32 pm:
Point taken. You’re quite right. I think you can deliver content directly out of the S3 storage network, but it’s not a CDN.
Samuel Folkes (@SamuelFolkes) wrote at 10/16/2009 10:52 pm:
Yet another excellent post, Brandon. I’m glad you wrapped the word “Micro” in the title in quotes, because compared to some of the ‘optimization strategies’ I see floating around the web, the optimizations you listed here are pretty huge. Just two points. The first is that in almost all my tests I have found autoloading to be faster than, or at least as fast as, multiple requires or includes using PHP 5.2+. The second is that I would add to your list that page load times can be significantly decreased if output compression is enabled, whether via output buffering (ob_start(‘ob_gzhandler’)) or php.ini (zlib.output_compression = On).
Jeremy Glover (@jagwire16) wrote at 10/17/2009 6:31 pm:
Great post, Brandon.
Do most web hosts support APC and memcached or do you have to go to more expensive hosts for those to be available?
While S3 isn’t technically a CDN, you can still get many benefits from using it as one. As was noted earlier, it offloads the request/response process from your server, allowing it to work on more important things. Also, the quality of S3 is so high that its response times are typically way faster than most people’s servers, especially if you’re using a cheap host like me. Check out how much S3 helped my website: http://www.jeremyglover.com/blog/2009/03/16/speed-up-that-cheap-website-with-cheap-amazon-s3/
Thanks again for all the pointers!
Tomaž Muraus (@KamiSLO) wrote at 10/18/2009 4:40 pm:
Most hosts use some kind of opcode caching system like APC or eAccelerator, but I don’t know any shared hosting which supports memcached.
Anyway, if you have a large and popular website and you are using some cheap shared hosting, wondering if your host supports memcached should be the least of your concern.
cocowool (@cocowool) wrote at 10/18/2009 11:22 pm:
If we use a hosting service, many optimizations have already been done by the company, so I think we should focus on our architecture. When we have more and more traffic, it’s time to consider optimization.
Zyx wrote at 10/19/2009 3:00 am:
I think it is worth pointing out that the autoloader itself should be fast in order to get the performance boost. Basically, it should use the information stored in the class name to locate the file, and it should not perform any complex calculations on it. Another good practice is not to use include_path, which is very slow when there are many paths (PHP must test all of them in the worst case) and the most frequently used ones are at the end.
LP wrote at 10/19/2009 4:28 am:
I think eliminating error logging is bad advice. Sure, it adds some overhead, but the security of your app matters much more than performance. You can’t be 100% sure that you’ve fixed all errors during development and testing; many errors show themselves only on real data, and browsing the error log can be the only way to find them when your app is live.
Brandon Savage (@brandonsavage) wrote at 10/19/2009 4:34 am:
You completely missed the point here. I never suggested getting rid of error logging. I suggested getting rid of as many errors as you possibly could so that they wouldn’t be logged. Not logging errors is dumb. But having known errors that you’re logging because you’re too lazy to fix them is even greater stupidity. Don’t do that.
LP wrote at 10/19/2009 5:01 am:
Ah, got it. Well, sure then, fixing errors is always good practice, and I can’t imagine a developer who allows himself to leave recurring errors unfixed in his app.
Brandon Savage (@brandonsavage) wrote at 10/19/2009 5:02 am:
Unfortunately I’ve seen lots of developers leave errors in their apps. They don’t error trap well enough, or they don’t consider the possibilities and they end up with notices or errors. It happens more than one might think.
Daniel (@coderguy64) wrote at 10/26/2009 11:33 am:
Sage advice. Thanks for all the tips. I will definitely bookmark this one for future reference.