Installing XHProf to profile Drupal on Ubuntu

Filed under: drupal,web development — jaydublu @ 2:48 pm

The following recipe was used to install XHProf on an Ubuntu server running Drupal 6, inspired by – my PEAR installer complained about a missing config.m4, and I couldn’t find Brian Mercer’s php5-xhprof Ubuntu package.

Download and manually install XHProf:

tar xvf xhprof-0.9.2.tgz
cd ./xhprof-0.9.2/extension/
phpize
./configure --with-php-config=/usr/bin/php-config
make
make test
make install
cd ..
cp -rp xhprof_html /usr/share/php/
cp -rp xhprof_lib /usr/share/php/
mkdir /var/tmp/xhprof
chown www-data /var/tmp/xhprof

Optional – install graphviz for the Callgraph functionality

apt-get install graphviz

Create /etc/php5/conf.d/xhprof.ini
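The ini contents didn’t survive here, but a typical xhprof.ini for this setup would be something along these lines – the extension filename and output directory are assumptions based on the build above and the /var/tmp/xhprof directory created earlier:

```ini
; /etc/php5/conf.d/xhprof.ini
; load the extension built above
extension=xhprof.so
; where profiler runs get written - must be writable by www-data
xhprof.output_dir=/var/tmp/xhprof
```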



Add an Apache alias so the xhprof_html UI can be served (e.g. in your vhost or an Apache conf.d file):

Alias /xhprof_html "/usr/share/php/xhprof_html/"

Restart Apache

apache2ctl graceful

Configure Drupal in /admin/settings/devel

xhprof directory: /usr/share/php
xhprof URL: /xhprof_html

Now to get my head around what it all means!

Drupal Views bulk export

Filed under: drupal,web development — jaydublu @ 5:06 pm

I’ve been using Drupal in earnest since the New Year, and I have to say that I wish I’d discovered it sooner. It’s by no means perfect, but I think perfection is impossible with CMS systems and it’s doing everything I need it to.

Initially I was sceptical about using the Views module – I was thinking that Views are just for people who can’t be bothered to do proper PHP, but then I got into them and discovered the power and simplicity, and I have to say my first approach to most content display challenges in Drupal now tends to use Views.

But this leads on to one of my biggest gripes about Drupal – how much of the site configuration is stashed in the database where there is little accountability or control – I much prefer having files in the filesystem where you can use Version Control.

Then I discovered bulk export, which addresses exactly that – and it makes a big difference to using Views safely. This is how I’ve done things several times now, quite successfully:

  • Carry out rapid development using the Views UI as normal.
  • Enable the Views Exporter module – this turns on the ‘bulk export’ tab under ‘tools’ in the Views admin
  • In Site Building > Views > Tools > Bulk Export, select the views that you want to export, and enter the name of the module you want to store the views code in and hit ‘export’
  • Follow the instructions on the next page to add snippets to the .info and .module files, and to create the file
  • In Site Building > Views > Tools > Basic, click ‘Clear Views Cache’ button
  • The rightmost link against each view you exported in the admin list should now have changed from ‘delete’ to ‘revert’ – go ahead and ‘revert’ all the views you exported

That’s it – simples!
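The snippets Bulk Export hands you end up looking roughly like this – module and view names below are hypothetical, and the real code comes from the export screen itself, but it shows where everything lives:

```php
<?php
// mymodule.module – declare that this module supplies views (Views 2 API).
function mymodule_views_api() {
  return array('api' => 2);
}

// mymodule.views_default.inc – the file Bulk Export asks you to create.
function mymodule_views_default_views() {
  $view = new view;
  $view->name = 'mymodule_news';   // hypothetical view name
  // ... the rest of the exported definition, pasted verbatim ...
  $views[$view->name] = $view;
  return $views;
}
```

Because the definitions are now plain PHP inside your module, they go into version control along with everything else.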

If you need to make any changes to exported Views, there are two approaches:

  1. Modify the exported PHP directly in the file – these changes should be immediately reflected on the site, but you might need to hit ‘Clear Views Cache’ just to be sure.
  2. Use the Views UI, and re-export the views again, save the updated file, then hit ‘revert’ on the list to use the file version

I’ve found it very useful, when two or more environments are being used for development (e.g. dev and live), to be able to spot that a local change has been made to an exported view by the presence of the ‘revert’ link – running diff on exports gives you a clue as to which setting changed – a bit fiddly, but not as bad as going through every setting in the UI one at a time to spot the difference. It’s also very reassuring to have Subversion (or your version control system of choice) keep all your various environments in sync.

The final trick to pass on, which has made my life much simpler, is proper use of names and tags in views – if all the views for your module share the same name prefix and the same tag, it’s much easier to spot them in lists, and to select them all together when you come to re-export them – it’s very embarrassing if you miss one and a whole section disappears from the site! If you need to rename or add tags, it’s easy to spot in the exported code where they can be changed – and that’s something you can’t do through the UI.

Now if only I could get more of Drupal’s config in the filesystem this way …

When to rebuild?

Filed under: life,tinkering — jaydublu @ 5:05 pm

This post can relate to so many things – stylesheets, PHP code, glass fibre moulds – at some point, when you’ve gone through a few iterations of an agile-like process, you start thinking “If I’d known I was going to end up doing this, I would have started differently.”

When you’re so close to completing, yet you know deep down it’s getting messy, overcomplex and perhaps not as ‘nice’ as it could be, the decision of whether to leave it as-is because it works, or to take the opportunity to rebuild everything before you go any further, becomes almost unbearable.

But then, I suppose, isn’t that what the perpetual beta is all about – eventually you get the opportunity to go for v2.0 (or even 1.0)? In the meantime, if it’s not doing any harm, leave it alone – you know it will only open up another can of worms if you start from scratch again!

Mobile device detection

Filed under: mobile,web development — jaydublu @ 7:36 pm

Is Christmas a time for blog posting?

Certainly a chance to review the year, and catch up on things that have been missed. I’ve been lax in not keeping up with progress in mobile content delivery for instance, but it’s not out of choice.

Scanning some emails and posts just now I came across a summary of what’s been happening over at mobiForge, and I feel unbearable guilt that I haven’t tried their DeviceAtlas yet. But I do note with some satisfaction that in one article they’re plugging the use of a lightweight device detection function from Andy Moore that seems to do a similar thing (but no doubt much better) to what I was playing with a couple of years back.

New Year’s resolution – make the time to keep up with this stuff, ‘cos it interests me and I’ll need it one day! (see the Perl xkcd strip!)

PHP ob_gzhandler “Content Encoding Error”

Filed under: web development — jaydublu @ 3:56 pm

You know sometimes how the simplest little issue holds you up for days? Well I’ve just had a doozy!

I’m maintaining an inherited CMS application, and I still have to trust that some of the inner workings just ‘work’, because it’s all a bit involved and I’m not getting paid to rip it all apart for the sake of it.

We’re deploying the app onto new servers – something that I’ve done a few times so I wasn’t scared – but this time I just couldn’t get it to work – calling up pages I was getting “Content Encoding Error” messages from Firefox, and generally not helpful responses from other browsers.

I knew the app was using ob_gzhandler to GZIP-compress output where browsers support it, and if I commented out the gzip bits it worked – but that wasn’t something I wanted to do, and I was determined to find out why the same codebase wasn’t working in this environment, which had almost identical configuration to others where it did work.

To cut a long story short – there’s a configuration include, called at the start of each page, that’s not under version control (for obvious reasons), and on this server it had somehow acquired a couple of line breaks after the closing PHP tag, so it started the HTML output early. Nothing to do with double encoding or the other potential issues I found when Googling.
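For illustration, a sketch of the failure mode (the settings are made up; the point is the whitespace): anything after a closing PHP tag is emitted as output the moment the file is included, before the page gets a chance to call ob_start('ob_gzhandler'), so the browser receives bytes that don’t match the gzipped body it was promised.

```php
<?php
// config.php – simplified, hypothetical settings.
// The bug: this file ended in "?>" followed by a couple of stray
// newlines. PHP sends anything after the closing tag verbatim, so
// those newlines went out before ob_start('ob_gzhandler') ran,
// corrupting the compressed response.
$db_host = 'localhost';
$db_name = 'cms';

// The usual prevention: leave the closing "?>" off pure-PHP include
// files entirely, so trailing whitespace has nothing to hide behind.
```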

Ubuntu’s Subversion

Filed under: tinkering,ubuntu — jaydublu @ 5:41 pm

I’ve a minor gripe about Ubuntu – only ‘cos it’s caught me out a couple of times.

My local dev server is running Ubuntu Gutsy, and I do run apt-get upgrade etc. every now and then to keep things current.

I tend to keep most of the sites I’m working on checked out of the repository somewhere Apache can get to them, so I can see the rendered output easily; to make life easier I also access the server’s webroot over an SMB share from my laptop.

Life was great until I upgraded my laptop’s TortoiseSVN to 1.5.0-something-or-other, as it keeps nagging me to do – but if I’m careless enough to do an update on a remote working copy using Tortoise, it upgrades it to the new 1.5 format, which means it can’t be used by Subversion on the box’s own command line, as the Gutsy Subversion package isn’t up to 1.5 yet.

Twice now I’ve had to check out a fresh working copy to overcome this problem, and to save any future accidents, I’m downgrading my Tortoise to a pre-1.5 version – I looked at trying to get an ‘experimental’ Debian package installed but it looked far too risky.

Using Amazon S3 to deliver flv content

Filed under: web development — jaydublu @ 4:59 pm

I’ve used Amazon S3 for some time now on and off – it’s a great, fast, cheap service, but it does have its own quirks.

Developing a site that hosts a Flash movie displaying a series of flv encoded videos, we decided to host the flv content on S3 to save on bandwidth costs in case we got mass amounts of traffic.

The first problem we encountered was performance – with a bucket created in the default location, the videos were loading slower than they were playing; after switching to a bucket created in Europe, speed was no longer a problem.

The last issue, which really took some head scratching, was that some testers reported the videos weren’t loading – they tended to be testing from behind corporate firewalls.

It would seem that when we used the S3Fox Firefox extension to upload the files, it didn’t know what flvs were, so it didn’t set a Content-Type. S3’s default in that case is not to send a Content-Type header, which it would appear these obtuse firewalls didn’t like, so they blocked the response.

The solution was to knock up a PHP-based upload script using the Amazon S3 PHP class written by Donovan Schonknecht, specifying ‘video/x-flv’ as the Content-Type – works a treat.
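The relevant call looks roughly like this – bucket, key and credentials are made up, and the argument order is from memory of Schonknecht’s S3 class, so check against the class’s own documentation:

```php
<?php
// Hypothetical upload script using the standalone S3 PHP class.
require_once 'S3.php';

$s3 = new S3('ACCESS_KEY', 'SECRET_KEY');

// Pass an explicit Content-Type so S3 serves the header
// the corporate firewalls were insisting on.
S3::putObject(
    S3::inputFile('promo.flv'),             // local file to upload
    'my-video-bucket',                      // bucket (created in the EU)
    'videos/promo.flv',                     // key within the bucket
    S3::ACL_PUBLIC_READ,                    // world-readable
    array(),                                // no x-amz-meta headers
    array('Content-Type' => 'video/x-flv')  // the crucial header
);
```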

Solve the problem to succeed

Filed under: rants,web development — jaydublu @ 8:20 pm

A piece by Raphael Pontual in this month’s .net magazine led me to think about what it is I do, and how I expect to compete in a marketplace filled with those who spend far more time than me keeping up with current techniques and technologies.

I’m only just starting to really get into jQuery and design patterns, and I have to make an effort not to keep reverting to tried-and-trusted old-skool strategies that have worked for me in the past.

The piece I believe was more aimed at design, but a relevant excerpt is: “It might seem crazy, but the older and busier you become, the less time you have to find out about the latest trends and adapting to the new graphics software. Meanwhile, there’s always a new generation that spends hours learning everything about the latest creativity suite.”

I’m a developer, not a designer, but I get what he’s saying, that successful professionals concentrate on identifying and solving problems, rather than just throwing gadgets and glitz at a project. A design for the sake of it is nonsense, it has to solve the problem, and the best solution is often the simplest whether it uses the latest whizzbang2.0 bling or not.

A good friend Sujvala kindly left a comment on an old post of mine ‘I want to be Clarkson‘ and one opinion he has is that I have an ‘infantile enjoyment of new toys‘ yet I’m ‘old enough to keep the safety catch on whatever is being tinkered with‘. I really like that.

Yes I do like toys, especially well thought out ones, but there’s a big difference between a toy and a tool. A tool has to earn a living.

The challenge when developing for the web, or making a fibreglass mould, or fitting a satellite dish (or most of my other previous employments) is to identify what it is you’re trying to do, what the challenges are, and what the most appropriate methods are to solve those problems.

Tried and tested (and safe) often beats bleeding edge, although you always have to be open to the idea. As Confucius said, “It does not matter how slowly you go so long as you do not stop.”

Using Subversion to version control websites

Filed under: web development — jaydublu @ 11:34 am

I can remember when I first started developing oh-so-many years ago, instinct and common sense told me I should implement some sort of version control, even though at the time I was writing simple MS-DOS Basic applications, and working by myself. At strategic intervals, if I remembered, I took a copy of my source code, renamed it and put it somewhere ‘safe’. In reality I didn’t, but I knew I should so that if all else failed I had something to go back to.

As my applications started getting more complicated, as the business risk of getting it wrong increased, and especially when I started working collaboratively with other developers, the need for a ‘better way’ grew. I was aware of systems like CVS, but they didn’t seem entirely suited to our web development environments and the challenges we faced. We needed something which didn’t unduly get in the way of our work, and helped with as many problems and risks as possible:

  • Audit trail – who changed what, and when, to get the code into its current state?
  • Comparison – what changed between two versions?
  • Collaboration – assisting multiple developers working on the same files
  • Branching and Merging – the ability to create parallel threads of development, and combine them back together if required
  • Disaster recovery – assistance reconstructing sites in a hurry
  • Code migration – keeping environments in sync, and moving changes between them

The break came, as with most things, with the opportunity to spend a bit of time ‘playing’ with possible solutions. First up was giving Subversion a go – although similar in many ways to CVS, I found it much more suitable for my needs: it was more usable, a ‘revision’ represents the entire codebase rather than a specific file, and commits are atomic, meaning that if something fails the entire transaction is cancelled rather than being left ‘broken’ or inconsistent.

Now I had a tool I liked, how to apply it to our needs? My primary concern was moving sets of amends between environments – I wanted to get this automated and as safe and foolproof as feasible. Yet it had to be an easy process that anyone (within reason) could do.

So the files on the webserver, or any other environment which needs to be covered by version control, should be a working copy – that is, checked out from a repository using a Subversion client. The new method of getting changes to the live site is to make them in a dev environment, commit them to the repository (with a suitable log message, of course) then ‘svn up’ the live site.

One stumbling block was that different environments tended to need different configuration – whether it was filesystem path, url of the web root, or database connection details. Like all well structured applications this configuration was already in a single global include file – but this was a very risky file to have under version control. It was too easy to have a locally modified file overwritten, or even worse to accidentally commit a dev server’s config and update (and break) a live site.

The solution – don’t version environment configuration!

Put all your environment-specific code in a file named, for instance, .config.local (assuming Apache is configured not to serve dot-prefixed files – serving it would be a bit of a security loophole), placed in the site’s DOCUMENT_ROOT, and set a global ignore on *.local in every svn client’s config. Now in every script you can ‘include ($_SERVER[‘DOCUMENT_ROOT’].’/.config.local’);’ and you’re away, safe in the knowledge that you don’t have to keep an eye out for this dangerous file.
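For reference, the global ignore lives in each Subversion client’s runtime config (TortoiseSVN has an equivalent ‘Global ignore pattern’ setting). One caveat: setting global-ignores replaces the built-in default patterns, so append *.local to whatever list your client ships with rather than just this:

```ini
; ~/.subversion/config on each machine with a working copy
[miscellany]
; keep your client's existing default patterns and add *.local
global-ignores = *.o *.lo *.rej *~ .#* *.local
```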

Of course that’s a simplification of the task for safe management of sites, but it demonstrates the point. Neither does it take into account all the potential issues surrounding managing websites – changes in the unprotected config file, filesystem permissions and database schemas to name but a few; but with a bit of thought and some good working practices Subversion can usually help out to some degree, if only to keep manual documentation of such things in the same place as the code.

But that’s another post someday … what is for certain is that properly used and suitably configured, Subversion is most definitely worth the effort, and it could save your life one day – or at least your job.

Printing jQuery manipulated pages

Filed under: css,jquery,web development — jaydublu @ 5:26 pm

I’m working on a page which uses jQuery to show one block of content at a time from a group of blocks, managed by a navigation. It’s working very nicely, but getting to the later stages I started trying to get print styling working.

Ordinarily, I’d attach a print stylesheet and hide irrelevant things like navigation, and show anything I want printed that is hidden, as per Eric Meyer’s classic article in A List Apart, ‘Going to Print‘ – in this case it would be ideal to have all the content blocks printed. But jQuery has rewritten all the styling, so the print stylesheet loaded with the initial page rendering can’t help. Or can it?

After much research, head scratching and good old ‘suck it and see’ experimentation, I’ve determined that yes, the theory still holds – you just have to apply a bit more brute force.

jQuery manipulates page styling through inline styles. Linked stylesheets can still override inline styles if a declaration is given the ‘!important’ suffix. So, for instance, <style media="print">.switchable {display: block !important;}</style> should keep content hidden by $(".switchable").hide(); visible when printing.
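Putting it together – a minimal sketch, with illustrative class names and markup (the selectors and the nav id are assumptions, not from the actual site):

```html
<!-- Print rules beat jQuery's inline display:none via !important -->
<style media="print">
  .switchable { display: block !important; } /* print every block */
  #nav { display: none; }                    /* nav is useless on paper */
</style>

<ul id="nav">...</ul>
<div class="switchable">First block</div>
<div class="switchable">Second block</div>

<script>
  // On screen, show only the first block; the print stylesheet
  // above overrides these inline styles at print time.
  $('.switchable').hide();
  $('.switchable:first').show();
</script>
```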
