I've been developing a workflow for practicing mostly automated continuous deployment on a PHP project. I'd like feedback on possible process or technical bottlenecks in this workflow, suggestions for improvement, and ideas for automating it further and making it easier for my team to use.
Core components:
- Hudson CI server
- Git and GitHub
- PHPUnit unit tests
- Selenium RC
- Sauce OnDemand for automated, cross-browser, cloud testing with Selenium RC
- Puppet for automating test server deployments
- Gerrit for Git code review
- Gerrit Trigger for Hudson
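The commit-stage checks are plain PHPUnit tests that Hudson runs on every pushed change. As a minimal sketch of what that stage exercises (the `CartTest`/`Cart` names are hypothetical stand-ins, not actual project code):

```php
<?php
// Hypothetical commit-stage unit test: fast and free of browser or
// network dependencies, so Hudson can run it on every pushed change.

class CartTest extends PHPUnit_Framework_TestCase
{
    public function testAddingAnItemIncreasesTheTotal()
    {
        $cart = new Cart();            // hypothetical class under test
        $cart->add('widget', 2, 9.99); // sku, quantity, unit price

        // assertEquals($expected, $actual, $message, $delta)
        $this->assertEquals(19.98, $cart->total(), '', 0.001);
    }
}
```

The Hudson job would invoke something like `phpunit --log-junit build/logs/junit.xml tests/` so the results feed its test-trend reports.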
EDIT: I've changed the workflow graphic to take ircmaxwell's contributions into account by: removing PHPUnit's extension for Selenium RC and running those tests only as part of the QC stage; adding a QC stage; moving UI testing after code review but before merges; moving merges after the QC stage; moving deployment after the merge.
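To make the "Selenium only at the QC stage" split concrete, one option is to tag the UI tests with a PHPUnit `@group` annotation and have each Hudson job include or exclude that group. Here's a sketch, assuming a hypothetical `CheckoutUiTest` that drives Selenium RC through the `Testing_Selenium` PEAR client (since PHPUnit's Selenium extension is out of the picture):

```php
<?php
// Hypothetical UI test, kept out of the commit stage via @group.
// Assumes the Testing_Selenium PEAR client for Selenium RC rather
// than PHPUnit's (now removed) Selenium extension.
require_once 'Testing/Selenium.php';

/**
 * @group selenium
 */
class CheckoutUiTest extends PHPUnit_Framework_TestCase
{
    private $selenium;

    protected function setUp()
    {
        // Browser and base URL; the host/port arguments (omitted here)
        // would point at Sauce OnDemand in the QC stage.
        $this->selenium = new Testing_Selenium('*firefox', 'http://staging.example.com/');
        $this->selenium->start();
    }

    protected function tearDown()
    {
        $this->selenium->stop();
    }

    public function testCheckoutPageLoads()
    {
        $this->selenium->open('/checkout');
        $this->assertContains('Checkout', $this->selenium->getTitle());
    }
}
```

The commit-stage job would then run `phpunit --exclude-group selenium`, while the QC-stage job runs `phpunit --group selenium` against Sauce OnDemand.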
This workflow graphic describes the process.

My concerns / thoughts / questions:
- Overall difficulty using this system.
- Time involvement.
- Difficulty employing Gerrit.
- Difficulty employing Puppet.
- We'll be deploying on Amazon EC2 instances later. If we set up Debian packages with Puppet and deploy to Linode slices now, could a deployment that works on Linode break on EC2? Should we instead be doing our builds and deployments on EC2 from the get-go?
- Another question re: EC2 and Puppet. We're also considering Scalr as a solution. Would it make as much sense to avoid the overhead of Puppet here and invest in Scalr instead? I have a secondary (ha!) concern about cost: the Selenium tests shouldn't run so often that EC2 build instances are up 24/7, but paying for a full hour of EC2 usage for a five-minute build seems a bit much.
- Possible process bottlenecks on merges.
- Could "A" be moved?
Credits: Portions of this workflow are inspired by Digg's awesome post on continuous deployment. The workflow graphic above is inspired by the Android OS Project.