I recently bought a RAID enclosure for my backup drives. One of the nice features is that it powers down the drives and fans when there is no activity for five minutes.
I use burp for backing up my laptops. Burp has a timed backup mode, in which a client can connect to the server and start a backup if it's been too long since its last backup. This allows, for example, my wife to suspend her laptop whenever she wants. When her laptop is running, burp checks in with the server every 20 minutes to see whether a backup should be started. If she suspends it in the middle of a backup, the backup resumes during a later check.
I've configured burp to only allow backups outside of my normal working hours so I don't need to hear the drives grinding away while I'm at my desk. (Pulling myself away from my desk at night is the next challenge.)
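In burp, the backup window is controlled by the timer script's arguments in the server configuration. The snippet below is a sketch based on burp's example config; the path and the specific hours are illustrative, and the details may vary by burp version:

```
# burp server config (sketch; check your version's documentation)
timer_script = /usr/share/burp/scripts/timer_script
# start a backup if the last one is more than 20 hours old...
timer_arg = 20h
# ...but only during these hours (nights and weekday evenings here)
timer_arg = Mon,Tue,Wed,Thu,Fri,00,01,02,03,04,05,19,20,21,22,23
timer_arg = Sat,Sun,00,01,02,03,04,05,06,07,08,17,18,19,20,21,22,23
```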
The only problem with this configuration is that when a client connects, the backup directory is accessed to figure out whether to start a backup, causing the drives in my RAID enclosure to spin up. The solution was to not have burp touch the backup directory until it's starting a backup.
When a client connects, burp creates a lockfile. By default, this lockfile is created in the backup directory, but there is an option, client_lockdir, to use a different directory. Set this in the server's configuration file.
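In my case, that means pointing the lockfiles at a directory on a disk that's always spinning (the path below is just an example):

```
# in the burp server config: keep lockfiles off the RAID enclosure so
# a client connection doesn't spin up the backup drives
client_lockdir = /var/run/burp
```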
The next change required is to have the timer script, which checks whether the server should start a backup when the client asks, use a separate directory to check the state of existing backups. First, enough data from the backup directory needs to be copied to a separate directory each time a backup is completed. To do this, I use the server_script_post option, which runs a script each time a backup completes. The script copies the current symlinks and files from the backup directory to a separate state directory. I updated the timer script to check this state directory. Review all of the changes at
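As a rough illustration of what the post-backup script does, here is a self-contained sketch. The directory layout and file names below are made up for the demonstration; they are not burp's real on-disk format.

```shell
# Fake backup layout, standing in for a client's backup directory on
# the RAID enclosure, plus a state directory on an always-on disk.
backup_dir=$(mktemp -d)
state_dir=$(mktemp -d)
mkdir "$backup_dir/0000005"
echo "0000005 2013-09-05 20:00:00" > "$backup_dir/0000005/timestamp"
ln -s 0000005 "$backup_dir/current"

# The actual post-backup work: record which backup "current" points at
# and copy its timestamp, so the timer script can answer "when was the
# last backup?" without ever touching the backup directory.
readlink "$backup_dir/current" > "$state_dir/current"
cp "$backup_dir/current/timestamp" "$state_dir/timestamp"

cat "$state_dir/current"
cat "$state_dir/timestamp"
```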
As described in the previous post about our development process at GreatVines, we use LiquidPlanner for project management. Estimating how long tasks will take is important in most software development projects (and generally any project involving more than one person).
LiquidPlanner provides a great way of estimating tasks using an 80% confidence interval, but estimating tasks like this is a new concept for most developers. Below is a transcript of a conversation with a developer who recently joined our project. We were able to get from "I have no idea how long it will take" to a decent estimate in about half an hour, but it should only take a minute or two for the next task.
2013-09-05 11:16:52: <Christian G. Warden> have you guys reviewed all of the tasks for the september release?
2013-09-05 11:20:44: <Ben H> yes, but many of the tickets refer to parts of the code I have no experience with so I have no idea how much work they will be.
2013-09-05 11:23:11: <Christian G. Warden> you can put wide estimates on tasks like that. can you give me an example, and i'll walk you through it?
2013-09-05 11:25:15: <Ben H> 10631525- Make Modal Form Fields scroll-able: I have no idea where contacts are viewed in the application, I haven't had any reason to look at their controller/model
2013-09-05 11:25:52: <Ben H> so any time estimate I would come up with would be a sheer guess
2013-09-05 11:26:38: <Christian G. Warden> ok, no problem. what's the likelihood it will take you more than 2 weeks to complete the task given what you currently know about the app?
2013-09-05 11:27:21: <Ben H> darn near 0% I would figure
2013-09-05 11:27:26: <Christian G. Warden> phew :)
2013-09-05 11:27:31: <Christian G. Warden> how about 1 week?
2013-09-05 11:28:29: <Ben H> that's still very high, so 2%?
2013-09-05 11:28:38: <Christian G. Warden> how about 3 days?
2013-09-05 11:28:40: <Ben H> given that it looks like pure GUI work
2013-09-05 11:30:03: <Jim Thompson> yeah Ben the problem is that one customer has put so many fields in the FieldSet that you can't see the entire modal, the Save and Cancel buttons scroll below the page
2013-09-05 11:30:33: <Ben H> I guess 3-4 days (if it is pure GUI work then 1-2 should be more then enough, unless there's something unseemly complex)
2013-09-05 11:30:45: <Jim Thompson> I am going to move this task beneath the Performance items Christian
2013-09-05 11:30:47: <Jim Thompson> (and Ben)
2013-09-05 11:32:20: <Christian G. Warden> would you say there's a 10% probability that it will take more than 4 days?
2013-09-05 11:33:34: <Jim Thompson> at 12-32 hours it might have to drop off the list, or at least to the bottom
2013-09-05 11:34:40: <Christian G. Warden> jim, we'll go through the tasks and reprioritize after the estimates are updated.
2013-09-05 11:34:57: <Jim Thompson> ok, I also moved "New Task" and "New Event" to October, which is when we promised it
2013-09-05 11:35:39: <Christian G. Warden> you're interrupting my estimation lesson :)
2013-09-05 11:36:02: <Ben H> (yeah, he's been high balling for tutorial purposes)
2013-09-05 11:36:11: <Jim Thompson> ok im out
2013-09-05 11:37:50: <Christian G. Warden> no, i want to come up with realistic estimates. if they're wide right now because of uncertainty as to what it will take to complete them, that's fine. once we have estimates, we can decide whether to invest time to come up with tighter estimates.
2013-09-05 11:38:44: <Christian G. Warden> so, do you think 4 days as a high estimate is accurate?
2013-09-05 11:39:29: <Ben H> for this example I put 12-24, it's really wide and most likely most of that will be learning about how that GUI element is working and making the tests
2013-09-05 11:39:57: <Ben H> the range is more to show that it could go horribly wrong if there is something there I have no idea of
2013-09-05 11:40:24: <Ben H> but 12 still seems really high
2013-09-05 11:40:30: <Christian G. Warden> what's the likelihood that you will complete the task in 2 hours?
2013-09-05 11:41:14: <Ben H> the likihood I'll complete it in 6 hours I would meet the 10% threshold
2013-09-05 11:41:26: <Ben H> *likelihood
2013-09-05 11:41:38: <Christian G. Warden> ok, so, let's make it 6-24
2013-09-05 11:45:29: <Christian G. Warden> and you can check yourself by imagining we're going to a casino. you can choose from two wagers: 1) roulette - there are 10 numbers on the roulette wheel. if 1 or 10 comes up, you lose. if 2 through 8 comes up, you win $100. 2) the bookmaker - if you complete the task in less than 6 hours, you lose. if you complete the task in more than 24 hours, you lose. if you complete the task between 6 and 24 hours, you win $100. which wager do you want to take?
2013-09-05 11:47:25: <Ben H> bookmaker, it also means I can spend some speculative time trying different approaches to find one that solves the problem better
2013-09-05 11:47:34: <Ben H> if things go well
2013-09-05 11:49:16: <Christian G. Warden> so it's not really an 80% confidence interval. it might be more like a 90% confidence interval. you should make the range narrower.
2013-09-05 11:49:52: <Ben H> how then do you show the risk if (I doubt in this case) that 10% could be really bad
2013-09-05 11:52:19: <Christian G. Warden> it's ok if there's an outlier occasionally. we want to get an accurate estimate across all of the tasks.
2013-09-05 11:56:09: <Christian G. Warden> you should narrow the range until you're ambivalent between taking the two wagers. and widen the range when you'd rather play roulette.
2013-09-05 11:58:02: <Christian G. Warden> but we've already gone from "i have no idea how long it will take" to "i'm 90% confident, it will take between 6 and 24 hours".
2013-09-05 11:59:40: <Ben H> I'm just worried it's misplaced confidence
2013-09-05 12:00:03: <Ben H> either way, given the range, I am (90%) confident that I can get it done within the range
2013-09-05 12:07:07: <Christian G. Warden> nobody dies if you're wrong. you're expected to be wrong 20% of the time (once you narrow it to an 80% confidence interval). if we decide we need to know with more certainty when something will be done, we might ask you to spend an hour to do a little research so you can come up with a tighter estimate, after which you would adjust the high and/or low estimate. for now, though, estimate the tasks based on what you currently know. 2013-09-05 12:07:50: <Ben H> ok
The roulette idea comes from Douglas Hubbard's How to Measure Anything.
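(One aside on the wheel: for the bet to be an 80% one, it should win on 2 through 9, not 2 through 8, since only 1 and 10 lose. A quick simulation, my own sketch rather than anything from the book, confirms the odds:)

```python
import random

# Hubbard-style "equivalent bet": a wheel with numbers 1-10 where only
# 1 and 10 lose pays off 80% of the time, matching an 80% confidence
# interval. Simulate it to check.
random.seed(0)  # fixed seed so the run is repeatable
trials = 100_000
wins = sum(1 for _ in range(trials) if random.randint(1, 10) not in (1, 10))
print(wins / trials)  # a value close to 0.8
```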
In response to a customer asking for roadmap details, and not finding the project artifacts they expected based on how they manage internal projects, I've written up a description of the development methodology at GreatVines.
The GreatVines development methodology follows from a few high-level axioms, draws from multiple software development and project management methodologies, and makes use of modern tools for collaboration. The axioms are:

1. Requirements change.
2. Priorities change.
3. We want to deliver quality software.
4. Delivering updates is cheap.
The first two items, changing requirements and priorities, are related, and mean that we don't expect to be able to predict the future more than a few months out. We generally have a good sense of which new features customers want to make use of next month, but not those that will be most useful next year. Additionally, as we deliver new features to customers, requirements for enhancements to these features become apparent as we get feedback from them.
Item 3, delivering quality software, stands on its own. These first three items reflect what we think are the most important principles from the Agile Manifesto.
Item 4, that the delivery of updates is cheap, follows from the fact that we are a software-as-a-service company. The days of shrink-wrapped software are gone, and updates to both our web-based and mobile applications are delivered without end-user intervention. Because delivering updates is cheap and requirements change, we are able to deliver multiple iterations of new features, and adjust requirements as we discover exactly how customers are using our software and how they would like to use it.
Therefore, we aim to deliver useful, working software frequently under the assumptions that requirements and priorities change regularly, and that delivering updates to our software does not impose a large cost on either GreatVines or our customers.
We generally organize releases into three- to four-week cycles, similar to Scrum sprints, but try to keep the next two to three releases planned as well so we're looking two to three months down the road. Like scrum and kanban, we start from a backlog of features that we would like to implement. We bring together team members from development, support, implementation, and sales when prioritizing the tasks for future releases.
Each release typically has a combination of small tasks, which may already describe a combination of technical requirements and planned implementation details, and larger tasks, the requirements for which need to be further elaborated.
In preparing the bigger tasks for an upcoming release, we work to ensure the requirements are clearly defined so development can proceed. Features that require a new user interface or significant changes to an existing interface are mocked up. There are often a few iterations of mockups, as questions are raised and addressed, and the requirements and mockups updated.
Depending on the feature, we occasionally share the mockups with existing or potential customers, and solicit their input before starting development, but we generally prefer to implement a working interface, then adapt it as we get feedback from real use.
As the requirements are fleshed out, we break them down into development tasks, referencing the mockups and relevant requirements. Each task is estimated by the developer that will be responsible for writing the code.
The tasks within a release are prioritized so if there is any slippage, the highest priority tasks get completed first. The assignment of prioritized tasks to developers ensures that each developer knows which task they should be working on at any time, and helps collaboration by letting all members of the team know who is working on what and how the priorities are defined. As in kanban, it also limits the amount of work-in-progress.
We have found that testable code is better code. If it's hard to test, it's probably poorly designed, and needs to be decomposed.
During peer review, both the tests and implementation code are reviewed. We look for areas in which the design of the software can be improved as well as ensuring that the code follows our internal coding conventions to ease future maintenance.
With our mobile app, we have started writing automated functional tests as well as unit tests. The goal is to have the tests fully specify the functionality of the application.
When a new version has been released, our release notes are updated.
There are a couple important tools we use to support our development process. We use LiquidPlanner as our project management tool. Each release is organized as a package. We have our web-based application and mobile applications organized as separate projects, broken down into broad features using folders within the projects. We pull tasks from both projects into a release package.
Mockups for new interfaces that will be built as part of the release are done at the wireframe level, as one would do in Balsamiq Mockups. (Our mockups are actually often done as Google Drawings because Jim typically does them, and he has somehow become proficient at creating them quickly that way.)
The use of ranged estimates in LiquidPlanner allows us to organize releases with a fair amount of confidence in our ability to hit delivery dates. Estimating software development tasks is a continuing challenge, but we find the confidence interval-based approach superior to the point estimates often used in project management tools and the story-point estimates used in scrum. In the backlog, we often put wide estimates on broadly defined tasks, when the value of the information that more granular requirements and estimates would provide is not yet clear. When organizing a release, we break down big tasks into smaller ones, generally around one to six hours each. Smaller tasks are easier to estimate accurately, and estimates improve with practice.
LiquidPlanner also serves as a collaboration tool, minimizing project management overhead. As discussed above, the priority of tasks is unambiguous. We keep estimates of remaining work on tasks up to date, and LiquidPlanner automatically updates the schedule. Discussion about task details happens within LiquidPlanner so if there's a change to or clarification of requirements, there's one place to look. (When real-time discussion is required, we generally use Google Hangouts, then record the result of the conversation in LiquidPlanner.) Using LiquidPlanner mostly eliminates the need for "what's the status?" discussions.
We also track the status of code reviews and whether the code for each task has been merged within LiquidPlanner.
We use git and GitHub for version control. Our use of GitHub follows the way most open source projects are organized. We have a greatvines organization, which contains the primary repo, from which we package our software. Each developer has a fork of the repo. Developers create feature branches for individual LiquidPlanner tasks, and open a pull request against the greatvines repo when the code is ready to be reviewed. The pull request is noted in LiquidPlanner and the task is moved to a ready-to-review package, which puts the task on hold (not scheduled for further development).
Peer review is done within GitHub, using inline comments on the open pull request. If the task needs further work, the task is moved out of the ready-to-review folder in LiquidPlanner so it's scheduled for additional work. When the pull request is merged, the task is marked done.
There are a couple areas in which we are planning improvements to our development process around testing. We are using jasmine and jasmine-jquery to do some functional testing of our mobile app, but it's not exhaustive, and we don't have anything similar in place for our Visualforce interfaces. Therefore, we augment our automated testing with manual testing. We would like to add more robust automated functional/acceptance testing, perhaps using a browser automation tool like selenium or casperjs.
As we further automate testing, we also plan on introducing continuous integration. Travis CI looks promising here, with tight integration with GitHub, and extensive use among open source projects already.
There are a couple areas in which we might experiment with changes in process in the future. One is in choosing which features should go into each release. We currently use a consensus building approach in which we informally consider the value of possible enhancements to our customers. I'd like to investigate whether there might be gains to be had from the use of more formal Decision Analysis practices.
We might also experiment with scrum-style user stories for documenting requirements. For the most part, our lack of a formal requirements documentation structure has not been a problem, and we are able to turn requirements into technical designs easily. But in cases where requirements are very broadly defined, the "As a user, I want" structure may prove valuable. Using a standard structure may also ease on-boarding of new employees and coordination with any contract or outsource developers with which we work.
Every time I tried to start an Android emulator, the window would appear briefly, then disappear, and I'd get the error, "emulator window was out of view and was recentered". The solution is to edit emulator-user.ini, found under $HOME/.android/avd/<image name> by default on Linux, and set window.x and window.y to 0.
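The relevant part of emulator-user.ini ends up looking like this:

```
window.x = 0
window.y = 0
```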
The solution is on stackoverflow, but Google couldn't find it.
Update 10/05/2012: I've found this solution to work inconsistently. A better solution under xmonad is to set up a ManageHook to match the emulator window and move it to the floating layer:
myManageHook = composeAll [ className =? "emulator-arm" --> doFloat ]
The solution I came up with uses VirtualBox, a Windows 7 image, Google Drive for Windows, a tool from Microsoft called SyncToy, and the Windows task scheduler.
Based on my previous Force.com app for managing Salesforce Cases on a Kanban board, I've built Kanban for Salesforce, a project management tool on the Force.com platform. It uses a custom object for cards so it's not tied to an existing Salesforce native object. Cards can be organized into sprints.
I've updated my kanban tool for Salesforce Cases to allow filtering Cases by multiple criteria. In addition to Owner, you can now filter by the age of the Case, the Priority, and the Type.
I've started building a tool to manage Salesforce Cases on a kanban board. Each case is a card, and each Status picklist value is a list on which the cases can be organized. You can drag cases around to prioritize and track progression.
The lists are organized left-to-right in the same order that the picklist values are arranged top-to-bottom. There's a custom setting to treat the first picklist value, typically "New", as your backlog. The backlog is hidden by default, and can be shown by clicking the arrow on the top left.
If you use Salesforce Cases, please try it out. I would love to get some feedback and suggestions for features. It's still early in development, but I was able to get off to a good start on the front-end by adapting code from huboard, a great tool for managing github tickets.
This is also my first time trying to build my own commercial application on the force.com platform. Thoughts from anyone who has sold applications on the AppExchange would also be appreciated.
Update 2012-05-27: Added better handling of many open Cases.
Update 2012-06-24: Now supports filtering by multiple criteria.
Update 2012-08-11: I've developed a new project management tool not tied to native Cases, Kanban for Salesforce.
Static Resources in Salesforce are often zip files containing multiple files. If you're keeping static resource under version control using git, here's how to get useful diffs for them whether they are zip files or single text files.
Create a shell script which will identify whether a file is a zip file or not. If so, it should unzip the contents to stdout; otherwise, it should just output the contents of the file. I called it resource-conv:

#!/bin/bash
file -b --mime-type "$1" | grep -q application/zip && unzip -c -a "$1" || cat "$1"

(The -q keeps grep's match out of the diff output, and quoting "$1" handles file names containing spaces.)
Tell git to use this conversion utility for a new "resource" diff driver:
$ git config [--global] diff.resource.textconv resource-conv
Tell your repo that .resource files should use the "resource" diff driver by adding the following to your .gitattributes:

*.resource diff=resource
The following steps can be used to create a debian package repository easily and host it on Amazon Web Services S3.
First, install reprepro, which will create the repository file structure from .deb packages. Also install s3cmd to sync a local copy of the repository to s3.
$ sudo apt-get install reprepro s3cmd
Create a directory for the repository and a conf sub-directory.
$ mkdir -p /path/to/my-repo/conf
Create the config file, conf/distributions, describing the repository. Setting Codename, Components, and Architectures are sufficient to get started. If your packages are specific to a Debian distribution (or other Debian-based distro like Ubuntu), you can set Codename to the code name of the distro, e.g. squeeze. It should not be set to stable, testing, or unstable; these can be set in the Suite option. See reprepro(1) for more details.
Codename: example
Components: main
Architectures: i386 amd64
Add a package to the repo using reprepro.
$ reprepro -b /path/to/my-repo includedeb example /path/to/package.deb
Here are the contents of the repo after adding one package:
my-repo/
my-repo/pool
my-repo/pool/main
my-repo/pool/main/m
my-repo/pool/main/m/mypackage
my-repo/pool/main/m/mypackage/mypackage_1.0_all.deb
my-repo/dists
my-repo/dists/example
my-repo/dists/example/main
my-repo/dists/example/main/binary-i386
my-repo/dists/example/main/binary-i386/Packages.gz
my-repo/dists/example/main/binary-i386/Release
my-repo/dists/example/main/binary-i386/Packages
my-repo/dists/example/main/binary-amd64
my-repo/dists/example/main/binary-amd64/Packages.gz
my-repo/dists/example/main/binary-amd64/Release
my-repo/dists/example/main/binary-amd64/Packages
my-repo/dists/example/Release
my-repo/conf
my-repo/conf/distributions
my-repo/db
my-repo/db/packages.db
my-repo/db/release.caches.db
my-repo/db/checksums.db
my-repo/db/version
my-repo/db/references.db
my-repo/db/contents.cache.db
Configure s3cmd with your AWS credentials:
$ s3cmd --configure
Enter new values or accept defaults in brackets with Enter.
Refer to user manual for detailed description of all options.

Access key and Secret key are your identifiers for Amazon S3
Access Key [XXXXXXXXXXXXXXXXXXXX]:
Secret Key [XX+XXXXXX+XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX]:

Encryption password is used to protect your files from reading
by unauthorized persons while in transfer to S3
Encryption password:
Path to GPG program [/usr/bin/gpg]:

When using secure HTTPS protocol all communication with Amazon S3
servers is protected from 3rd party eavesdropping. This method is
slower than plain HTTP and can't be used if you're behind a proxy
Use HTTPS protocol [Yes]:
You can leave the encryption password blank; you don't want to encrypt the files in the repository.
Create a bucket in S3 and sync your repository to the bucket. The bucket name must be globally unique. You will get an error if you use the name of an existing S3 bucket.
$ s3cmd mb s3://my-repo/
$ s3cmd --verbose --acl-public --delete-removed sync /path/to/my-repo/ s3://my-repo/
Note the trailing slash after /path/to/my-repo. Without it, the my-repo folder itself will be created in your bucket.
Add the repository to your sources.list:
deb http://my-repo.s3.amazonaws.com example main
Now you can install packages from the repository:
$ sudo apt-get update
$ apt-cache policy mypackage
mypackage:
  Installed: (none)
  Candidate: 1.0
  Version table:
     1.0 0
        500 http://my-repo.s3.amazonaws.com/ example/main amd64 Packages
The state is that great fiction by which everyone tries to live at the expense of everyone else. - Frederic Bastiat