Stephen Klein, a biz dev guy from Instacart, emailed me to ask why I hadn't tried it yet. Here's my response:
Hi Stephen, I was excited to learn about Instacart offering deliveries from Costco. Unfortunately, your current pricing model makes deciding whether to use Instacart too expensive. It's not that your prices are too high. It's that I don't know if your prices are too high; that is, the cost of information is too high.
When deciding whether to use Instacart, I'm choosing between going to Costco myself and having Instacart go to Costco for me. In most cases, I would prefer to have Instacart go for me. (Sometimes, I might want to browse.) But to make the decision, I need to know how much it would cost me to use Instacart.
The current pricing model is not transparent enough to determine this cost. Your prices are not the same as those offered at Costco, so I don't know what the total cost of using Instacart is, and rather than spend the time trying to figure it out, I simply don't use Instacart.
My recommendation would be to change your pricing model to make explicit the cost of using Instacart. You could charge a fixed percentage above the Costco price, charge a higher delivery fee, or charge a per-item delivery fee, for example; whatever model would be both transparent and profitable.
CenturyLink released Panamax, a new open source tool for building and managing applications composed of multiple docker containers. It's similar to fig in providing a file format for describing the docker images that make up your application and expressing the links between the containers.
The files used by Panamax to describe an application are called templates. Panamax expands upon the model provided by fig by allowing applications to be built from existing templates, i.e. collections of existing images, and by providing a web interface for building templates. Templates can be fetched from, and exported to, any github repo.
You create a new Panamax application by using an existing template or starting with a single docker image from Docker Hub.
Below I provide an example of how to build an application using a Panamax template. I created a template for the Cube event-logging and analysis server. (There's also a copy of the template in the panamax contest repo, but it was built using a MongoDB image which has since been deleted from Docker Hub.)
We'll create a simple Hello World node.js app, which logs each request to the cube server.
First, install Panamax as described in the documentation. The current version of Panamax is distributed as a Vagrant VM running CoreOS. Panamax itself is three docker apps that run in the VM: an API server, a web app that provides the main interface, and cAdvisor, used to monitor the docker containers. Panamax also includes a shell script, panamax, which is used to start up the VM.
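On Linux, getting it running looked roughly like this at the time of writing; the init subcommand is what the docs describe for first-time setup, but treat the exact invocation as an assumption and follow the current installation documentation:

$ panamax init

This downloads the CoreOS VM image, boots it, and starts the Panamax services inside it.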
Panamax is distributed with two github repos containing templates. Navigate to Manage | Sources to add my cwarden/panamax-templates repo as a source of templates.
Now, on the search screen, if you search for "cube", you'll find my template.
(The first one is from the contest repo.)
Below the templates, you'll also find individual docker images. If you wanted to build up your application starting from a single image, you could start with one of these.
Click on More Details, and you'll see that my template is made up of three docker images: a MongoDB database, the cube collector, which accepts events and stores them in the database, and the cube evaluator, which reads data out of MongoDB and computes aggregate metrics from the events.
The details modal window also shows documentation I wrote up when creating the template.
Click on Run Template. This will create a new application called "cube".
The Documentation link will show the same notes as on the More Details screen. The Port Forwarding section is important. Recall that Panamax and all of the docker containers it manages are running within a VM. If we want to access any of the services provided by these containers from outside the VM, we need to set up port forwarding.
In this case, we're going to add another docker container which sends data to the cube collector, but we'll want to access the cube evaluator from our host machine to make sure the logging is working correctly, so we need to set up port forwarding to the VM for the evaluator:
$ VBoxManage controlvm panamax-vm natpf1 evaluator,tcp,,1081,,1081
(Instructions for sending data to the collector from the host machine are also included in the documentation.)
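For instance, the cube collector accepts batches of JSON events over HTTP. Assuming the template exposes the collector on cube's default port of 1080, you could forward that port as well and post a test event from the host; the event type and payload here are illustrative:

$ VBoxManage controlvm panamax-vm natpf1 collector,tcp,,1080,,1080
$ curl -X POST http://localhost:1080/1.0/event/put \
    -d '[{"type": "request", "time": "2014-08-15T00:00:00Z", "data": {"path": "/"}}]'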
We'll use Valera Tretyak's simple hello world app for our application, making a small change to log each request to the cube collector.
Then we can create a docker image for this app and upload it to the Docker Hub.
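A Dockerfile along these lines would do; the base image and entry point are illustrative, so adjust them to match the actual app:

# illustrative Dockerfile for the hello-cube app
FROM dockerfile/nodejs
ADD . /app
WORKDIR /app
RUN npm install
EXPOSE 8080
CMD ["node", "app.js"]

With the Dockerfile in place, build and push the image: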
$ docker build -t cwarden/hello-cube .
$ docker push cwarden/hello-cube
Now, let's actually use the template as a template by adding our new image as another container. From Manage | Manage Applications | cube, let's add the cwarden/hello-cube image to the application.
Next, we need to link the new container to the cube collector so it knows the hostname and port to use when sending events. We also need to expose the container's port to the VM. Click on the hourglass next to the hello-cube app.
And we need to expose the VM's port to the host machine.
$ VBoxManage controlvm panamax-vm natpf1 hello,tcp,,8080,,8080
When we access the app on localhost:8080, it will send an event to the cube collector. We can use the cube evaluator to monitor the number of events being generated.
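To verify, generate a few requests and then ask the evaluator to aggregate them. The query below uses cube's standard metric endpoint and assumes the app logs events of type "request"; the date range and step size (36e5 is one-hour buckets) are illustrative:

$ for i in 1 2 3; do curl -s http://localhost:8080/ > /dev/null; done
$ curl 'http://localhost:1081/1.0/metric?expression=sum(request)&start=2014-08-15&stop=2014-08-16&step=36e5'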
Now that we've finished building our app, we can export the app as a new template to a github repo using the Save as Template button.
I recently bought a RAID enclosure for my backup drives. One of the nice features is that it powers down the drives and fans when there is no activity for five minutes.
I use burp for backing up my laptops. Burp has a timed backup mode, in which a client connects to the server and starts a backup if it's been too long since its last backup. This allows, for example, my wife to suspend her laptop whenever she wants. While her laptop is running, burp checks with the server every 20 minutes to see whether it needs to be backed up. If she suspends it in the middle of a backup, the backup resumes during the next check.
I've configured burp to only allow backups outside of my normal working hours so I don't need to hear the drives grinding away while I'm at my desk. (Pulling myself away from my desk at night is the next challenge.)
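On the server, this policy is expressed through the timer script's arguments. My setup looks something like the following; the interval and timeband syntax follow burp's sample config, but treat the specific values as illustrative:

timer_script = /usr/share/burp/scripts/timer_script
# require at least 20 hours since the last backup
timer_arg = 20h
# and only start backups between 8pm and 11pm
timer_arg = Mon,Tue,Wed,Thu,Fri,Sat,Sun,20,23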
The only problem with this configuration is that when a client connects, the backup directory is accessed to figure out whether to start a backup, causing the drives in my RAID enclosure to spin up. The solution was to not have burp touch the backup directory until it's starting a backup.
When a client connects, burp creates a lockfile. By default, this lockfile is created in the backup directory, but there is an option to use a different location, client_lockdir. Set this in the server's configuration file to a directory that isn't on the backup drives.
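For example, something like this in the server's config keeps lockfiles off the backup drives (paths are illustrative):

# backup storage lives on the RAID enclosure
directory = /mnt/raid/burp
# lockfiles go on a disk that never spins down
client_lockdir = /var/lock/burp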
The next change required is to have the timer script, which checks whether the server should start a backup when the client asks, use a separate directory to check the state of existing backups. First, enough data from the backup directory needs to be copied to a separate directory each time a backup is completed. To do this, I use the server_script_post option, which runs a script each time a backup completes. The script copies the current symlinks and timestamp files from the backup directory to a separate state directory. I updated the timer script to check this state directory. Review all of the changes at
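A minimal sketch of such a post-backup script, assuming the client name is passed as the first argument (which server_script_post_arg can arrange) and using illustrative paths:

#!/bin/sh
# server_script_post: copy just enough metadata for the timer script to
# decide whether a backup is due, without touching the backup drives
client="$1"
src="/mnt/raid/burp/$client"       # real backup directory
dst="/var/lib/burp-state/$client"  # state directory on an always-on disk
mkdir -p "$dst"
cp -P "$src/current" "$dst/current"            # preserve the "current" symlink as-is
cp "$src/current/timestamp" "$dst/timestamp"   # the latest backup's timestamp file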
As described in the previous post about our development process at GreatVines, we use LiquidPlanner for project management. Estimating how long tasks will take is important in most software development projects (and generally any project involving more than one person).
LiquidPlanner provides a great way of estimating tasks using an 80% confidence interval, but estimating tasks like this is a new concept for most developers. Below is a transcript of a conversation with a developer who recently joined our project. We were able to get from "I have no idea how long it will take" to a decent estimate in about half an hour, but it should only take a minute or two for the next task.
2013-09-05 11:16:52: <Christian G. Warden> have you guys reviewed all of the tasks for the september release?
2013-09-05 11:20:44: <Ben H> yes, but many of the tickets refer to parts of the code I have no experience with so I have no idea how much work they will be.
2013-09-05 11:23:11: <Christian G. Warden> you can put wide estimates on tasks like that. can you give me an example, and i'll walk you through it?
2013-09-05 11:25:15: <Ben H> 10631525- Make Modal Form Fields scroll-able: I have no idea where contacts are viewed in the application, I haven't had any reason to look at their controller/model
2013-09-05 11:25:52: <Ben H> so any time estimate I would come up with would be a sheer guess
2013-09-05 11:26:38: <Christian G. Warden> ok, no problem. what's the likelihood it will take you more than 2 weeks to complete the task given what you currently know about the app?
2013-09-05 11:27:21: <Ben H> darn near 0% I would figure
2013-09-05 11:27:26: <Christian G. Warden> phew :)
2013-09-05 11:27:31: <Christian G. Warden> how about 1 week?
2013-09-05 11:28:29: <Ben H> that's still very high, so 2%?
2013-09-05 11:28:38: <Christian G. Warden> how about 3 days?
2013-09-05 11:28:40: <Ben H> given that it looks like pure GUI work
2013-09-05 11:30:03: <Jim Thompson> yeah Ben the problem is that one customer has put so many fields in the FieldSet that you can't see the entire modal, the Save and Cancel buttons scroll below the page
2013-09-05 11:30:33: <Ben H> I guess 3-4 days (if it is pure GUI work then 1-2 should be more then enough, unless there's something unseemly complex)
2013-09-05 11:30:45: <Jim Thompson> I am going to move this task beneath the Performance items Christian
2013-09-05 11:30:47: <Jim Thompson> (and Ben)
2013-09-05 11:32:20: <Christian G. Warden> would you say there's a 10% probability that it will take more than 4 days?
2013-09-05 11:33:34: <Jim Thompson> at 12-32 hours it might have to drop off the list, or at least to the bottom
2013-09-05 11:34:40: <Christian G. Warden> jim, we'll go through the tasks and reprioritize after the estimates are updated.
2013-09-05 11:34:57: <Jim Thompson> ok, I also moved "New Task" and "New Event" to October, which is when we promised it
2013-09-05 11:35:39: <Christian G. Warden> you're interrupting my estimation lesson :)
2013-09-05 11:36:02: <Ben H> (yeah, he's been high balling for tutorial purposes)
2013-09-05 11:36:11: <Jim Thompson> ok im out
2013-09-05 11:37:50: <Christian G. Warden> no, i want to come up with realistic estimates. if they're wide right now because of uncertainty as to what it will take to complete them, that's fine. once we have estimates, we can decide whether to invest time to come up with tighter estimates.
2013-09-05 11:38:44: <Christian G. Warden> so, do you think 4 days as a high estimate is accurate?
2013-09-05 11:39:29: <Ben H> for this example I put 12-24, it's really wide and most likely most of that will be learning about how that GUI element is working and making the tests
2013-09-05 11:39:57: <Ben H> the range is more to show that it could go horribly wrong if there is something there I have no idea of
2013-09-05 11:40:24: <Ben H> but 12 still seems really high
2013-09-05 11:40:30: <Christian G. Warden> what's the likelihood that you will complete the task in 2 hours?
2013-09-05 11:41:14: <Ben H> the likihood I'll complete it in 6 hours I would meet the 10% threshold
2013-09-05 11:41:26: <Ben H> *likelihood
2013-09-05 11:41:38: <Christian G. Warden> ok, so, let's make it 6-24
2013-09-05 11:45:29: <Christian G. Warden> and you can check yourself by imagining we're going to a casino. you can choose from two wagers: 1) roulette - there are 10 numbers on the roulette wheel. if 1 or 10 comes up, you lose. if 2 through 8 comes up, you win $100. 2) the bookmaker - if you complete the task in less than 6 hours, you lose. if you complete the task in more than 24 hours, you lose. if you complete the task between 6 and 24 hours, you win $100. which wager do you want to take?
2013-09-05 11:47:25: <Ben H> bookmaker, it also means I can spend some speculative time trying different approaches to find one that solves the problem better
2013-09-05 11:47:34: <Ben H> if things go well
2013-09-05 11:49:16: <Christian G. Warden> so it's not really an 80% confidence interval. it might be more like a 90% confidence interval. you should make the range narrower.
2013-09-05 11:49:52: <Ben H> how then do you show the risk if (I doubt in this case) that 10% could be really bad
2013-09-05 11:52:19: <Christian G. Warden> it's ok if there's an outlier occasionally. we want to get an accurate estimate across all of the tasks.
2013-09-05 11:56:09: <Christian G. Warden> you should narrow the range until you're ambivalent between taking the two wagers. and widen the range when you'd rather play roulette.
2013-09-05 11:58:02: <Christian G. Warden> but we've already gone from "i have no idea how long it will take" to "i'm 90% confident, it will take between 6 and 24 hours".
2013-09-05 11:59:40: <Ben H> I'm just worried it's misplaced confidence
2013-09-05 12:00:03: <Ben H> either way, given the range, I am (90%) confident that I can get it done within the range
2013-09-05 12:07:07: <Christian G. Warden> nobody dies if you're wrong. you're expected to be wrong 20% of the time (once you narrow it to an 80% confidence interval). if we decide we need to know with more certainty when something will be done, we might ask you to spend an hour to do a little research so you can come up with a tighter estimate, after which you would adjust the high and/or low estimate. for now, though, estimate the tasks based on what you currently know.
2013-09-05 12:07:50: <Ben H> ok
The roulette idea comes from Douglas Hubbard's How to Measure Anything. The point of the comparison is that a bet that pays off when eight of ten equally likely numbers come up wins exactly 80% of the time, so a well-calibrated 80% confidence interval should feel exactly as attractive as the roulette wager.
In response to a customer asking for roadmap details, and then being surprised not to find the project artifacts they expected based on how they manage internal projects, I've written up a description of the development methodology at GreatVines.
The GreatVines development methodology follows from a few high-level axioms, draws from multiple software development and project management methodologies, and makes use of modern tools for collaboration. The axioms:

1. Requirements change.
2. Priorities change.
3. Delivering quality software is important.
4. Delivering updates is cheap.
The first two items, changing requirements and priorities, are related, and mean that we don't expect to be able to predict the future more than a few months out. We generally have a good sense of which new features customers want to make use of next month, but not those that will be most useful next year. Additionally, as we deliver new features to customers, requirements for enhancements to these features become apparent as we get feedback from them.
Item 3, delivering quality software, stands on its own. These first three items reflect what we think are the most important principles from the Agile Manifesto.
Item 4, that the delivery of updates is cheap, follows from the fact that we are a software-as-a-service company. The days of shrink-wrapped software are gone, and updates to both our web-based and mobile applications are delivered without end-user intervention. Because delivering updates is cheap and requirements change, we are able to deliver multiple iterations of new features, and adjust requirements as we discover exactly how customers are using our software and how they would like to use it.
Therefore, we aim to deliver useful, working software frequently under the assumptions that requirements and priorities change regularly, and that delivering updates to our software does not impose a large cost on either GreatVines or our customers.
We generally organize releases into three- to four-week cycles, similar to Scrum sprints, but we also try to keep the next two to three releases planned, so we're looking two to three months down the road. Like Scrum and Kanban, we start from a backlog of features that we would like to implement. We bring together team members from development, support, implementation, and sales when prioritizing the tasks for future releases.
Each release typically has a combination of small tasks, which may already describe a combination of technical requirements and planned implementation details, and larger tasks, the requirements for which need to be further elaborated.
In preparing the bigger tasks for an upcoming release, we work to ensure the requirements are clearly defined so development can proceed. Features that require a new user interface or significant changes to an existing interface are mocked up. There are often a few iterations of mockups, as questions are raised and addressed, and the requirements and mockups updated.
Depending on the feature, we occasionally share the mockups with existing or potential customers, and solicit their input before starting development, but we generally prefer to implement a working interface, then adapt it as we get feedback from real use.
As the requirements are fleshed out, we break them down into development tasks, referencing the mockups and relevant requirements. Each task is estimated by the developer that will be responsible for writing the code.
The tasks within a release are prioritized so if there is any slippage, the highest priority tasks get completed first. The assignment of prioritized tasks to developers ensures that each developer knows which task they should be working on at any time, and helps collaboration by letting all members of the team know who is working on what and how the priorities are defined. As in kanban, it also limits the amount of work-in-progress.
We have found that testable code is better code. If it's hard to test, it's probably poorly designed, and needs to be decomposed.
During peer review, both the tests and implementation code are reviewed. We look for areas in which the design of the software can be improved as well as ensuring that the code follows our internal coding conventions to ease future maintenance.
With our mobile app, we have started writing automated functional tests as well as unit tests. The goal is to have the tests fully specify the functionality of the application.
When a new version has been released, our release notes are updated.
There are a couple important tools we use to support our development process. We use LiquidPlanner as our project management tool. Each release is organized as a package. We have our web-based application and mobile applications organized as separate projects, broken down into broad features using folders within the projects. We pull tasks from both projects into a release package.
Mockups for new interfaces that will be built as part of the release are done at the wireframe level, as one would do in Balsamiq Mockups. (Ours are actually often done as Google Drawings: Jim typically does them, and he has somehow become proficient at creating them quickly there.)
The use of ranged estimates in LiquidPlanner allows us to organize releases with a fair amount of confidence in our ability to hit delivery dates. Estimating software development tasks is a continuing challenge, but we find the confidence interval-based approach superior to the point estimates often used in project management tools and the story-point estimates used in Scrum. In the backlog, we often put wide estimates on broadly defined tasks when the value of the information that more granular requirements and estimates would provide is not yet clear. When organizing a release, we break big tasks down into smaller ones, generally around one to six hours each. Smaller tasks are easier to estimate accurately, and estimates improve with practice.
LiquidPlanner also serves as a collaboration tool, minimizing project management overhead. As discussed above, the priority of tasks is unambiguous. We keep estimates of remaining work on tasks up to date, and LiquidPlanner automatically updates the schedule. Discussion about task details happens within LiquidPlanner so if there's a change to or clarification of requirements, there's one place to look. (When real-time discussion is required, we generally use Google Hangouts, then record the result of the conversation in LiquidPlanner.) Using LiquidPlanner mostly eliminates the need for "what's the status?" discussions.
Within LiquidPlanner, we also track the status of code reviews and whether the code for each task has been merged.
We use git and GitHub for version control. Our use of GitHub follows how most open source projects are organized. We have a greatvines organization, which contains the primary repo, from which we package our software. Each developer has a fork of the repo. Developers create feature branches for individual LiquidPlanner tasks and open a pull request against the greatvines repo when the code is ready to be reviewed. The pull request is noted in LiquidPlanner and the task is moved to a ready-to-review package, which puts the task on hold (not scheduled for further development).
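Concretely, a task's trip through git looks something like this (the branch name and task number are illustrative):

$ git checkout -b lp-10631525-modal-scrolling
$ git commit -am "Make modal form fields scrollable"
$ git push origin lp-10631525-modal-scrolling

The developer then opens a pull request from that branch on their fork against the greatvines repo.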
Peer review is done within GitHub, using inline comments on the open pull request. If the task needs further work, the task is moved out of the ready-to-review folder in LiquidPlanner so it's scheduled for additional work. When the pull request is merged, the task is marked done.
There are a couple areas in which we are planning improvements to our development process around testing. We are using jasmine and jasmine-jquery to do some functional testing of our mobile app, but it's not exhaustive, and we don't have anything similar in place for our Visualforce interfaces. Therefore, we augment our automated testing with manual testing. We would like to add more robust automated functional/acceptance testing, perhaps using a browser automation tool like selenium or casperjs.
As we further automate testing, we also plan on introducing continuous integration. Travis CI looks promising here, with tight integration with GitHub, and extensive use among open source projects already.
There are a couple areas in which we might experiment with changes in process in the future. One is in choosing which features should go into each release. We currently use a consensus building approach in which we informally consider the value of possible enhancements to our customers. I'd like to investigate whether there might be gains to be had from the use of more formal Decision Analysis practices.
We might also experiment with scrum-style user stories for documenting requirements. For the most part, our lack of a formal requirements documentation structure has not been a problem, and we are able to turn requirements into technical designs easily. But in cases where requirements are very broadly defined, the "As a user, I want" structure may prove valuable. Using a standard structure may also ease on-boarding of new employees and coordination with any contract or outsource developers with which we work.
Every time I tried to start an Android emulator, the window would appear briefly, then disappear, and I'd get the error, "emulator window was out of view and was recentered". The solution is to edit emulator-user.ini, found by default in $HOME/.android/avd/<image name> on Linux. Set window.x and window.y to 0.
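After the edit, the relevant lines in emulator-user.ini should read:

window.x = 0
window.y = 0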
The solution is on Stack Overflow, but Google couldn't find it.
Update 10/05/2012: I've found this solution to work inconsistently. A better solution under xmonad is to set up a ManageHook to match the emulator window and move it to the floating layer:
-- float the emulator window (WM_CLASS "emulator-arm") instead of tiling it
myManageHook = composeAll [ className =? "emulator-arm" --> doFloat ]
The solution I came up with uses VirtualBox, a Windows 7 image, Google Drive for Windows, a tool from Microsoft called SyncToy, and the Windows task scheduler.
Based on my previous Force.com app for managing Salesforce Cases on a Kanban board, I've built Kanban for Salesforce, a project management tool on the Force.com platform. It uses a custom object for cards, so it's not tied to an existing Salesforce native object. Cards can be organized into sprints.
I've updated my kanban tool for Salesforce Cases to allow filtering Cases by multiple criteria. In addition to Owner, you can now filter by the age of the Case, the Priority, and the Type.
I've started building a tool to manage Salesforce Cases on a kanban board. Each case is a card, and each Status picklist value is a list on which the cases can be organized. You can drag cases around to prioritize and track progression.
The lists are organized left-to-right in the same order that the picklist values are arranged top-to-bottom. There's a custom setting to treat the first picklist value, typically "New", as your backlog. The backlog is hidden by default, and can be shown by clicking the arrow on the top left.
If you use Salesforce Cases, please try it out. I would love to get some feedback and suggestions for features. It's still early in development, but I was able to get off to a good start on the front-end by adapting code from huboard, a great tool for managing github tickets.
This is also my first time trying to build my own commercial application on the force.com platform. Thoughts from anyone who has sold applications on the AppExchange would also be appreciated.
Update 2012-05-27: Added better handling of many open Cases.
Update 2012-06-24: Now supports filtering by multiple criteria.
Update 2012-08-11: I've developed a new project management tool not tied to native Cases, Kanban for Salesforce.
The state is that great fiction by which everyone tries to live at the expense of everyone else. - Frédéric Bastiat