The fractal box model of companies

Companies are boxes. Customer feedback and market data come in through one side of the box; employees come in through another side; product goes out a third side; and (hopefully) profits redistributed to shareholders go out the fourth side.

The company’s job is to make sure that it uses its two main inputs — its market understanding and its employees’ skills — to most efficiently produce its major outputs — product and profit.

You might think of the company box as being “thin” — opposing sides are close together — if the company’s product is easy to produce. This can be correlated with, but is not the same as, the technical complexity of the product.

The company box is large if the company’s product is hard to replicate — either because it’s highly complex (Tesla), or because it has some other attribute, like a large userbase, that is hard for a competitor to gain quickly (Instagram).

The model is fractal. Departments and teams are boxes within the larger company box. They have a similar structure to the company box, including market intelligence about their customers (internal or external), a set of employees to produce their products, and their (hopefully positive) contribution to the bottom line.

Different levels of managers operate on boxes of different sizes. The CEO is responsible for the entire company box; a VP might be responsible for a large box that holds several smaller boxes inside it.

The key thing for each manager is to realize that their job is really to most efficiently produce their box’s outputs from its inputs. Managers often define their role as simply the “human resources” part of management: performance reviews, 1:1s, skills development, etc. It’s easy to forget that if your box isn’t producing anything useful, you’re not doing your job, even if you’ve got the employee development part down pat.

“Managing” a team really just means “being responsible for delivery” of that team, and while the human aspects of management are important, they are in service of the goal of creating the team’s product as efficiently as possible.


Creating Effective Presentations

I’ve done a few presentations over the years, ranging in length from three minutes (a Demo Day pitch) to an hour (a programmer training session). I’ve found some techniques that seem to work well for me when creating presentations that capture and keep people’s attention:

1. Be clear about what the end goal of the presentation is. At the end of the presentation, what should the average listener take away? You could be trying to teach the audience a useful practical technique, or you could want them to invest in your company, or you could simply be trying to impress your listeners with how smart you are. Each of these is a valid goal, but keeping the overall goal in mind helps in creating a more focused presentation.

2. Write down the key points you want to cover in a text editor before you open your presentation software. Jotting down the skeleton of a presentation is much harder when you have PowerPoint already open. This is partly because the mechanics of writing notes are much more cumbersome — click “New Slide”, click the Text tool, click somewhere on the slide to start typing, etc. — but also because the abundance of presentation options available distracts from your end goal.

3. For each slide, represent the information as visually as possible. For most presentations, the fewer words on each slide, the better. Many textual descriptions can be replaced with flowcharts or diagrams. Some opinions go so far as to recommend never having more than three words per slide, although I consider that a bit extreme.

Note that this is true for presentations that are primarily delivered verbally, not for presentations that are meant to be emailed around. An example of the latter is a detailed slide deck prepared for follow-on meetings with investors. For such presentations, the number of readers may be far higher than the number of people present at your talk, so it makes sense to optimize for the common case with wordier slides.

4. When practicing, think about the key points you want to cover on each slide. You don’t need to memorize the specific words you’ll be using, but make sure you have (three or fewer) points you want to touch on for each slide. Then you can practice the segue from one point to the next, and have a clear indicator for when you can move on to the next slide.

5. Incorporate humor every few slides. Putting a joke on every slide is overkill in most business situations, but periodically including subject-appropriate cartoons or art can lighten the presentation and re-focus people’s attention by drawing them momentarily out of the topic.

6. End with a summary of key takeaways. This might be obvious, but “tell ’em what you told ’em”. Don’t wait for people to make the connections between the information presented across different slides; explicitly make the connections and tell them what you want them to remember after the talk.

I also find, in general, that the better prepared I am for the talk, the more confident and comfortable I appear right at the start. While most presenters can get into the rhythm of a talk within a few seconds, it takes (for me at least) some effort to appear that way right from the start. I can generally get away with memorizing the first two or three sentences of my presentation — enough to cover the first 15 or so seconds — so that I’m not searching for words at the beginning of the talk. Once that initial ice is broken, it’s a lot easier to think on my feet and use language that comes naturally.

Straddling product and technology in large companies

Many roles at startups are for generalists. Most (good) startups want people who are interested in, and have ideas about, the product as well as the technology. This is partly because the product side of startups is generally immature — a consequence of not having reached product-market fit — and also because smaller teams make it easier for people to voice opinions outside the area for which they were hired.

As companies grow, this fluid movement between product design and implementation slows and becomes more formalized. Larger companies break down product development into product managers, who have responsibility for product spec, and engineers, who control how the product is built. Good product managers don’t try to influence technical decisions, and many engineers are uncomfortable providing opinions on user-facing product features.

For people who move from startups to big companies, it’s hard to find a role that provides the same mix of product design and engineering work that you might do as a founder or early employee at a startup. The roles have simply become too specialized to allow for hiring a single person with impact on both areas.

While some of this specialization is inevitable to allow the company to grow, I think the best teams have a strong product orientation among all team members, whether their job involves programming, QA, or product management. The more the team leadership can do to extract product opinions from everyone on the team, the more invested people will be in the final outcome.

I’ve found the following techniques work well to get even introverted team members to voice opinions about the product:

1. Regular free-form brainstorming sessions about the product. It helps if these are focused on a few product areas, identified in advance, that you are soliciting opinions on.

2. Team-based playthroughs or walkthroughs of the product. Many team members have a hard time seeing the product in its entirety, and are focused on their piece of the whole. It’s useful to get the team together on a regular basis to try out the product. For things like multiplayer games, you can have the team simply play the game together for an hour. For less social applications, one person can demo the product to the rest of the team.

3. Feedback sessions about product roadmap, hopefully before it is committed to outside the team. Providing a coherent vision brings the team together and gives people a shared goal that they can stand behind. It’s even better if the roadmap incorporates suggestions from across the team, for greater shared ownership of the product.

A code migration adventure: moving a 25 GB repo from SVN to GitHub

We’ve had our code hosted in an internal SVN repository for a long time, running on the stereotypical machine under someone’s desk. In a small team with limited branching, SVN has worked very well for us. If you serve SVN over HTTP with port forwarding, people can connect to the repository from within or outside the company network with little local reconfiguration.

Despite the advantages of this model, I recently moved our local SVN repository to GitHub. The main reasons to do this were twofold:

1. Better support for distributed development: GitHub is better at serving requests from across the world than a single machine

2. Better backup (hopefully): GitHub is less likely to lose our data than we are

At $7 per month, moving seemed like an obvious choice, but the process was more complicated than I expected. This was primarily because our repository was large (25 GB, with commits going back 3 years), and I wanted to maintain revision history.

Here are the steps I followed. If you have a similar large SVN repository, these steps should provide a useful guide on moving it to GitHub.

Export, compress and move the SVN repo

Export the SVN repository to a dump file using svnadmin dump:

svnadmin dump /path/to/repo > myRepo.dump

This exports *all revisions* of the repository, which can result in a very large file: 25 GB for our repo. I was shutting down our SVN server machine anyway, so I compressed the file and transferred it to a new machine:

tar cvzf myRepo.tar.gz myRepo.dump

My trusty Western Digital MyBook did a great job of moving the tar file to other computers.

Extract to a working SVN repository (on a Mac)

All the decompression programs I tried on a Windows machine (7zip, Winzip, PowerArchiver) were able to uncompress the gzip file, but failed with an error when trying to untar the resulting tar file. This is probably due to the size of the file.

On a Mac, I was able to simply use the built-in Archive Utility program, which really just means double-clicking the file.

When this is done you should have the original myRepo.dump file. Extract this into a working SVN repository on the Mac:

svnadmin load /path/to/repository < myRepo.dump

Start the SVN daemon:

sudo svnserve --daemon --root /path/to/repo/parent

The repository is now accessible via the svn:// protocol.

Use svn2git to convert into a Git repository

svn2git is a tool for converting an SVN repo to a Git repo while maintaining branches and tags.

Install svn2git:

$ sudo gem install svn2git
Successfully installed svn2git-2.2.2
1 gem installed
Installing ri documentation for svn2git-2.2.2...
Installing RDoc documentation for svn2git-2.2.2...

Create an authors.txt file that lists the name and email of each person who committed to your repo. I used the following awk script to generate it:

$ svn log -q | awk -F '|' '/^r/ {sub("^ ", "", $2); \
sub(" $", "", $2); print $2" = "$2" <"$2">"}' \
| sort -u > authors.txt
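
As generated, each line of authors.txt just maps an SVN username to itself as a placeholder. It’s worth editing each entry into a real name and email before converting, since svn2git stamps these onto every Git commit it creates. A hypothetical edited entry (jdoe is a made-up username, not from my repo) looks like:

```
jdoe = John Doe <jdoe@example.com>
```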

Make a directory in which you want the Git repo to live, and move into it:

$ mkdir myGitRepo

$ cd myGitRepo

Then run svn2git and tell it to use this authors.txt file:

svn2git svn://localhost/myRepo --authors authors.txt --verbose

(This will probably take a long time.)

When it’s done, verify that the git repo is set up by doing a git log on one of your files:

$ git log myFile.txt

commit 361a778a6d3aed906d26acc6c7b93d079fa29aeb
Author: ABC <>
Date: Sat Jun 5 07:26:40 2013 +0000

help_movies file is moved to its parent directory for the web-based application


If you get similar output, you’ve successfully converted your SVN repository into a local Git repo.

Push up to GitHub

Log in to your GitHub account and create a new repo through the web interface. Add the new repo as an SSH “remote” to your local repository (substituting your own username and repo name):

git remote add origin git@github.com:<username>/<repo>.git

(GitHub has a handy guide to working with repositories using SSH.)

I used SSH rather than HTTPS, since it seems to be the recommended transport for the initial push of large repositories.

The next step is to try naively pushing the entire repository to the remote server:

git push -u origin master

Because the repository is large, this is likely to fail with the following error:

fatal: pack exceeds maximum allowed size

The only solution I found was to push a smaller range of commits at a time. In Git parlance, HEAD is the name for the current “tip” of your local repository, and “HEAD~n” means “n commits before the current HEAD”. So to push a series of commits to the remote server, you might do something like:

$ git push -u origin HEAD~1000:refs/heads/master

$ git push -u origin HEAD~500:master

$ git push -u origin master

This pushes all revisions from the start up to (tip-1000) to the remote server, then revisions from (tip-999) to (tip-500), and then finally the last 500 revisions.
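
With thousands of commits, working out these ranges by hand gets tedious. Here’s a small POSIX-shell sketch (the emit_pushes helper and its logic are my own, not part of Git or GitHub) that prints the sequence of push commands for a given history length and batch size, oldest batch first:

```shell
# Hypothetical helper: print the `git push` commands needed to upload
# `total` commits in batches of `batch`, oldest first. The first push is
# fully qualified (refs/heads/master) so the remote branch gets created.
emit_pushes() {
  total=$1
  batch=$2
  # Distance from HEAD to the tip of the oldest batch, rounded down
  # to a multiple of the batch size.
  behind=$(( (total - 1) / batch * batch ))
  dest="refs/heads/master"
  while [ "$behind" -gt 0 ]; do
    echo "git push -u origin HEAD~$behind:$dest"
    dest="master"    # later pushes can use the short name
    behind=$(( behind - batch ))
  done
  echo "git push -u origin master"
}

# A 1234-commit history in batches of 500 reproduces the three
# pushes shown above:
emit_pushes 1234 500
```

Piping the output to sh (after a dry-run glance at it) performs the actual pushes.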

Why /refs/heads/master for the first push? If you don’t do that, and simply use git push -u origin HEAD~1000:master, you’ll get the following error:

error: unable to push to unqualified destination: master
The destination refspec neither matches an existing ref on the remote nor
begins with refs/, and we are unable to guess a prefix based on the source ref. 

This happens because HEAD~1000 names a commit, not a branch, so Git can’t guess which ref to create on the remote. Fully qualifying the destination for the first push creates master on the remote, which lets you use the short name for subsequent pushes.

Pull the repo down from GitHub to another machine

The repo is large enough that trying to pull it down with Git defaults on another machine may fail with the following error:

The remote end hung up unexpectedly

If you see this, increase the value of a setting called http.postBuffer:

git config --global http.postBuffer 524288000

This raises the buffer Git uses when transferring data over HTTP to 500 MB (the value is in bytes).

Congratulations! You should now have a working GitHub repository with all your revision history intact, and with local mirrors on a couple of machines. Another data loss disaster averted!

High performance teams

I had an interesting discussion with someone recently about what “high performance” means in the context of development teams. Specifically, I mentioned to this person that I thought I had been able to build a high performance team at my startup, to which he immediately countered: “So what do you mean by high performance?”

I had used the phrase somewhat casually, as though it was obvious what it meant, but I realize that the term can mean very different things to different people. Everybody’s trying to build (or be part of) a high performance team, but how do you get to one?

In my experience, a high performance team has the following characteristics:

1. Lightweight processes are enough to maintain progress; a light agile process may be all that’s needed.

2. Everyone has a rough idea of what everyone else is working on.

3. Made up largely of generalists. Specialists may be necessary for some functions, but most engineers are comfortable working on the front end or back end, and artists can be given work as varied as UI and character animation.

4. People are curious and eager to take on new challenges; they have the “get stuff done / JFDI” trait.

When hiring, I generally found that what I call “context-free programming questions” were effective only as a base-level filter. These are questions that test the interviewee’s ability to write code on a whiteboard, often after thinking of an algorithmic solution to an abstract problem: “Write a function to print a matrix in spiral order”, “Write an iterator which implements the next operator for a binary search tree” etc.

Programming problems can serve to show that the candidate has a base level of programming ability. I’d generally ask FizzBuzz-type problems, which are even simpler than the ones above — they helped weed out people who were out of touch with programming in general. For artists, I’d ask them to sketch a background on a piece of paper in 30 minutes; this assumes that drawing skills are highly correlated with digital art skills (which has generally been true in my experience).

Questions more complex than this run into issues of people getting nervous and freezing up, or simply being unable to think of the right solution to the problem in a limited amount of time. Either way, there isn’t much point in trying to infer a person’s long term fit for a job from such a small set of observational data.

The best predictor of on-the-job productivity that I’ve found for programmers is to ask them in detail about their past projects, preferably ones they did as side projects.

I interviewed mostly people at or close to the start of their careers, where side projects are an indicator of whether they are curious enough to do things that are not strictly necessary to get their college degree.

Questions like the following are very useful:

1. Describe one of your side projects

2. How big was your team? What did you do, specifically?

3. Why did you pick this problem?

4. Why did you pick this set of technologies to implement this project?

5. What was the most difficult technical challenge in this project? How did you solve it?

6. Describe the most complex bug you had to fix. How did you debug it, and what did you do to fix it?

New questions naturally emerge from the candidate’s answers to these questions, and are a good way to further drill down into their experience.

If a candidate answers “I don’t have any side projects” to question #1, that’s a strong indicator right there (although it’s typically not sufficient to simply stop the interview there).

I found over time that refining these questions, and keeping answers from previously successful hires as a benchmark, helped me get better at deciding whom to hire.

Startup locations in the Bay Area

I’ve had occasion to speak with a number of startups in the Bay Area over the last few months. One trend I’ve seen is how many of them are located in San Francisco, in SOMA as well as other locations in the vicinity of Market Street. Two BART stops seem to cover a large number of these SF-based startups: Embarcadero and Montgomery.

It wasn’t always this way. When I first moved to the Bay Area, most tech companies — both established ones and startups — were on the peninsula. My first job was at a startup in Redwood City. SF was a city you drove through on the way to Marin County, and none of my friends from college worked in the city.

Things began to change around the end of the 2000s. I’m not sure what exactly it was — perhaps SF got really cheap or the peninsula got too expensive — but a lot more startups seemed to be popping up in SF. I met with an acquaintance at a game company near AT&T Park in 2010 and noticed how many tech companies I passed on the way to his office from the Caltrain station. The trend seems to have accelerated since.

It seems that relatively few new startups are being founded on the peninsula, and of those that are, many move to SF soon after their first serious round of funding.

I’m interested to see how this trend plays out. Being in a city definitely has its advantages in terms of attracting a certain type of employee (generally younger and single), but it may have adverse effects on hiring more experienced employees with families. It may also limit the radius within which companies can hire — unlike mid-peninsula locations like Palo Alto, SF is hard to reach from many places in the Bay Area (and much harder to find parking in!).

Be that as it may, I now have several friends who commute up from the peninsula into the city every day.

The Last Mile Problem

One of the things you hear most often about Indian infrastructure, especially telecom, is the “last mile problem”. While the telecom backbones within the country are fairly robust, actually getting physical connection wires into a customer’s premises is problematic and failure-prone.

I’ve had a good opportunity to observe the last mile problem play out fully over the last few months with the Internet and phone connection at my office.

The connection was first set up in June. Since the building is old, it didn’t have a connection to the provider’s (Airtel’s) distribution network, so the first installation technician they sent did whatever it took to get the connection going. This meant stringing a wire from the nearest telephone pole, in the air across the street, taping it around the building to the window nearest my office, and hooking up the cable modem inside to that.

I felt this state of affairs was unstable and could be brought down by rain, or something even more dangerous like a gentle breeze. After repeated calls to the customer service number, they sent out another technician, who now encased the wiring around the building in a PVC pipe to protect it better. We were still getting our Internet connection through a wire thrust through our window.

Yet more phone calls yielded one more technician, who went a bit further: he connected their wires to the building’s internal phone wiring (it did have wiring, thankfully), so that we could plug our cable modem into the phone outlet in the wall.

All this time, the wire suspended 20 feet in the air from the telephone pole was still our Internet lifeline. The ISP was reluctant to change this, since it meant laying an underground cable into the building. After much follow-up, they finally agreed. A (small) army of labourers arrived one day and installed a “distribution box”, or DB, on the building premises. The location of this box caused some controversy among the building residents due to NIMBY-related concerns, but some negotiation took care of this. The DB was then connected through an underground pipe to the nearest point – luckily, just outside the building – that had a connection to the ISP’s backbone in Chennai, and the PVC-piped wiring attached to the building (which was itself finally connected into the building’s internal wiring) was plugged in on the other side.

We now have:
1. A cable modem in our office, connected into a wall socket, that provides us with beautiful data bits;
2. A PVC-encased copper wire circling the building;
3. Not an exposed or hanging wire in sight.

Could life get any better?