Venn: Candid discussion on building a design-development agency

It's probably not a race condition

I came across an interesting little problem a month or two ago. I was working on a crypto-currency project that needed to transmit data in the form of JSON. One downside of using JSON, however, is that binary information needs to be character encoded.

Most crypto-currencies use Base58 encoding, which is like Base64 encoding but without the hard-to-distinguish characters: 0, O, I, and l. (Fun aside: Base58 is the only common encoding where the character “1” does not represent 1; it represents 0, because it is the first character in the alphabet.) It is useful because it stops humans from misreading wallet addresses or transaction IDs.
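To make the aside concrete, here is the Bitcoin-style Base58 alphabet (the variant most crypto-currencies use); note that “1” sits at index 0:

```ruby
# Bitcoin-style Base58 alphabet: Base64's character set minus 0, O, I, l
# (and the +, / symbols).
ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

puts ALPHABET.length      # => 58
puts ALPHABET.index("1")  # => 0  ("1" encodes the value zero)
```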

Rather than confuse people by including both Base64 and Base58 encoding, I decided to just stick with Base58 encoding for everything, even the parts that humans shouldn’t normally need to read, like cryptographic signatures and encrypted messages.

I was writing the project in Ruby, so to convert the binary chunk into a Base58 encoding I’d written code that basically read as: take the binary blob, represent it as Base16 characters, append all those characters together, convert them to an integer, then convert that integer into a Base58-encoded string.
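A minimal sketch of that pipeline (my reconstruction, not the original code; `base58_encode` is a hypothetical name):

```ruby
ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

# Binary blob -> Base16 string -> integer -> Base58 string.
def base58_encode(blob)
  int = blob.unpack("H*").first.to_i(16)  # hex-pack, then to integer
  return ALPHABET[0] if int.zero?
  out = ""
  while int > 0
    int, rem = int.divmod(58)
    out.prepend(ALPHABET[rem])
  end
  out
end

puts base58_encode("a")  # => "2g"
```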

To decode, I’d reverse the process. I should have been able to go straight from binary to Base58, but the library I was using required integer input.
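The reverse direction, again as a sketch under the same assumptions (note that this round trip silently drops leading zero bytes, which is where the trouble described below comes from):

```ruby
ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

# Base58 string -> integer -> Base16 string -> binary blob.
def base58_decode(str)
  int = str.chars.reduce(0) { |acc, c| acc * 58 + ALPHABET.index(c) }
  hex = int.to_s(16)
  hex = "0" + hex if hex.length.odd?  # pack("H*") needs whole bytes
  [hex].pack("H*")
end

puts base58_decode("2g").inspect  # => "a"
```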

My tests passed, and everything worked great, so I went on to start implementing an unrelated portion of the crypto-currency.

But every now and then (around 5% of the time) my tests, which verified that signatures and encrypted chunks would Base58 encode and properly propagate to and from the blockchain, would fail.

Then I’d immediately re-run them and they’d pass. So naturally I thought “race condition” and moved on; my code worked just fine a second later, so it couldn’t possibly be anything else. I had always assumed that RSpec would do the right thing and run tests in parallel, since tests are supposed to have no side effects. My tests were also pretty slow, because Ruby’s OpenSSL refuses certain operations on keys shorter than 1024 bits for security reasons, so to speed things up I used variables shared from spec to spec, instead of the normal:

let(:signed_blob) { long_running_process }

So I figured it was possible I was inadvertently opening the door to a race condition somewhere. I should note at this point that when I mentioned this to Justin, he shook his head and informed me that it probably wasn’t a race condition: RSpec does not run tests in parallel by default.

I continued letting these tests randomly fail until I started on the file-upload code, which splits large files into smaller chunks and then puts them back together. When those tests started failing too, I knew the problem wasn’t OpenSSL. With a now reliably failing test, I started diving into where my code was breaking down.

The problem with debugging binary data is that you can’t just “see” what you are doing: printing it to the terminal floods you with whitespace and unreadable characters. Frustrated, I finally decided I was going to compare the binary before and after, bit by bit. To do this I changed my hex-based packing code to binary, so I could print the actual ones and zeros and compare them directly (Achievement Unlocked: Literally Bit by Bit!).
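For reference, `String#unpack("B*")` is one way to get at the raw bits in Ruby (a sketch, not necessarily the exact code I used):

```ruby
# Dump a blob as literal ones and zeros, most significant bit first.
def bits(blob)
  blob.unpack("B*").first
end

puts bits("\x05\xF0".b)  # => "0000010111110000"
```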

I discovered the problem quite quickly. When you have binary data that starts with “0001111” and you represent it as an integer, it is equivalent to “1111”, since leading zeros are meaningless on a number line. But without those leading zeros, it won’t reassemble into the same data.

I was losing all my leading zeros! The fix was really easy: add one bit at the start before encoding, then ignore that first bit when decoding.
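Here is the failure and the sentinel fix side by side; the sketch uses a whole sentinel byte rather than the single bit mentioned above, purely to keep the Ruby short, and the names are hypothetical:

```ruby
# The failure: integer conversion drops leading zeros, so these two
# different blobs collapse to the same number.
a = "\x00\x01".unpack("H*").first.to_i(16)  # => 1
b = "\x01".unpack("H*").first.to_i(16)      # => 1

# The fix: prepend a sentinel before converting, strip it after.
def to_int(blob)
  ("\x01".b + blob).unpack("H*").first.to_i(16)
end

def from_int(int)
  hex = int.to_s(16)
  hex = "0" + hex if hex.length.odd?
  [hex].pack("H*")[1..-1]  # drop the sentinel byte
end

# The leading zero bytes now survive the round trip.
puts from_int(to_int("\x00\x00\x01".b)).unpack("B*").first
```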

It also explains why my initial build was failing roughly 5% of the time: because I was representing the binary data as hexadecimal before converting it into an integer, the first hexadecimal digit was a “0” about 1/16 of the time.

Heisenbugs are hard, but next time I encounter one I’m going to resist thinking “race condition.”

Designing for the color blind

Color blindness affects 7 to 10 percent of the male population. That is a large enough demographic that you should spend some time making sure your designs are still usable by those affected.

The most common types of color blindness are protanopia (can’t see red) and deuteranopia (can’t see green). About one third of the color-blind are completely blind to red or green. The rest generally have a milder form of color blindness.

What can you do to improve your designs? Here are a few basic things:

Blue is the safest color. If you make your link color red or green, or even just tint it, you need to underline it as well or people may not realize it is a link. Take a look at this screenshot from CrunchBase. It is not clear at all that you can click on the names of people.

Invision’s primary calls to action, normally pink, fall completely flat and fade into the background. The signup button in the top right corner is practically invisible.

Google, on the other hand, is largely unaffected. The primary link color stays the same, and the URL in green underneath turns gray and matches the link to its right.

Don’t depend on red and green to indicate status or error state

Red and green may as well be the same color, so don’t depend on either. Look at the Devices tab in Google Chrome. Which port is forwarding correctly and which is failing?

If you want to use red and green make sure to combine them with a label or icon so color is not the only indication of what is going on.

Be careful when using color legends

Frequently chart legends depend on the user matching the color of a bar or line with a color swatch next to a label. Colors like purple and blue will become eerily similar when viewed by a color-blind person.

Unsure? Simulate color blindness in Photoshop

Photoshop provides a built-in way to preview what designs look like to somebody with protanopia or deuteranopia. Go to View > Proof Setup, select the color blindness type you want to simulate, then toggle proof colors with ⌘Y.

Breaking into Data

Recently someone I used to work with at 500px reached out to meet for coffee.

I want to get into data.

The allure of web applications had started waning, and he was looking for something more esoteric and challenging. We talked for about an hour and a half, and by the end of it I’d emailed him several links for him to follow up on. Only now, almost a month later, did I realize that this information could be useful to more than the one person I shared it with.

Pick a direction

Data Science is an awkward title. The field is based on programming, but it draws on many influences depending on where you specialize: visualizations and charting (design), predictive analytics (statistics), machine learning (statistics and computer science), analytics and report writing (business accounting), data warehousing (database administration), and multi-objective optimization (economics / systems design engineering).

And in most of these sub-specialties we have the general data munging that removes the Zuckerbergs and matches “Zoë Keating, San Francisco, California” with “Zoe Keating, SF, CA” (this is literally half of what we do).
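As a toy illustration of that matching problem (the alias table and normalization rules here are invented for the example, not any real library):

```ruby
# Fold record strings onto a canonical key so that accent, case, and
# abbreviation variants of the same person/place compare equal.
ALIASES = { "san francisco" => "sf", "california" => "ca" }

def match_key(record)
  record.downcase
        .unicode_normalize(:nfkd).gsub(/\p{Mn}/, "")  # "ë" -> "e"
        .split(",").map(&:strip)
        .map { |part| ALIASES.fetch(part, part) }
        .join(",")
end

puts match_key("Zoë Keating, San Francisco, California")
puts match_key("Zoe Keating, SF, CA")
# both produce the same key
```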

Disregard certifications, acquire data

No one really gives a shit if you’re certified. No one worth working for anyway.

The best way to get into data is start and finish projects that are similar to the type of work you want to tackle at your ideal day job. Here is a big list of open datasets that you can peruse to get inspiration. Also, try to stick to Python for backend programming and JavaScript for visualizations; there are alternatives, but unless you’re angling to work at a hedge fund, this is what the market wants right now and for the foreseeable future.

Don’t think that you need the perfect open dataset; sometimes the right non-open dataset is just as easy to acquire. For example, if you want to analyze and optimize, say, startup growth metrics, you can read up on the subject and then convince some local startups to give you a crack at analyzing their data. Anonymized reports on startup growth from real startups look a whole lot better in a portfolio than nothing at all. Furthermore, if you do a really stunning job, you may not even need to interview; a startup may give you an offer on the spot.

But what startup is just going to give me their data?!

Most of them. They wish they had someone looking at their data.

What about other sub-fields?

Creating visualizations is an easy way to start. Take something that is already visualized poorly and revisualize it. Contact the original author and get them to use your version instead. Check out Orange, IPython, and d3. Of course, Excel works great too. Stay away from pie charts and infographics that lack substance. If you want to make something truly amazing, consider learning Photoshop or Illustrator and finishing the charts and graphics by hand after rendering them somewhere else. If it isn’t beautiful, you’re wasting your life.

Data warehousing is a bit trickier. Your best bet is to find a number of separate data sources that really should be combined, combine them, and build out an API for anyone to use. Examples here are hard because it is highly situation-dependent, but perhaps write a scraper that consumes voting data from multiple municipalities and provinces in your country and provides a uniform API for people to query. Whatever you do, be sure to read The Data Warehouse Toolkit; I did, and it changed how I look at organizing data for reporting (thrilling stuff, I know). I will warn you that data warehousing is extremely boring, but it is also extremely easy and pays well with incredible job security. Be the Beamter of the tech world.

Predictive analytics: sports, stocks / options / derivatives, elections. Also, Kaggle competitions, to a limited extent. If you really want to own this field it will take more work than the others, but it is the easiest way to become a millionaire by the time you’re 27; you’ll just have to work for a soul-crushing hedge fund or private equity firm “making book”.

Machine learning: Tons of possibilities here: classifiers, search, recommendation engines, image analyzers, chatbots. Start by getting up to speed on some of the basics here. Check out a couple of cool libraries here, here, here, and even here. Read papers coming out of Google / Facebook / U of Toronto / MIT / Stanford / Italian universities. It will be hard at first, but if you already have a background in science, engineering, or math, you’ll be OK. Next, hit up some open datasets and try to think of what people want to know or find that they currently can’t; that is essentially what machine learning is for. Also, be sure you understand ROC curves, because they are probably how you’ll verify that your predictor is operating optimally (this is how you iterate on an ML project). Be sure to hit up Kaggle and try your hand at some of the competitions, or just analyze Wikipedia and build a website classifier. Pro tip: don’t crawl Wikipedia; there are tarballs available for data hackers.

Multi-objective optimization: I’ve only used these in the construction industry, so my exposure to them is quite narrow. I understand that economics and government / environmental policy are involved here too; perhaps investigate a pollution angle? If anyone has suggestions here, I’d appreciate them.


Finishing is the hardest thing in the world, but once you get a couple of polished things out there and start telling people about them, success will start snowballing.

If this post inspired you to start, finish, and polish something, please let me know, and I’ll be sure to feature your work here. If you have any questions or are looking for more specific advice please reach out, I love talking to interesting people.

EmberUI Alpha Release

Nine months after we first wrote about it, liftoff!

EmberUI has been a labor of love and it is finally ready to see the light of day. Please use it, fork it, patch it. We especially need help ironing out bugs in Windows and Linux.

We have tried to set a new standard of quality with EmberUI. Throughout the design and build process we have been governed by one rule: the user experience trumps all. No matter how small the detail, if it made the act of using a component better, it had to go in. It is, after all, the small details that create the elusive “native” feeling.

Of course, we didn’t want to sacrifice the ease with which you can develop amazing applications either. A lot of thought has gone into the API and we will continue to iterate on it until we hit 1.0.

Play around with a live version on the EmberUI website or the JS Bin Sandbox.

Key Features

EmberUI is a polished product that you really need to use to appreciate, but here’s some of the things we are most proud of:

Error handling. Form elements are smartly validated as the user interacts with them, and errors are displayed inline. Combine with a library such as ember-validations and get all your error handling done with almost no work.

Customization. EmberUI was built to look great right out of the box. Obviously each application has unique requirements so customization is paramount. Everything can be changed or extended to fit your product: component sizes, aesthetic styles, animations. Everything, and with a nice API to boot!

Animations. The EmberUI philosophy is that anything that changes visually needs to be animated. We use a combination of CSS and JS animations and transitions to achieve a very fluid experience on desktop and mobile.

Keyboard support. You can operate all the components using only a keyboard. We paid special attention to the experience of tabbing through the components and making sure focus is always where you expect it to be.

The road to 1.0.0

When we release 1.0 we will lock down the API, but before we get there, there are a few things that have to happen first.

More components and mixins. Adding a masked input, custom scrollbars, Stripe integration, and possibly a validations library is in the cards. We want to make sure that once we hit 1.0 the component library covers all the basics of building a modern application.

Better mobile support. While most components will work just fine on mobile right now, we want to really push the mobile experience forward and make the components feel like native mobile components. For example, the select window should consume the entire viewport on mobile.

More polish. There are a few known issues that need to be addressed. The biggest one is how we disable page scrolling when a modal is open: right now it causes the content to shift around on Windows, which is not acceptable. The calendar components also still lack full keyboard support.

Full WAI-ARIA support. This is a larger goal and very important to ensure that EmberUI can be used by everybody. By the time 1.0 is released we need full WAI-ARIA support for all components.

How can you help

The biggest thing right now is to get people using EmberUI so we can get feedback on the API. Once 1.0 hits, the API will be locked down, but until then we want to make sure we create the best possible interface.

Code refactoring would also be appreciated. Our goal was to get a working version first, and then iterate on it after that to improve it. Take a look at the repository and please send us pull requests.

Take a look at the project README for more about how to send us a pull request and syntax conventions.

EmberUI Website
Github Repo
JS Bin Sandbox


Justin, Jaco, and I want to build software of exceptional quality and reliability, build products that have an impact, and surround ourselves with amazing people. Each one of us has our own primary focus, but all three of these goals must exist in superposition in order for any of them to manifest.

I didn’t even realize this until we’d started talking about the place we wanted to build; but after we talked about our primary goals (mine is the people), we realized that all of these things were necessary for any of them to succeed.

Great people work on things that matter. Great people build with exceptional talent and drive. Great people need to be around great people. I frame everything around the people, since that’s my focus, but this could all be reformulated to be about the product or the quality of work.

Here’s how we’re going to do it:

  1. Partners only. After a 6-to-18-month probation, you either make partner or you move on. Partners vote. Partners split year-end dividends. This stops us from holding onto nice but ineffective people, or from hiring ineffective support staff. It is also the only way to get truly outstanding people to work with you in the long run.

  2. Minimize the amount of time-billable work. Per-project billing encourages efficiency and aligns incentives because it motivates us to finish the project and to focus on core value. Yes, there are risks, but when you bill by the hour you have to give out an estimate anyway. What are you going to do? Invoice double the estimate? Besides, great software isn’t about the feature set, it is about the polish. How much skill and effort went into making Instagram or Slack? How complicated were those apps on paper?

  3. Open source projects. True, contributing to open source projects decreases the amount of direct top-line revenue, but without it skills hemorrhage and you lose your best people.

  4. Encourage Vennetians to build and ship side projects and startups. Losing one or two people pales in comparison to the good vibes and press associated with incubating world-changing stuff.

Let’s build the best damn team, software, and world. Say hello on email or twitter.