Civet

The Spice of Life

About Me

Polyglot, Problem Solver, and into improving systemic efficiencies.


Weather Github Downtime by Pushing to All the Places


Want to not care whether Github’s up or down at the moment?


Have faulty servers of your own? Or tend to anger people with bot armies?


Want to push to two Git Repos via a single command?


Want to do it easily via a simple .git edit?

My use case is pushing code that resides on Github as well as on Bitbucket. I want it available in both remote locations in case one is unavailable.

Here’s how you do it:

Add the two remotes as normal

git remote add origin GIT_LINK_TO_REPO

git remote add bitbucket GIT_LINK_TO_REPO

In the local repo, edit .git/config and find the entries for origin and bitbucket:

[remote "origin"]
    url = git@github.com:zph/zph.git
    fetch = +refs/heads/*:refs/remotes/origin/*

[remote "bitbucket"]
    url = ssh://git@bitbucket.org/zph/zph.git
    fetch = +refs/heads/*:refs/remotes/bitbucket/*

Add a new entry that combines both URLs:

[remote "all"]
    url = git@github.com:zph/zph.git
    url = ssh://git@bitbucket.org/zph/zph.git

Now when pushing code:

`git push all`
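If you'd rather not hand-edit the config file, the same multi-URL remote can be built from the command line with git config --add; here's a sketch in a throwaway repo, reusing the example URLs from above:

```shell
# Demo in a throwaway repo: build the "all" remote entirely from the
# CLI; each --add appends another url line to [remote "all"]
cd "$(mktemp -d)" && git init -q
git config --add remote.all.url git@github.com:zph/zph.git
git config --add remote.all.url ssh://git@bitbucket.org/zph/zph.git
git config --get-all remote.all.url   # shows both URLs
```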

Credit for this solution: http://stackoverflow.com/questions/849308/pull-push-from-multiple-remote-locations?lq=1

Git Archaeology - Find the Secrets of Those Who Came Before


“Those who don’t know their Git history are doomed to repeat its mistakes” – Yammy the Programming Kitten

Setup

You’re working for Cato Consulting.

And, you’ve been lent out to a new client on an unfamiliar project. It’s a codebase that you haven’t touched before. In the sense that you’ve never worked on it, you could kind of call it “Greenfield”… but from where you sit, the project looks like “Brownfield with withered vestiges of healthy code.” The upcoming two weeks aren’t going to be easy.

So it’s a new day and you have a new story from the client about a text input box that’s no longer working. It should behave as a search field, but instead it does nothing. Sitting there pondering the solution, you realize you also do nothing useful before coffee, and decide to increase your caffeine level.

Much better, there can be coding once the caffeine hits.

How Do You Approach the Problem

First, you need the text field. Perhaps it will have a unique id or class.

Bingo, you learn that the field has an id of “#super_search_box”.

Your other big clue in the story is that this feature was working well up until a few weeks ago. The client doesn’t have a schedule of exactly when it went bad, but that gives you a starting point. You can reasonably estimate that the code from 4 weeks ago was working.

Let’s dig into this Git Archaeology! We’ll dirty our hands but our spirits will be clean and we’ll sleep well at night knowing that our problem domains were well understood when implementing a fix.

Git Bisect

Our first useful incantation is git bisect. Git bisect helps when a feature that used to work has stopped working and you need to find the commit that broke it.

The basic workflow looks like this:

  • Starting on the master branch at the newest commit (which happens to be broken), run git bisect start.
  • Then identify a good and a bad commit: git checkout SHA_from_4_weeks_ago, then manually check that the search box works (or run the associated automated tests).
  • If it’s good, mark it via git bisect good; otherwise git bisect bad.
  • Then find an opposite example (i.e. if you found a bad one, find a good working commit) and mark it via git bisect [good || bad].
  • Then git bisect’s magical excavation will begin.

Git will do a binary search, and each step of the way, you will enter git bisect [bad || good] until git bisect identifies the earliest commit that broke that feature.

After that process is over, you’ll have the bad commit where a breaking change was introduced to that feature.

Next, it’s time to understand why the code author would do such a dastardly thing!

Run a git show to see the full content of that commit, both the message and the code differences.

(Note that you can see which commits were marked as good/bad by checking git log --oneline)

Project Tracker

If there are more than 2 people on the project, hopefully there’s a central project management tool like Pivotal Tracker or Jira. Ever hopeful, you expect to see a story number or issue number listed in the git commit. Let’s see what it looks like:

[CTL] [#3443450] Switched JQuery selector for search box

Changes selector from '#super_search_box' to '.snazzy_search'.

diff monolithic_everything.coffee
- '#super_search_box'
+ '.snazzy_search'

Not a very helpful message, but at least it contains a story id. Looking that up in the Project Tracker shows that this was a chore to refactor some code & apparently it wasn’t done with sufficient care or any automated behavioral tests. Bummer :(.

Advanced Git Searching

Since we’re left underwhelmed by the currently available info, it would be nice to know when the markup relating to ‘.snazzy_search’ was modified:

git log -S 'snazzy_search'

This searches the history for commits whose diffs add or remove the string snazzy_search (the so-called pickaxe search), rather than just searching commit messages.
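A few related variations are worth knowing; here’s a sketch in a toy repo (file names and string invented for the demo):

```shell
# Toy repo: one commit without the string, one commit that adds it
cd "$(mktemp -d)" && git init -q
git config user.email demo@example.com && git config user.name demo
echo 'placeholder' > app.coffee && git add . && git commit -qm 'first'
echo ".snazzy_search" >> app.coffee && git add . && git commit -qm 'add snazzy_search'

git log -S 'snazzy_search' --oneline   # commits that add/remove the string
git log -G 'snazzy_search' --oneline   # commits whose diff matches a regex
```

Adding -p to either command also shows each matching commit’s patch.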

Knowing when the markup was changed, along with knowing the exact commit that introduced the regression, you’re set to implement a fix. By doing a bit of digging, the underlying domain becomes clear & the possibility of introducing your own regression decreases.

Go Forth And Dig

Check out my .gitconfig and my .zsh.d/git.zsh files here for some helpful shortcuts for everyday git behaviors.

Also, look into git blame via Fugitive.vim or Magit (on Emacs). Git blame’s a great way to find out if the developer who introduced the changes is still around and can give you a bit of context on the changes.

If you’re just getting started as a Git Archaeologist and want some help, or if you’re an experienced Git Excavator and want to bounce ideas off me, ping me on Twitter at @_ZPH.

For further reading, check out these two great blog posts: http://ruturaj.net/git-bisect-tutorial/ and http://mislav.uniqpath.com/2014/02/hidden-documentation/

Conference Advice: How to Meet Devs and Influence Ppl


Want to know the secret about tech conferences? It’s not about the talks. It’s not about the venue.

It’s about all the people who are there!

While attending @RbOnAles, I was asked by @eliserius about how I knew so many people there. This article is a summary of that discussion.

First off, go to a couple conferences each year. It’s best if your employer will pay all or part of the way. But if they won’t, it’s your responsibility to make it to the conferences. They’re an investment in your career’s future.

Now that you’re attending 3+ regional tech conferences each year, what steps can you take to make the most of it? Well, remember that you’re attending to meet the other attendees. By virtue of them also attending the conference, they either work at decent companies or they prioritize conference attendance.

So meet these folks! It’s not hard*. (It’s terrifying at first, but push past that & make it happen). I’m an introverted person and know a fair number of Ruby conference attendees, but those first few minutes of awkwardly interacting are just that: uncomfortable. That’s fine, move the conversation into tech stuff. Chat about Service Oriented Architecture, argue about Dependency Injection, take a stand and say that Ruby isn’t a dying language.

Remember, these folks who are at the conference also want to meet people.

What specific steps do I recommend for breaking the ice or stacking your social deck at the conference?

  • Don’t attend conferences with workmates. Or if you’re at conference, avoid them. There are 363 other days to bs with ‘em. You’ve only got 24 hrs * 3 days for the conference… make the most of it and meet new people.
  • Set a goal of meeting X new people at conference. Let’s consider “meeting” to be defined as exchanging names and at least one memorable detail about the other party. Maybe they juggle, write assembly, breed horses. Folks all have a story, ask “What brought you to the conference”. Or “What tech stuff are you playing with right now?”. Or start a heated debate about which $EDITOR to use. If you’re stumped about approaching someone, just be candid, “I saw you standing there & realized we hadn’t met yet: I’m Zander”. Most of the time that’s enough to kick off a conversation, though you might choose to use your own name in the prior quotation.
  • Take your meals with a different new group each time. With a 2 day conference, that gives you 4 opportunities to meet groups of 4+ people each time. Some conferences actively organize this kind of activity. Big shoutout to @steelcityruby for doing this stuff :). If your conference isn’t doing this, post on twitter that you’re organizing a group for Italian/Thai/Indian/Bar food and see who bites.
  • While out having food with others, buy a meal or drink for some folks you don’t know well. Do it for the nice factor; the icing on the cake is that you’ll seem even more awesome than you already are.
  • Bring something to share to the conference and then offer it up to people. Could be a cardgame, boardgame, bourbon, soda water, LAN party… just get yourself out there.
  • Volunteer to carpool from airport to conference. Or from major city to tiny town where conference exists.
  • Learn 2 love Twitter. Start or revive your account. Twitter is the lifeblood of many Ruby conferences. Start up Tweetdeck or Tweetbot and add a column dedicated to the conference hashtag. That way you’ll be aware of the pulse of things.
  • Post Twitter messages with hashtag. For example at this last conference, I had the pleasure of doing breakfast with my favorite Aussie (@ryanbigg) by virtue of an early morning Tweet about meeting for breakfast. It’ll also force you to meet people who you might not otherwise get to meet.
  • Stay at the conference hotel & possibly split a room w/ someone you don’t know. Put a call out among friends to find someone to split room. It’ll be awesome. At least it has been for me thus far. Staying at conference hotel itself is wonderful because you’ll be in the middle of the action, late night philosophizing on OOP vs. Functional, etc.
  • Stick around for the evening events each day. Also stay for the workshops that often take place after the conference ends. At Ruby On Ales, it was a MiniTest workshop by @blowmage. It was awesome and I got to meet a few more people before leaving.
  • Still stumped for how to approach people at conference? See if you can recognize anyone at conference from Twitter or Github profiles. Then go up & thank them for the Open Source work they do. It’ll give them chills :). (This advice brought to you by @sarahmei who you should say hi to when you see her at a conference).
  • Be a speaker: you literally have to talk to people. At end of talk, invite them to stop you and chat, because you’re shy and want to meet people. Ask for help, it’s cool :).
  • Final tip: I’m terrified at the beginning of each conversation. Once I warm up, it’s groovy, but until then it’s rough. Get that momentum going, force yourself out of the comfort zone, and see what happens. Soon enough, those conferences will feel like reunions filled with friends.

Also, I’m terrified of starting conversations… so be my guest and come up & tell me you read this post. That’ll break the ice!

PS – I have a new friend from @RbOnAles who’s looking for a remote role (or in SF) either as QA or Jr. Ruby Developer. DM me on Twitter if you know of options.

Pathogen.vim Without the Submodules: Use Infect


Over the weekend I finally admitted to myself that I hate submodules.

But they’re a keystone to one of my primary development tools: Vim. In order to use Vim, I use the Pathogen.vim plugin by @tpope. In order to use Pathogen, you normally use submodules in the .vim/bundle/ folder.

But submodules are the work of the devil.

I tried out the Vundle plugin and was seeing much longer load times for vim.

So I asked around on Twitter and @jwieringa advised me to check out ‘infect’ by @crsexton.

And infect is awesome! It works with Pathogen to give it a declarative style for plugins. An example is:

"=bundle tpope/vim-pathogen
"=bundle tpope/vim-sensible
source ~/.vim/bundle/vim-pathogen/autoload/pathogen.vim
execute pathogen#incubate()

"=bundle mileszs/ack.vim
"=bundle vim-scripts/AutoTag
"=bundle kien/ctrlp.vim
"=bundle Raimondi/delimitMate
"=bundle sethbc/fuzzyfinder_textmate
"=bundle tpope/gem-ctags
"=bundle gregsexton/gitv
"=bundle sjl/gundo.vim
"=bundle tpope/vim-vinegar
"=bundle jnwhiteh/vim-golang
"=bundle wting/rust.vim

call pathogen#helptags()
set nocompatible      " We're running Vim, not Vi!
syntax on             " Enable syntax highlighting
filetype on           " Enable filetype detection
filetype indent on    " Enable filetype-specific indenting
filetype plugin on    " Enable filetype-specific plugins

Use it. Love it. Don’t look back!

If you want faster downloads with ‘infect’ try this unofficial fork of the standalone: https://github.com/zph/zph/blob/master/home/bin/infect. When I have time, I’ll work w/ @crsexton to get this added to ‘infect’.

Solving Issues With RVM on BSD


RVM installation went poorly on FreeBSD.

The ca-certificates weren’t up to date according to the install script.

Really, the ca-certificates weren’t in the right location for RVM’s curl install script.

These commands as root fixed it:

mkdir -p /usr/local/opt/curl-ca-bundle/share
ln -s /usr/local/share/certs/ca-root-nss.crt /usr/local/opt/curl-ca-bundle/share/ca-bundle.crt

Finding Myself in BSD


Blame smartOS not reading my SATA controller.

Blame Linux for not having first class support for ZFS (where my data is held).

Blame @canadiancreed for giving me a way out of the quandary.

The backstory is that I moved all of my backups to two ZFS pools a few years ago. I was running ZFSonLinux and it generally worked…. except when I rebooted the server and had to force import the pools.

Fast forward to that server being replaced by a workstation (i7 Haswell, 240GB Sata3 SSD, and 24GB RAM). I tried smartOS by Joyent and ran into issues with my SATA controller not being recognized. It’s a shame given how awesome the smartOS vm administration is. I mean vmadm and imgadm are light years ahead of Docker.

Since smartOS refused to recognize 3 of my 8 drives, I ran back to Linux. The good news: Linux recognized the SATA controller. The bad news: Linux couldn’t import the pools. Frankly, Linux had a hell of a time building the ZFS on Linux kernel modules. I managed to piece together a few clues from the ZFSonLinux issues on Github. In fact, thanks to @dajhorn for supplying answers on that Github issue, which allowed me to build the kernel modules.

This sounds promising, doesn’t it? It’s not: it was a horrorshow. One of the two zpools, the mission-critical financial records, imported nicely. The other, semi-critical zpool was shot and refused to import under Linux.

I spent a whole evening banging my head against this issue. If it hadn’t been for Chris Reed I wouldn’t have hit upon the solution.

His recommendation was to try FreeBSD… so I did. And it recognized the SATA controller and also imported the zpools cleanly! So after a dry run w/ FreeBSD, I installed PC-BSD, which is a desktop variant based on FreeBSD. Think of it as the Ubuntu of the BSD world. And hell yeah, it’s all working :).

So far, I’m really liking it. Replace ‘aptitude’ with ‘pkg’ and it’s pretty similar. Except, PC-BSD is working where Linux & ZFS were a hassle.

Using Null Terminators in Linux/OSX


I ran into an issue with using xargs with rm -rf.

This could be dicey, so get your safety hats on.

The issue was that the filename included an apostrophe. So when trying to do a simple command such as tree -fi | grep conflicted | xargs rm -f '{}', I received an error about having an unterminated quote in the parameter.

Apparently, some versions of xargs allow you to specify a delimiter with a -d but the copy on my Mac didn’t have such a flag.

Instead, I learned that egrep --null will use a null character as the divider between matches. So the following solved my problem:

tree -fi | egrep --null conflicted | xargs -0 rm -f '{}'

Let’s break that command set down:

  • tree -fi is a trick I recently learned from my friend @olleolleolle. It prints out the local tree of files and the -fi prints out the whole filename (including directories).
  • egrep --null conflicted is where the magic starts. The --null flag tells egrep to separate matches with a null character.
  • xargs -0 rm -f '{}' this tells xargs that the null character is the divider and to remove each filename that comes through the pipe.
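Here’s a self-contained sketch of the same NUL-separated idea using find -print0 (a portable alternative, since egrep --null behaves differently across grep builds; the filenames are invented):

```shell
# Create files whose names contain an apostrophe, then delete the
# "conflicted" ones with NUL-separated plumbing so quoting never breaks
dir=$(mktemp -d)
touch "$dir/notes (conflicted copy)'s.txt" "$dir/keep.txt"
find "$dir" -type f -name '*conflicted*' -print0 | xargs -0 rm -f
ls "$dir"   # only keep.txt remains
```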

This is yet another example of why I’m impressed with the commandline. Simple little tools that can be chained together with symbiotic behavior.

Automating Email With Ruby


Last Friday was the kind of day where I dropped into a Pry repl in order to bang out an automation script.

The challenge was: automate the retrieval of specific emails that contained receipts, wrangle them into sane data structures, and dump them into a spreadsheet with both daily totals and an absolute total.

An example email looked like this:

 Receipt #9999999999999


 County:Example, FL      Date: 2014-1-27

 Name:Jill Doe
 Credit Card #XXXXXXXXXXXX9999
 Authorization code:999999

 No.of Pages viewed:3
 Total Amount: $ 3.00

 Thank you for visiting http://www.example.com

First step was to build an email parser for this format. I tried to keep it tolerant of future changes to the email generation scheme.

The general steps involved are:

1. Split the body linewise.
2. Create a method for each piece of content to extract.
3. From the collection of lines, grep for the line with appropriate unique text.
4. Then in that line, use a regex to find the specific portion of data.

The full code for that module is listed below.

require 'ostruct'

module Email
  class Parser
    attr_accessor :email, :content

    def initialize(email)
      @raw_content = email.to_s
      @content = @raw_content.split("\n").map(&:strip)
    end

    def receipt
      content.grep(/receipt/i).first[/\d+/]
    end

    def county_line
      @county_line ||= content.grep(/county.*date/i)[0]
                              .split(/\W{3,}/)
                              .map { |i| Hash[*i.split(':').map(&:strip)] }
    end

    def county
      county_line.first["County"]
    end

    def date
      county_line[1]["Date"]
    end

    def name
      content.grep(/name/i).first.split(':')[1]
    end

    def credit_card_number
      content.grep(/credit card/i).first[/#.*$/]
    end

    def authorization_code
      content.grep(/authorization code/i).first.split(':')[1]
    end

    def pages_viewed
      content.grep(/pages viewed/i).first[/\d+/].strip
    rescue NoMethodError => e
      warn "#{e.message} for #{content.inspect}"
    end

    def total_amount
      content.grep(/total amount/i).first
                                   .split(':')[1]
                                   .gsub(/\$/, '')
                                   .strip
    end

    def website
      content.grep(/visiting http/i).first[/http.*$/i]
    end

    def all
      ParsingPresenter.new(
        county: county,
        date: date,
        name: name,
        credit_card_number: credit_card_number,
        authorization_code: authorization_code,
        pages_viewed: pages_viewed,
        total_amount: total_amount,
        website: website,
        receipt: receipt
      )
    end

    def self.all(msg)
      new(msg).all
    end
  end

  ParsingPresenter = Class.new(OpenStruct)
end

With that in order, I set about using the awesome ruby-gmail gem for retrieving said emails. Note: after completing this project, I learned of a continuation of the ruby-gmail gem called gmail. All the code in these examples is specific to the older incarnation of the gem.

ruby-gmail has a simple interface for retrieving messages between date ranges. So I setup a specific Gmail filter for emails from a certain sender that included the text ‘receipt’.

There’s nothing too fancy in this code, but it’s important to set @gmail.peek = true so that programmatically viewed emails aren’t marked ‘read’. Also of note is the use of Dotenv for setting secret values without risking them in a git repo.
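Dotenv reads KEY=VALUE pairs from a .env file in the project root; a minimal sketch (the values are placeholders, and the file should be listed in .gitignore):

```shell
# .env -- never commit this file
GMAIL_USER=someone@example.com
GMAIL_PASSWORD=app-specific-password
```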

require 'gmail'   # the older ruby-gmail gem described above
require 'ostruct'

class Retriever
  USER = ENV['GMAIL_USER']
  PASSWORD = ENV['GMAIL_PASSWORD']
  LABEL = 'Receipts'

  attr_accessor :user, :password, :gmail, :messages

  def initialize(user = USER, password = PASSWORD)
    @user = user
    @password = password
    @gmail = Gmail.new(@user, @password)
    @gmail.peek = true
  end

  def message_count_in_range(start_date, end_date)
    # dates as '2010-02-10'
    gmail.inbox
         .count(:after => start_date, :before => end_date)
  end

  def emails_in_range(start_date, end_date, label = LABEL)
    # dates as '2010-02-10'
    gmail.mailbox(label)
         .emails(:after => start_date, :before => end_date)
  end

  def message_presenters_in_range(start_date, end_date)
    msgs = emails_in_range(start_date, end_date)
    @messages = msgs.map do |msg|
      Presenter.present(msg)
    end
  end
end

class Presenter
  attr_accessor :email

  def initialize(msg)
    @email = msg
  end

  def body
    email.body
  end

  def date
    email.date.to_date
  end

  def date_string
    date.to_s
  end

  def self.present(msg)
    presenter = new(msg)
    Message.new(date: presenter.date_string, body: presenter.body)
  end
end

Message = Class.new(OpenStruct)

The last task in building this tool was dumping the data to a CSV with totals by date as well as a grand total. The process is simple, pass in a collection of messages and iterate through them by date, add a subtotal per date, then add a final row with grand total.

I like to break out rows into their own methods when possible. In fact, were I to rewrite this code, the message row would have its own method to clean up the inner loop of messages_by_date(). Another trick that helped for testing was to not generate a file on the filesystem. CSV takes either an open or a generate method. With generate it will pass the complete csv file out as the return value!

require 'csv'

class CSVBuilder
  attr_accessor :messages, :csv

  def initialize(messages)
    @messages = messages
  end

  def create
    @csv = CSV.generate do |csv|
      csv << header
      csv << empty_row
      uniq_dates.each do |date|
        messages_by_date(date).each do |msg|
          csv << [msg.date, msg.receipt, msg.authorization_code, msg.pages_viewed, msg.name, msg.credit_card_number, msg.total_amount]
        end
        csv << sum_totals_row(messages_by_date(date), "Subtotal for #{date}")
      end
      csv << empty_row
      csv << sum_totals_row(messages, "Total Amount")
    end
  end

  def header
    ['Date',
     'Receipt #',
     'Authorization Code',
     'Pages Viewed',
     'Name',
     'Credit Card #',
     'Total Amount']
  end

  def empty_row
    Array.new(header.count)
  end

  def messages_by_date(date)
    messages.select { |m| m.date == date }
  end

  def uniq_dates
    messages.map(&:date).uniq.sort
  end

  def sum_totals_row(msgs, label)
    rawsum = msgs.map { |m| m.total_amount.to_f }.inject(:+)
    sum = sprintf("%.2f", rawsum)
    [Array.new(header.count), label, sum].flatten
  end
end

I’m also quite proud of the variable name rawsum because it’s rawsome to design code that will save a couple hours every two weeks.

With a good ecosystem of libraries, it’s only a couple hours of work to write a re-usable tool that saves significant amounts of time. Hooray :).

Adventures With MRI 2.0.0 and Zlib: A Story of Malformed Gzips


It started off as a casual inquiry on Twitter, which led to my friend @geopet posting the Minimum Viable Demo as a Gist.

And it was interesting.

What we found out was that the Ruby open-uri library would make calls to an external API (Wunderground) and throw a Zlib::DataError when run in MRI Ruby 2.0.0. The strange thing was that MRI 1.9.3 works perfectly fine. Same exact story when the GET Request comes from Net/HTTP instead of open-uri. But it succeeds on 2.0.0 when using the RestClient Gem, as documented by @injekt.

The Gloves Come Off

We dove into the source of the error and determined that it was thrown from net/http/response.rb:357. To better understand it, I sequentially placed binding.pry statements to trace where the error percolated to the surface: the call to @inflate.finish was where the Zlib::DataError surfaced.

I left the code at this point and posted my initial findings back to Geoff and left the project alone.

Today

Then a follow-up message appeared and it was time for more digging :).

I started by forking his Gist and pulling it down to my local computer. My first phase of troubleshooting was to try alternate tools, in order to see how they dump the HTTP response. Good ol’ curl came to the rescue and provided me with the results that I placed in curl_response.txt and curl_raw.txt. Notice the rather interesting artifact around line 12 on the RAW version that isn’t present in the alternate curl response.

Pulling in Net/HTTP

It felt like progress and I wanted a better way to tweak the net/http library. I prepended the local directory to Ruby’s LOAD_PATH and copied the net folder out from MRI’s lib directory. Having the Dir.pwd prepended to the path enabled me to make very convenient testing tweaks to the Ruby Standard Library without needing to alter my standard RVM install :).

Tapping the Sockets

With the net/http libs loaded from the local files, I was off to the races. I tapped into the internal workings by using the ‘sack’ utility for jumping directly into and editing ack results. With the addition of a strategically placed binding.pry, I was able to tap into the live socket info via a socket.read_all and write that out as a binary dump to socket_content.bin.

Reducing it to Elements

The last step in my troubleshooting was to create zlib_targeted.rb for isolating the zlib load issues from net/http. Since the underlying issue appears to be a malformed gzip returned from Wunderground’s API, zlib_targeted.rb removes net/http from the equation; its contents are in the full repo linked below.

Conclusion

Now we have a very narrowly tailored set of examples that dig into the exact errors, thanks to @geopet, myself, and @injekt.

For more info, see the comments in this Gist repo from @geopet: Initial Gist

Or my repo that includes the files described in this post: Full repo

I’m happy with how the troubleshooting has progressed and would like to see this issue resolved, whether the fault is a malformed response from Wunderground, intolerant behavior from MRI 2.0.0, or anything else.

Don’t Fear Pair Programming - a Guide to Starting


Pair Programming is becoming a big deal in the Ruby programming world: this guide will help you get started.

Pre-Reqs:

  • General familiarity with Ruby tools (Bundler, Gems, RVM/Rbenv)
  • Basic commandline comfort

What is Pairing?

In its simplest form, pair programming is where a pair of programmers work on a problem using the same computer.

Since I don’t live in a technology hub in America, pairing in the same physical location is challenging. Instead, it’s possible to replicate that experience with both parties in separate locations.

How Does it Work?

  • Set up a video call using Skype, Google Plus, or Twelephone.
  • Both partners connect to a shared machine such as a Virtual Private Server (VPS).
  • Each partner connects to a shared Tmux session.
  • Both individuals can jointly edit the same files, as if they were present at the same keyboard.

Setting It Up From Scratch

  • Sign up with a VPS provider (I’m currently very happy with DigitalOcean).
  • Boot up a basic 512MB RAM instance in the Linux flavor of your choice. I’ll use Ubuntu 12.04 x32 for this example.
  • Once the instance is booted, connect and set up basic sane defaults.
  • Install tmux and vim-nox using the package manager.
  • Install Ruby using RVM, Rbenv, or Chruby.
  • Install the Tweemux gem: gem install tweemux

Now that we’ve laid the groundwork, let’s work on making it available for a partner.
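Collected as a single provisioning sketch (assumptions: Ubuntu package names as above, the standard rvm.io installer; run as root on a throwaway VPS, not on anything you care about):

```shell
# Provision a fresh Ubuntu pairing box (sketch; requires root + network)
apt-get update
apt-get install -y tmux vim-nox
# Install RVM plus a current Ruby via the standard installer
curl -sSL https://get.rvm.io | bash -s stable --ruby
gem install tweemux
```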

Inviting a Pair

When ready to invite a pairing partner, we start by adding a unique user for them. For convenience, it’s best to add their username from Github.

adduser --disabled-password $PAIRNAME

Next we’ll use the Tweemux Gem from RKing to pull down the partner’s public key from Github, and add it to their ~/.ssh/authorized_keys.

tweemux hubkey $PAIRNAME

At this point in the process, that user can login to your server using the IP address, their Github username, and their matching private key.

ie - ssh $PAIRNAME@IP_ADDRESS_OF_SERVER

At this point, the host should fire up a shared Tmux session:

tmux -S /tmp/pair

And enable that socket to be world readable:

chmod 777 /tmp/pair

NOTE: Doing this on anything other than a bare server, or with someone you don’t trust, isn’t a secure or a good idea. Don’t do this on a production server or with sketchy folks!

Next, it’s time for the guest to join the shared Tmux session:

tmux -S /tmp/pair attach

And you’re both in the same Tmux session! The view, keyboard and such is all shared =).
