Thursday, December 31, 2009

Thanks For A Great #code2009

It has been an amazing year for both personal and professional code development.

It started with the inspiration to begin Project Flying Robot, continued with the prestige of presenting at LARubyConf, FutureRuby, TWTRCON, IgniteLA, 140 The Twitter Conference, RubyConf, and Conferencia Rails, and ended with the year-end fun of kicking off the ongoing #code2009 Twitter meme, which proved so popular that it spawned a couple of mashups and got picked up by Hacker News and uber-language blog Lambda the Ultimate.

In between were numerous meetups, hackfests, code jams, code dojos, pull requests, and casual codeslinging with friends. And the Maker Faire!

To everyone who welcomed me, listened to me, helped me, or taught me something, I am indeed grateful. Thank you. Let's do this 2010 thing right!

Monday, November 16, 2009

Flying Robot: World Tour 2009 Continues

As usual, no blog posts = a lot of other activity here at Flying Robot HQ. Among other personal stuff, my brother Damen Evans and I have been getting ready for the last public demos of @flyingrobot for 2009. And we are going out with style!

Later this week, we roll up to San Francisco to present at the prestigious RubyConf! Then, next week @flyingrobot and I will fly off to Madrid, Spain, to do our first European appearance at the awesome-looking Conferencia Rails.

Anyhow, if you have been waiting eagerly for more Flying Robot news and gadgets, be patient. We will be unveiling our mysterious cool new stuff in just a few days...

Saturday, October 10, 2009

PostgreSQL on Ubuntu on EC2: Backing It All Up

This post continues what I started with "PostgreSQL on Ubuntu on EC2: The Installation Guide". Once you have your PostgreSQL database server instance running, you will need to back up two different things: your database data, and the instance itself. The database data will be backed up using Elastic Block Storage (EBS) snapshots. Once we have the instance running the backups correctly, we will then create an Amazon Machine Image (AMI) that will allow you to launch a new instance to replace the database server in case it goes down.

Backing Up The Database
First, we need to connect to our database server instance via SSH using the ubuntu user.

We will need to install some dependencies to get our backup script to run:

sudo apt-get install build-essential
sudo apt-get install ruby1.8-dev
sudo apt-get install rubygems
sudo gem update --system


You will need to tweak RubyGems so that the update works correctly, as described here.

Now you can install Gemcutter, which is the new ultra cool repository for gems:

sudo gem install gemcutter
sudo gem tumble


Finally we are ready to install the Amazon EC2 rubygem:

sudo gem install amazon-ec2


Now we can create our backup script. Save this code into the ~/ directory under the name backup_database.rb. You will need to substitute your Amazon ACCESS_KEY_ID and SECRET_ACCESS_KEY, as well as enter the correct EBS volume ID for the DATABASE_VOLUME constant:

#!/usr/bin/env ruby

require 'rubygems'
require 'AWS'

ACCESS_KEY_ID = 'YOUR_ACCESS_KEY'
SECRET_ACCESS_KEY = 'YOUR_SECRET_ACCESS_KEY'
DATABASE_VOLUME = 'vol-XXXXXXXX'

puts "Starting database snapshot..."
ec2 = AWS::EC2::Base.new(:access_key_id => ACCESS_KEY_ID, :secret_access_key => SECRET_ACCESS_KEY)
ec2.create_snapshot(:volume_id => DATABASE_VOLUME)
puts "Database snapshot completed."


Due to the finicky way that Ruby runs as part of a cron job, we are best off creating a shell script that then runs the Ruby backup script. Save this code into the ~/ directory under the name backup_db.sh:

#!/bin/sh
cd /home/ubuntu
ruby /home/ubuntu/backup_database.rb


Don't forget to make the backup shell script executable:

chmod +x /home/ubuntu/backup_db.sh


Now we just need to configure this script to run as part of a cron job, so that the backups take place automatically. The crontab command brings up the list of configured cron tasks for the current user:

crontab -e


This example crontab entry runs the backup daily at midnight, but you may want it to run more frequently:

0 0 * * * /home/ubuntu/backup_db.sh
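For example, if you would rather snapshot every six hours, a crontab entry like this (same script, just a different schedule) would do it:

```
0 */6 * * * /home/ubuntu/backup_db.sh
```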


At this point, you should have a fully functional automated backup system. Verify after midnight that the script has run as you expect, by checking whether a new snapshot has been created, using Elasticfox or however you administer your EC2 instances.
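One thing to keep in mind: snapshots accumulate forever, and each one adds to your storage bill. As a sketch of what a cleanup step might look like, here is some pure-Ruby retention logic that picks out snapshot IDs older than a cutoff, given records shaped roughly like the EC2 describe-snapshots output (the method and field names here are illustrative, not part of any API):

```ruby
require 'time'

# Given describe-snapshots-style records, return the IDs of completed
# snapshots of our volume that are older than keep_days.
def expired_snapshot_ids(snapshots, volume_id, keep_days, now = Time.now)
  cutoff = now - (keep_days * 24 * 60 * 60)
  snapshots.select do |s|
    s[:volume_id] == volume_id &&
      s[:status] == 'completed' &&
      Time.parse(s[:start_time]) < cutoff
  end.map { |s| s[:snapshot_id] }
end

# Hypothetical sample data, in roughly the shape the EC2 API returns
snapshots = [
  { :snapshot_id => 'snap-aaaa1111', :volume_id => 'vol-XXXXXXXX',
    :status => 'completed', :start_time => '2009-10-01T00:00:05Z' },
  { :snapshot_id => 'snap-bbbb2222', :volume_id => 'vol-XXXXXXXX',
    :status => 'completed', :start_time => '2009-10-09T00:00:05Z' },
]

old = expired_snapshot_ids(snapshots, 'vol-XXXXXXXX', 7,
                           Time.parse('2009-10-10T12:00:00Z'))
puts old.inspect   # => ["snap-aaaa1111"]
```

You could then feed the returned IDs to something like the amazon-ec2 gem's delete_snapshot call from the same backup script.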

Creating The AMI
Creating the AMI to back up the entire database instance is pretty easy. First, you need to upload the PEM files. Remember, you are authenticating as the "ubuntu" user:

scp -i id_rsa-gsg-keypair pk-YOUR.pem cert-YOUR.pem ubuntu@domU-12-34-31-00-00-05.usma1.compute.amazonaws.com:


Use your SSH connection into the database server instance to copy the PEM files to the /mnt directory:

sudo cp /home/ubuntu/*.pem /mnt


Now create the bundle. Make sure you use your Amazon account number (without dashes) as the value for the -u parameter. This can take quite a while, so do not get impatient:

sudo ec2-bundle-vol -d /mnt -k /mnt/pk-YOUR.pem -c /mnt/cert-YOUR.pem -r i386 -u YOURUSERACCOUNTNUMBER


You can now upload the bundle to your Amazon S3 account, in preparation for making it available as an AMI. Use a versioned name for the -b parameter, which names the bundle:

sudo ec2-upload-bundle -b my-database-server-1.0-ami -m /mnt/image.manifest.xml -a YOUR_ACCESS_KEY -s YOUR_SECRET_ACCESS_KEY


The final step is to go back to your local machine and register the newly created bundle, making it available to be used to start a new instance:

ec2-register my-database-server-1.0-ami/image.manifest.xml


You can now launch a brand new database server instance based on this AMI, and it will be a clone of your existing database server. This is the procedure you would follow if you needed to restore your database server instance from backups.

Restoring Your Database Server From The Backups
In the case that something goes terribly, terribly wrong, you can get back to normal as follows:
- create a new EBS volume from your most recent snapshot backup
- start up a new server instance from your database AMI
- configure the new instance to use the volume created from the backup data
- switch your elastic IP to point to the new server, or update the references in your application to point to the new server

This concludes part 2 of the great PostgreSQL config post for the EC2 cloud. Hopefully it gives you a nice simple way to take the basic PostgreSQL instance that you got up and running on Ubuntu/EC2 using the directions in part 1, and add the confidence that comes from backed-up data and a completely reproducible configuration.

Sunday, September 13, 2009

Happy 200 Posts: My 10 Personal Favorites

I was shocked to discover this morning that this is to be my 200th blog post. Wow! It has been a good run so far since I restarted the Dead Programmer Society in 2006, and I really appreciate the awesome feedback and support that I have received from the community.

To commemorate this personal event, here is a list of my top 10 favorite posts, in no particular order:

1. "I'd Rather Be A Jazz Programmer"
2. "Fear And Loathing At RailsConf 2009"
3. "Programming Zombies Will Crush You"
4. "The Twitter 1-2-3 Rule"
5. "Goldilocks and the Three Icons"
6. "Money In The Ghetto"
7. "I Speak For The Code"
8. "The Folly Of Accountabalism"
9. "The Planning Game Vs. The Crying Game"
10. "Architect Is Not An Honorary Title"

Once again, I thank everyone for your support, and I look forward to telling more tales from the Dead Programmer Society.

Saturday, August 15, 2009

PostgreSQL on Ubuntu on EC2: The Installation Guide

For some time, I have had clients hosting a couple different applications on Amazon EC2 using Ubuntu. One of these apps uses PostgreSQL, and has been running without event for quite a while. Yesterday, I got to pay for lost time by spending the entire day wrestling with data recovery issues related to a failed apt-get upgrade on an important database server. Luckily, the awesome Eric Hammond was around on IRC, came to my rescue, and coached me through my self-inflicted pain.

If you are not interested in PostgreSQL, you can probably just stop here. Nothing to see, folks, move along. However, if you are looking for the well-lit path to getting PostgreSQL installed on Amazon EC2 with all the trimmings, read on.

I went looking for various web pages to use as source material, expecting that, since the last time I went through this, someone would have written a nice definitive guide to installing PostgreSQL on Ubuntu, running it on a dedicated instance on Amazon EC2, and using Elastic Block Storage (EBS). Naturally, you want to be using the XFS file system too. However, no such luck: just a big collection of pages of instructions on the various parts, without any nice simple path to getting things working together.

Hence, this post tries to provide a set of instructions for getting things working, and avoiding a couple of problems that I have run into while running Postgres in production for the last couple years.

Step 0 - You are signed up for Amazon EC2, no? If not, there are plenty of pages with instructions on how to do so.

Step 1 - Choose your AMI
There are several AMIs available to you. I currently run Hardy 8.04 LTS x86 architecture in the USA, so I am using ami-5d59be34, but you may have other requirements. The Ubuntu EC2 starter guide has good info on your options.

Step 2 - Launch your instance
I like to use Elasticfox, cause I am super lazy. The command line works well, also.

What size instance? This AMI supports small and medium. PostgreSQL is pretty efficient these days, and using a dedicated instance that runs nothing but the database server improves raw database performance considerably. You would probably be pretty surprised how well a small instance can perform, but choose medium if you think you will have more significant needs.

One key pattern I use for my EC2-hosted apps is creating security groups in EC2 to separate my database servers from the public internet. I never use the default security group, but instead create a group for each tier of my application like "database", "web", "transcoder", and then allow specific groups to communicate with each other.

Step 3 - Create the EBS Volume
You can do this via Elasticfox, or via the command line. Either way, make sure you do two key things: create the EBS volume in the same availability zone as your database server instance, and create a volume with enough space. Here is how you would use the command line tools to create a 10GB volume in the 'us-east-1a' zone:

ec2-create-volume -z us-east-1a -s 10

Once the volume is ready, attach it to the database instance. For example, this attaches an EBS volume named 'vol-VVVV1111' to the instance 'i-IIII1111' on device /dev/sdh:

ec2-attach-volume -d /dev/sdh -i i-IIII1111 vol-VVVV1111


Step 4 - Connect to the database instance
You need to SSH in to configure your new instance. Remember, you cannot connect as the 'root' user in Ubuntu; you need to connect using the 'ubuntu' user. This page has good details about using sudo and SSH on the official Ubuntu EC2 AMIs.

OK, so now you are connected via SSH to your server. Of course, start with the usual update/upgrade:

sudo apt-get update && sudo apt-get upgrade -y


Step 5 - Install XFS
We will need to install the XFS file system. Actually, you could use some other file system, but XFS is quite mature and has good performance. Plus, if you are crazy, you can scale up to a massive virtual RAID drive that will cost $4000 per month.


sudo apt-get install -y xfsprogs


Step 6 - Format the EBS volume using XFS
We need to install a file system on the EBS volume before we can do anything with it. Here is an example:

sudo modprobe xfs
sudo mkfs.xfs /dev/sdh

echo "/dev/sdh /data xfs noatime 0 0" | sudo tee -a /etc/fstab
sudo mkdir /data
sudo mount /data

Now we have a /data directory that maps to our EBS volume. Anything we write to /data will be persisted, even if the database server instance itself terminates.

Step 7 - Install PostgreSQL
Now we need to get PostgreSQL installed. This page has a very nice simple set of instructions on how to do that correctly for Ubuntu, but here is a synopsis especially for a headless server. First install Postgres:

sudo apt-get install postgresql postgresql-client postgresql-contrib

Now reset the password for the postgres account in the PostgreSQL server:

sudo su postgres -c psql template1
template1=# ALTER USER postgres WITH PASSWORD 'password';
template1=# \q

And then change the password on the user account to match:

sudo passwd -d postgres
sudo su postgres -c passwd


Now we need to modify the postgres configuration file postgresql.conf: first, to allow other machines to connect to our instance, and second, to have PostgreSQL use our nice shiny /data directory.


sudo nano /etc/postgresql/8.3/main/postgresql.conf

Change the line containing

data_directory = '/var/lib/postgresql/8.3/main'

to

data_directory = '/data/main'

Now change

#listen_addresses = 'localhost'

to

listen_addresses = '*'

and also change

#password_encryption = on

to

password_encryption = on

Save the file, then open the pg_hba.conf file so we can control who can access the server:

# DO NOT DISABLE!
# If you change this first entry you will need to make sure that the
# database
# super user can access the database using some other method.
# Noninteractive
# access to all databases is required during automatic maintenance
# (autovacuum, daily cronjob, replication, and similar tasks).
#
# Database administrative login by UNIX sockets
local all postgres ident sameuser
# TYPE DATABASE USER CIDR-ADDRESS METHOD

# "local" is for Unix domain socket connections only
local all all md5
# IPv4 local connections:
host all all 127.0.0.1/32 md5
# IPv6 local connections:
host all all ::1/128 md5

# Connections for all PCs on the subnet
#
# TYPE DATABASE USER IP-ADDRESS IP-MASK METHOD
host all all 0.0.0.0/0 md5 # wide-open, you may want to make this more specific to your database


Step 8 - Move the database files
We need to stop the PostgreSQL server, move the database files to our EBS volume, then restart the server.

sudo /etc/init.d/postgresql-8.3 stop
sudo mv /var/lib/postgresql/8.3/main /data
sudo /etc/init.d/postgresql-8.3 start

You are now running PostgreSQL on Amazon EC2 using EBS for your database, with the XFS file system. Congratulations!

I will write a followup post on how to set up your database server for self-backup using EBS snapshots, but that is all I have time for right now. Hopefully this pared-down set of instructions has been useful to you. Thanks again to Eric Hammond, and everyone else whose blogs were culled together into this post.

Saturday, July 25, 2009

The FutureRuby Revolution Will Not Be On AOL - Part 2

FutureRuby Day 2 began in a seemingly calm and reflective way. Coffees were sipped, and hangovers nursed. As the self-inflicted wounds from the Pravda-Vodka-Kalashnikov faded, Pete Forde, our leader and spiritual adviser, began a short sermon.

His message was simple: Vegas is a horrible place to hold RailsConf. And we should live in a manner that follows the "Four Agreements". Seriously, yes, he said both of these things.

Pete told us of the source of his sudden enlightenment: Portland's Jupiter Hotel. Instead of a Gideon bible, they have a copy of the Four Agreements in each room. Pete, being a curious guy, started to read the book. To save us time, he summarized it in nice Twitter-sized chunks.

1. Use your words for good... do not gossip
2. Do not take anything personally
3. Do not make assumptions
4. Always try your best

With our spiritual bootstrapping complete, we proceeded to a consciousness-expanding session with Collin Miller's presentation, "Transc/Ending Encoding". This was NOT about video encoding.

Collin Miller - "Transc/Ending Encoding"

If the 60's revolt gave the counterculture heroes like Leary and Hoffman, it gave us tech heroes like Engelbart and Kay.

When writing software, we edit text files. We use textual encoding as a way to flatten information down to a simpler structure. But what does editing text lack? There are other options for making programs without text.

There is this high priesthood of text; however, programming does not need to be difficult to be useful. The future is the ONLY frontier... so where are we as programmer-monks going?

Martin Fowler has his concept of "illustrative programming". As another example, a spreadsheet is non-textual programming.

Charles Simonyi's Intentional Programming is a different approach. It allows users to change names easily, or even program in two different natural languages. It does this by maintaining a constant set of references to everything in the program. By doing this, different users can edit the same source database without using the same editing style.

Another example is Subtext (http://www.subtextual.org/). In Subtext, everything is just a reference. It is like "googling the code". Subtext uses decision tables, and a syntax tree editor.

It was a very interesting talk, and it seems like many people were inspired to think differently about code. I was having Smalltalk flashbacks, and my brother Damen Evans was reminiscing about how cool HyperCard used to be.

Dr. Nic - "Living with 1000 Open Source Projects"

Next up was "Dr. Nic", aka Dr. Nic Williams, who actually has a PhD in CS, so he is not just granting himself an honorific. His talk was called "Living with 1000 Open Source Projects". I have heard Dr. Nic speak before, and he is a very intelligent and funny speaker.

There are two types of open source project founders:
Type A. Nurture and converse "Do you care?"
Type B. People who were previously type A

"Who ever looked at their old code and thought 'that's better than what I write now'?"

If you look after all your old projects, you will end up with a 500-hour week of projects

"Open source projects don't scale, but neither does raising pets and children"

The question is which OSS projects to maintain? The pet projects you NEED every day

Goal: ZERO maintenance

How to reduce bad karma from "abandoning" your project:
- publish project status
- facilitate group therapy
- forward emails to mailing list

Put a badge on project home page that says last time someone contributed to the project

Aim for community to be self-sufficient

Github makes things easier with centralized patches. The github gem is great for laziness.

"Easy to give away commit rights, if you think 'this is not MY project, I just look after it'"

Aim: ZERO process cost

Aim for Zero
- don't use it? do not maintain it
- manage expectations
- community self-sufficient
- zero process cost
- zero defects

How to use your spare time
- find a hobby
- talk to your spouse
- create more projects

"you can do less"

Dr. Nic's talk really resonated with many of us. I, for one, immediately on getting back from the conference gave commit rights on two of my projects to two worthy individuals. Wow, what a relief!

Matt Knox - "Crimes Against Humanity, Writ Small"

After Dr. Nic, was a great talk called "Crimes Against Humanity, Writ Small" from Matt Knox. I have been hanging out for the last couple of years with Matt at various Ruby conferences, but I had no idea how awesome he really is, till he got to show his stuff at FutureRuby.

The message behind his talk was really about taking responsibility for one's own actions. This was a very important recurring theme, starting right from Nathaniel Talbott's talk at the very beginning of FutureRuby: "you write the software for the nukes, you own responsibility if they are used". In Matt's case, the "nukes" in question were adware. The kind that attaches itself to your machine like a vampire squid, and will not let go.

Like all roads to hell, Matt's started with the best of intentions. His wonderful idea was killing adware on Windows with Scheme. As in LISP. That sounds like a really fun job... and it was, at first.

From an auspicious start "kill this worm", the job progressed to "kill lots of worms/malicious ad clients". Then the job became "somewhat edgy" aka "kill competitors and keep us from being killed... by anything"

As a result of all this, there were major negative repercussions that took down the company. In the aftermath, Matt was able to do some amazing self-exploration. "What just happened? Is this just who I am?"

That brings us around to the famous Milgram experiments. The incredible part was that 70% went the distance, and did what they thought was "torturing" another human being. Most human evil lives here.

What does this mean?
- The human brain has a remote root exploit in 70% of the installed base
- Knowing is 1/2 the battle

Good
- don't be evil

Better
- recognize that people who do evil may not be evil
- this makes it easier to not hate them

Best
- set up structures to ensure this does not happen

The world forgives. But to provoke forgiveness, you need to own your actions, and their results.

There is a remote root exploit in human brain, but the world forgives.

Matt, thank you very much for your bravery and honesty, in sharing what was clearly a very painful learning experience.

Paul Dowman - "Between the Battleship and the FAILWhale"

After the raw psychology of Matt's talk, it was not easy for me to switch gears to Paul Dowman's talk "Between the Battleship and the FAILWhale". However, it was full of solid info on the whys and hows of scaling. Here are a few highlights:

Scalability != performance
Performance is faster load time
Scalability is handling greater load on same hardware

2 kinds of scaling
Vertical scaling - increase power of a single unit of your architecture
Horizontal scaling - adding units to your architecture

Why is scaling so hard? It cannot be an afterthought.
Should I forget about my scaling problem till my app is a hit? It's a biz decision

Developers and shareholders should talk about the tradeoffs, because scaling has costs: it requires more capital, and makes system more complex.

You can do some simple things to prepare to scale, without a major engineering effort. The goal is to be able to scale just by adding more servers.

Use HTTP caching like Squid, Varnish, or Rack::Cache.

Use a queue for anything not needed to render the page right then. You will get a faster response, have a more consistent system load, and have less contention for locks.

Amazon SQS is pretty cool. SQS is slow but scalable, simple, and requires no maintenance or deployment.
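The queue-it-for-later idea can be sketched in plain Ruby, using the standard library's Queue as an in-process stand-in for a real queue service like SQS (all the names here are invented for illustration):

```ruby
require 'thread'

# A stand-in for a real queue service like SQS: the web request only
# enqueues a job and returns; a worker picks the job up later.
jobs = Queue.new
results = []

worker = Thread.new do
  loop do
    job = jobs.pop            # blocks until a job arrives
    break if job == :shutdown
    # The slow work (email, transcoding, etc.) happens here, off the
    # request path.
    results << "sent welcome email to #{job[:email]}"
  end
end

# During the request cycle we just enqueue and move on.
jobs << { :email => 'user@example.com' }
jobs << :shutdown
worker.join

puts results.first   # => sent welcome email to user@example.com
```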

Memcached is inherently distributed, and you can just add more instances to scale. But it is not a database, so do not treat it like one. Data can/will disappear, since it is not persisted.
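To illustrate treating memcached as a disposable cache rather than a database, here is a minimal read-through sketch in plain Ruby, with a Hash standing in for a real memcached client (all the names are hypothetical):

```ruby
# A plain Hash stands in for a real memcached client here; the point is
# the read-through pattern: a cache miss falls back to the real data
# store, so losing cached data is never fatal.
class ReadThroughCache
  def initialize(store)
    @cache = {}      # memcached stand-in: contents may vanish at any time
    @store = store   # the authoritative database
  end

  def fetch(key)
    @cache[key] ||= @store[key]   # recompute on a miss, then cache
  end

  def evict_all!
    @cache.clear     # simulates memcached restarting
  end
end

db = { 'user:1' => 'deadprogrammer' }
cache = ReadThroughCache.new(db)

puts cache.fetch('user:1')   # => deadprogrammer (miss, filled from db)
cache.evict_all!             # cached data disappears...
puts cache.fetch('user:1')   # => deadprogrammer (rebuilt from the store)
```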

For scaling your database, you have various options:
- Use an RDBMS like MySQL or PostgreSQL
- SimpleDB
- Tokyo Cabinet
- CouchDB
- something else

Traditional RDBMS cannot scale horizontally forever. However, a lot of data does fit the table paradigm and SQL is powerful. Do not confuse data storage with data management.

Joe Wilk - "Cucumbered"
Next up was Joseph Wilk, all the way from London, to talk about Cucumber. Joe is a very unassuming but smart and witty fellow. The way he structured his talk was really clever. He used BDD itself to describe BDD... brilliant!

So why use something like Cucumber? So that the customer can describe their needs in something closer to plain language, which is THEN translated to Ruby. It is a token of the conversation, and defines the acceptance criteria for the "customer". It is useful as a design tool, and provides executable documentation for the project.

Cucumber has a gateway for different human languages, so that the developer and customer can interact in the customer's own human language. Like Swedish, Spanish, or LOLCATS. In fact there are currently over 30 languages already supported.

Really, part of getting the most value out of Cucumber is getting customers using it THEMSELVES... you can even just send around the "plaintext" using email, Google Docs, or whatever lets you share the plaintext data.

The Art of "plaintext"
- don't force structure
- avoid noise
- avoid inconsistency
- balance abstraction
- use Ruby language building blocks to keep things DRY
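To make the list above concrete, here is a hypothetical feature in Cucumber's plaintext style (the wording is invented, not from Joe's talk): minimal noise, consistent phrasing, one behavior per scenario.

```
Feature: Account sign-in
  So that my data stays private
  As a registered user
  I want to sign in with my password

  Scenario: Successful sign-in
    Given a registered user named "maria"
    When she signs in with her password
    Then she sees her account dashboard
```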

There are a couple of cool features that have gotten into Cucumber while I was not paying attention. One is Tagging, which allows you to tag a feature, and run only those features.


@any plaintext word

cucumber --tags ~@in-progress


Another is Continuous Integration (Work In Progress) (--wip), which looks very useful, since running each and every feature story can be time-consuming and slow down a CI build. Running all of the features as part of a nightly build is a workable compromise, and this looks pretty useful to me for Integrity integration, etc.

It was a very cool talk from Joe, and if you are not using Cucumber you really should be. It is an amazing source of insight into the needs of the user, and a great way to explain WHY you are doing things, not just WHAT you are doing.

Avi Bryant - "Failure: An Illustrated Guide"
Next up, Avi Bryant gave a fun talk called "Failure: An Illustrated Guide". He basically took us thru 30+ iterations (I lost count) of UI variations trying to create an important part of the functionality for his new site Trendly.

It was interesting to see all of the different attempts that they made, in finally reaching what is a pretty cool and different UI metaphor for visualizing time-series data from website stats. Once the presentation video is online, it is for sure worth watching, not so much because of what they did, but more from how it will make you revisit your own UI.

Jon Dahl - "Programming and Minimalism"
Jon's talk about "Programming and Minimalism" delved into the comparisons and contrasts of music and programming. He played a number of musical examples that showed stylistic development of musical genres from simple forms, to complex ones, and then evolved to simpler ones as part of a new "branch" of development.

It was interesting to consider these parallels, especially since I happened to be sitting next to my friend Greg Borenstein, whose classical music vocabulary is much greater than mine, and who had interesting side-channel comments. Like Avi's presentation, the video/audio is probably needed in order to get more than a superficial sense of his points.

Brian Marick - "Artisanal Retro-Futurism and Team-Scale Anarcho-Syndicalism"
I was really looking forward to the next talk. Brian Marick is one of the original authors of the "Agile Manifesto" and a very interesting thinker. I had heard him bandy about this phrase "Artisanal Retro-Futurism and Team-Scale Anarcho-Syndicalism" and I was eager to hear what it meant. FutureRuby was about to get radical. The video of this talk is now online, so I really suggest you check it out for yourself.

"When I say agile, I mean Ruby... the way that Ruby projects are run"

"Switching to scrum, at least my job doesn't suck as much as it used to"

"I don't want to see on my gravestone, 'he made agile projects suck a little less'"

"Even the wage-slave can have joy-in work"

"the cubicle is the single worst design of people and space to do software development"

What is "anarcho-syndicalism"? It is a political/economic trade-union movement that peaked in 1923 and was crushed in 1924 by the U.S. government.

Here were a few of their tenets:
- getting rid of the government, and getting rid of private corporations
- worker self management
- direct action
- worker solidarity

Brian suggests adopting some of the ideas of the anarcho-syndicalists, but at "team scale", meaning within your own team.

Teams should band together more than they do, and we need more power in the hands of the team to counterbalance the power of the corporation.

So on to this "Artisanal" thing. Brian used the example of artisanal cheese. The people who make this cheese are very into cheese. They care about the cheese! They do not just do it for profit, profit is the result of their caring.

Lastly back to the "retro-futurism". Recently, the New Yorker magazine did an issue about innovation. However, to capture the idea of innovation, they used images from the past, like the jet pack.

The idea of "retro-futurism" is trying to recapture the spirit of hopefulness from the past. Books like Freeman Dyson's "Infinite In All Directions" capture this endless sense of possibility.

Do not let the context drive you, you control the context. Brian calls for a revolution in how software development projects are run, and challenges us to be scrappy, care, and keep our spirit of naive optimism.

Start doing something about this: go to arxta.net, and talk to your teammates. And yes, I do have a sticker on my MacBook Pro.

Jesse Hirsh - "Fighting the Imperial Californian Ideology"
The final presentation, from Jesse Hirsh, was even more radical than Brian Marick's. Jesse challenged all of us by taking some of the same principles about software that we had all just agreed with coming from Brian, and extending them further. Much further.

There have already been a couple of good posts that summarize or comment on Jesse's talk. Go check them out if you want more detail.

A couple of books that have influenced Jesse are "Snow Crash" and "Imperial San Francisco". The reason this is important is that ideologies are viral. In the mid-1800's the US sent surveyors into California, and once the mineral wealth there had been established, declared war on Mexico to get the mines.

This was only the first of many "gold rushes" to take place in CA, although subsequent ones would develop resources other than minerals. San Francisco technology invented the mining shaft to extract greater amounts of resources from the same mine. Taking those same technological achievements, a mining shaft turned upside down was a skyscraper: mining human labor instead of minerals.

The Hearst mining family was the most successful of these robber barons of mining, and the most responsible for many of the negative outcomes that resulted. As an example, Hearst Mining is responsible for 8 of 10 Superfund hazardous waste cleanup sites.

But it did not end there. As is well documented, the Spanish-American War, which resulted in the brutal colonial occupation of the Phillipines, was triggered by the first manufactured war, created by the first media mogul William Randolph Hearst.

San Francisco built all the arms, and is still responsible for all advanced military technology today. One big example is U.S. nuclear weapons, which are designed at the Lawrence Livermore National Labs.

This all established California as a place where a few small elites could conquer the world. The end of the Cold War was followed by a new imperial project: the California Ideology. The acolytes of this new ideology were Kevin Kelly, Stewart Brand, and the Global Business Network.

Many saw the emergence of magazines like Wired and Mondo2000 (shout-out to RUSirius!) as the frontier of a new techno-utopia. However, not everywhere has the Silicon Valley infrastructure. This new world was still under the dominance of SF.

The corrupting influence and domination of SF was exemplified by BALCO, the Bay Area Laboratory Co-Operative, known for the designer steroids that have altered professional sports irrevocably.

When Chris Anderson wrote "The Long Tail", Jesse says we all recognized it as brilliant. However, it reinforced the hierarchy to allow the few to get all of the best parts, while relegating everyone else to the skinny end of the long tail.

Jesse goes on to attack Chris's latest manifesto "Free". He says there is something fundamentally wrong with his argument, however NOT the free part. Jesse says the fatal flaw is the ethic of waste. Chris says that now that bandwidth is in such abundance, we must waste it, because only then can we reach innovation.

Jesse railed on waste as an ethic in CA (cars, weapons, mining, etc). He prefers a revolutionary wholeism, which is a flip on relativism. Making everything the same is NOT the answer, according to Jesse. With today's social tools, we are in a time similar to when AOL took over the Internet and turned it into total crap.

When you have neighborhoods on the net, you can clean them up. We need to take the best tools available and merge them into a coherent vision. Take a page from Barack Obama's playbook and become community activists.

Who can you trust? Not the corporation, only your comrades, which is whoever you have social capital with.

The struggle for human rights never ends, the question is which side are you on? The era of the nation-state is done, it is time for the new rise of the city-state. Get involved.

Jesse had given an intense and fascinating talk. We could not have completed the FutureRuby agenda without some serious rabble-rousing. We surely have to individually take responsibility for what we choose to do with our power as technologists. Whether you agree with Jesse on any individual point or not, there was a lot of food for further thought.

Aftermath
The conference sessions were now over, but the FutureRuby festivities had not yet ended. After so many ideas compressed into so little time, we needed to hang out and process things together, while allowing it to be unstructured. Meghann had come up with the innovative thought of putting the after-party into 3 different walkable nearby locations: a cool coffeehouse with retro video games, a classic little dive bar with live music, and HackerspaceTO. Not to mention a hilarious street performance that could only happen somewhere open-minded like Toronto.

Jamming on harmonica with a cyborg who played the water organ was just part of my personal awesome experience. (Where is that video?) So was getting to hang out at HackerspaceTO, where they have frickin' laser beams. We timed it poorly and missed the band dressed in Farscape garb, but there was so much to see and do, right up to the end.

FutureRuby was not just a fun conference. And it was not just a chance to learn about a bunch of new things. It opened me up to new possibilities, and helped re-affirm my personal commitment. I thank all of the staff, volunteers, speakers, and attendees for making it an inspirational experience.

Thursday, July 16, 2009

I Have Seen The FutureRuby, And It Is Amazing - Part 1

It was with tremendous excitement that my brother Damen and I arrived in Toronto for FutureRuby. Not only were we getting to attend the reprise of what had been by all accounts the "Best. Conference. Ever.", but we were going to be speaking about Project Flying Robot.

There had been many interesting interactions with various security personnel on the journey, thanks to the many small homemade electronic devices that make up our tiny squadron. All of them were extremely friendly and professional as they carefully unpacked, swabbed, scanned, then repacked our cases full of joysticks, Arduinos, electric motors, batteries, and many wires. MANY wires.

By the time we arrived, we were too late to attend failCAMP (failed to make it?), but there would be many opportunities to interact with our fellow comrades in Ruby. @peteforde and @meghatron of Unspace had designed the conference with the kind of architectural integrity only a geek could conceive. It was not until the final sessions that the master plan became clear, but I will get to that.

As a result, the next morning we had no post-fail hangovers to slow down our last minute assembly and troubleshooting attempts, combined with walking all over Toronto. Once the evening came, we were eager to connect with our fellows, and happy to climb the stairs to Unspace's cool digs. Pinball machine FTW! And Greg Borenstein's robotic drummer pounding the skins on Pete Forde's drum kit, controlled by Archaeopteryx. It was an excellent party, and they had to kick us out at midnight with a reminder of the talks in the early AM, not to mention the festivities yet to come.

Opening up the first day of the actual conference with the first talk was Nathaniel Talbott with a rabble-rousing speech on "How Capitalism Saves Ruby From Corporatism, or, Owning The Means of Production". This was an immediate shot across the bow of the status quo, and gave us all a clue that the 'collectivist' theme was not just a cool design style for the schwag, but also a serious theme for the conference content.

Next was Ilya Grigorik with "Lean and Mean Tokyo Cabinet Recipes". If you do not know about it, Tokyo Cabinet is an open source key-value database that also has server and full-text capabilities. Ilya gave a very hardcore presentation that went all the way into many of the cool things that can be done with TC right now. This was a departure from the traditional SQL way of doing things, and tied in with the revolutionary theme. You HAVE been getting up to speed on one or more non-SQL databases already, haven't you?
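
A key-value store boils down to a hash-like API persisted to disk. Tokyo Cabinet's Ruby bindings follow that pattern; since you may not have the TC gem handy, here is the same usage shape sketched with Ruby's stdlib PStore as a stand-in (to be clear: this code uses PStore, not Tokyo Cabinet):

```ruby
require 'pstore'

# Hash-like put/get against a file on disk: the same shape of API
# Tokyo Cabinet exposes, sketched here with stdlib PStore instead.
store = PStore.new("casket.pstore")

# All reads and writes happen inside a transaction.
store.transaction do
  store["user:1"] = { name: "Ilya" }
end

# A read-only transaction (the `true` argument) for lookups.
store.transaction(true) do
  puts store["user:1"][:name]  # prints "Ilya"
end
```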

The next session was one I was particularly eager to hear. Austin Che spoke about "Programming Life". As in, "Hello World in a petri dish" kind of programming. I had missed the actual workshop, where some lucky people were successful at growing their own glowing bacteria. However, the excellent talk from Austin took us on a wild ride through the current state-of-the-art in biohacking. Let me put it another way: we already have the rough biotech equivalents of both Github, with the Open Bioinformatics Foundation, and Sparkfun, with Austin's own Ginkgo Bioworks. Other sites like biobricks.org and openwetware.org are also there for anyone who wants to get started with this fascinating technology at home.

Following this was Anita Kuno with "Version Control: Blood Brain & Bones" reminding us that the human mechanism needs to be correctly maintained, and developed for correct performance. She had a bunch of specific eating techniques and foods to share, and almost immediately, it seemed that we were more conscious of what nutritional input we were routing into our individual biocomputers.

Next up was one of the best presentations of the entire conference. Foy Savas gave a talk named "Polyglots Unite!" about multi-language programming and a takeoff on Rack named Crack, which provides a kind of Rack adapter for web backends other than Ruby. It is a neat concept, and I look forward to seeing where it goes. The presentation itself was absolutely fantastic. The timing, the clarity... in a word, he "killed". One of the best speakers of the conference.

Only something pretty different and amazing could follow that up, and Misha Glouberman's "Terrible Noises for Beautiful People" satisfied. It was a laptops-closed participatory session that had our entire group singing, clapping, and shushing together. Not only that, but we actually played Conway's Game of Life using musical interaction, with the entire group as the cellular automata. You can't do THAT at home! Absolutely brilliant.

Next were my brother Damen Evans and I with our "Flying Robot" presentation. Despite a few small technical glitches (hardware!) we pulled it off, and the crowd was enthusiastic. We had a great time, and congrats to the winners of the 2 Blimpduino kits @_krispy_ and @maplealmond. There is some cool video here, and lots of great photos like here, and here. Thank you to everyone who participated, we had a great time doing it!

Once we had demonstrated Ruby air superiority over the skies within the Metropolitan Hotel, it was all mobile all the time for the remains of the day. First, a 3-way talk from the guys at PhoneGap, followed by a demo by Adam Blum from Rhomobile. I had seen Adam's basic pitch before at LARubyConf; one nice change was that they no longer seem to be trying to charge a per-user license. Per-user license, what's that?? I haven't seen one of those since last century, I think.

Finally, the sessions for the first day were complete. We all put on our finery and took over Pravda, a Russian-mobster-styled vodka bar that pulled out all the stops, with many people staggering out of the vodka-freezer with smiles on their faces. My personal favorite moment was when we gave a spontaneous group loud "ahhh-clapping-shushing" in response to the wonderful announcement that Shopify was going to pay to keep the bar open longer. There were large quantities of amazing food as well. That @meghatron really knows how to throw a party!

After a pleasant stroll through the streets of Toronto, powered by Russian jet-fuel, we collapsed, to get a few comfortable, if short, hours of rest before FutureRuby Day 2.

Sunday, July 05, 2009

Getting Ready For Takeoff At FutureRuby

I just realized it has been an entire month since my last post. Sorry! In case you were wondering, the always overambitious plans for Project Flying Robot have taken up more time than expected. And parts. Especially parts.

Lucky for us, the benefit of a hard deadline approaches: FutureRuby is coming up next week. My brother Damen Evans and I are going to be showing off our latest works in Unmanned Aerial Vehicles (UAV) based on Ruby Arduino Development (RAD). I don't want to let on too much, so as to eliminate the surprise element, but this should be our biggest spectacle yet.

So if you're going to be in Canada next week, we look forward to seeing you. If not, I'm sure there will be plenty of video to watch in either amazement or amusement, depending on how well we can pull this off...

Wednesday, June 03, 2009

Project Flying Robot: Supporting The Blimpduino

As Maker Faire approached, my brother Damen and I were very busy working on something cool: support for the now-shipping Blimpduino kit!

Thanks to the tireless efforts of Chris Anderson and Jordi Muñoz, the long awaited Blimpduino kit is now shipping at Makershed. As readers of this blog know, we have drawn a lot of inspiration from the Blimpduino. Now, we actually have 2 of them, and you can get your own. For less than $100, plus a few other items, and flying_robot software of course, you have everything you need for a complete experimenter's kit for Unmanned Aerial Vehicles at home.

There are a few mods you need to make to your Blimpduino, if you want to be cool like us, and control/reprogram it using a linked pair of XBee modems. We will post complete directions soon on how to mod your blimpduino into a flying_robot using Ruby Arduino Development.

In the meantime, you can look at the almost completed flying_robot for Blimpduino code here.

Sunday, May 17, 2009

Fear And Loathing At RailsConf 2009

We were around Barstow on the edge of the desert, when the drugs began to take hold... wait, that was someone else's story. OK, restart.

We were around Barstow on the edge of the desert when the excitement began to take hold... we were on our way to RailsConf 2009! No screaming bats, just loud pumping techno music to power the PT Cruiser. My designer, who was not old enough to be pouring beer on his chest, nor interested in facilitating the tanning process, said "What the hell are you yelling about?". I aimed the Cruiser toward the horizon without slowing down, "I need an In-n-Out milkshake."

Las Vegas... what a place. Putting RailsConf there is the sort of idea that makes sense on paper, but could turn a previously mild-mannered group of Ruby programmers into a mob of raging lunatics. Come to think of it, a group of Ruby programmers IS a mob of raging lunatics. Case in point? Video slot machines... the worst odds in vegas, but the best graphics. How will a group of perpetually partially attentive people be able to resist the siren call of millions of sensory distractions each designed to exert psychological pressure to LOOK AT ME? Seems like an interesting Milgram-like experiment.

The back of the PT Cruiser was full of musical gear for the RailsConf music jam. With a small but effective PA and a few spare guitars, this session should be the best one yet. Could we play Vegas? Without offending the locals, or running afoul of some Musician's Union enforcers, that is.

I had meant to keep meticulous notes, and post a flurry of blog entries as I have done at RailsConfs past. But the dull fog of Vegas, combined with the mad dog sentiments already awakened in the Rails community at GoGaRuCo, left me with the sure knowledge that no matter how hard I might try to offend the insiders, no one would even notice amid the continuous drunken flame wars that RailsConf Vegas would quickly become known for throughout the Twitterverse.

The madness had taken hold long before we hit Vegas, and adding alcohol and neon fueled hyperstimulation only had the effect of pushing us into a raging frenzy. "Tim Ferris? How DARE he tell ME to exercise. Bob Martin? How dare he accuse me of not testing? Everyone else? How dare they dare to dare, or else how dare they not dare to dare! Forgeddaboutit!"

Just then my already tenuous sanity began teetering, and I started yelling that the White Rabbit and I had been pair programming together for years. The wild-eyed activist within me leaped into action, and I practically elbowed people out of my way to get to the mic, to ask Uncle Bob the burning question on my mind: "What happened to the social revolution you started with Kent Beck and Ward Cunningham?"

From the look in his eyes, I know the question haunted him, just as it still haunts me. If this is the utopia, why are we all fighting so much? "I saw the best minds of my generation destroyed by madness, starving hysterical naked..." and still Twittering away trying to validate, justify, explain, and strengthen it, while simultaneously eroding it, tearing it away.

I had to escape, find a place to hide and collect my shattered illusions. Fortunately, the safety zone of CabooseConf greeted me. The comfort and sanity of watching my programming buddies hacking together an LLVM implementation for AVR was like slipping under a warm, soft blanket, after the frenzy that had started while I was sitting in the Reptile Room, watching some giant lizards get ready to feast on fresh ideas.

Days had passed, but in the strange netherworld between Vegas's clockless existence, and the constant Twitter flow of new input, I had lost all sense of temporality. It was a surprise that we had already come to the final keynote aka Q&A session. It was an odd demonstration of our shared exhaustion and sensory overload, that pretty much no one wanted to ask any questions.

"Time to get out of here!" I said to my designer. We piled the PT Cruiser full of our gear, plying the staff with dollar bills like we were mythical high-rollers. I drove like the wind, but it was not quickly enough. Leaving behind a cloud of gritty, baked dust, we fled from a man-made 24-hour spectacle that even Dante could have never imagined, even if he had taken all the drugs available to an Italian in the 14th century at the same time.

With apologies to, and in memory of, HST. We need his free spirit now more than ever.

Wednesday, April 29, 2009

Heroku Has Launched

Well, just a very short time after I started using Heroku, they went commercial. Yes, after their very successful beta period, during which apparently 24,999 web sites other than mine were already being hosted, Heroku is now offering a paid version of their service. I had a sneak peek at the pricing a few days ahead of time, but I was not able to talk. And despite my intentions of blogging this right away, the other demands on my time have kept me occupied till just now.

Heroku is mimicking the successful "freemium" pricing plan of other services, but brings it into the Ruby web application hosting space, with some pretty generous limits. Yes, exactly: Heroku still allows you to get started with their service at no charge at all. Wow. I do not know of any free web hosting service that does not at minimum plaster your site with hideous ads. Let alone quality Ruby hosting. Let alone Ruby powered cloud computing.

As your traffic needs increase, or database storage needs, they have a variety of pricing tiers. Thanks to a slick AJAXified pricing tool, the complexity of so many pricing options is somewhat mitigated. Plus it's fun to play with.

The evolution of most startups tracks really well with Heroku's overall business strategy: as a customer becomes larger and more successful, their increased traffic and database needs will cause them to start paying Heroku. If your venture does not really go anywhere, it is not taking up much in the way of resources anyhow. This aligns Heroku nicely with the needs of their customers, instead of pitting the two against each other by trying to extract revenue too early in the growth curve.

Despite a few growing pains, they have had pretty decent uptime on my app so far, even with my new app Thumbfight getting a few sudden traffic bursts, as well as having a major reliance on Twitter for back-end processing (more about Thumbfight in an upcoming post).

Heroku is a work in progress, but so is most everything else on the entire Internet. Heroku provides an amazingly easy and insanely cheap way to jumpstart your Ruby-based web application hosting and deployment, while still getting some real expertise. As long as you can work within their current technical limitations, for a Ruby-powered startup, I think Heroku is a great way to go.

Saturday, April 18, 2009

Project Flying Robot: Getting RAD With The ATMega328

I have been wanting to upgrade the hardware used in our Dorkboards for flying_robot from the ATMega168 to the newer, better, faster ATMega328. More memory, and a faster UART for serial communications with the XBee modems, in the same pinout = easy win. Thanks to a quick shipping turnaround from @adafruit, I got them in before the weekend, so I could play a little bit today.

The first step was to upgrade my hard-working Arduino Diecimila to a '328. I now have it working great with Ruby Arduino Development (RAD), but since RAD was really set up for Arduino 0012, I had to make a couple of changes. Here is what I did:

1. Download and install Arduino 0015 (brave, I know, since that is the latest release, and many people run one version down from the latest)
2. Change my hardware.yml entry
mcu: atmega328p

3. Change my software.yml entry
arduino_root: /Applications/arduino-0015

4. Lastly, since the ATMega328 bootloader runs at a faster rate, I had to tweak the RAD code itself to support it. The file "/vendors/rad/generators/makefile/makefile.erb" is the template used to create the makefile that compiles and uploads the code to the Arduino. Line 77 in that file controls the baud rate, which needs to be set like this for the '328:
UPLOAD_RATE = 57600


Once I had done this, I was easily and quickly able to recompile/re-upload the latest flying_robot code to my test board. Yeah! Hopefully tomorrow I can upgrade Rogue 1 and try a flight at the new, higher communication speed.

Tuesday, April 14, 2009

Heroku, Why Haven't I Been Using You Till Now?

Last night, I finally got around to deploying something on Heroku, an interesting service founded by my formerly LA-based Ruby programming chums Adam Wiggins, James Lindenbaum, and Orion Henry. I had played with their previous incarnation of the service, now known as "Heroku Garden" but only recently have I gotten to know a little bit more about the incredible offering they have evolved into.

Basically, the Heroku crew have addressed the question "how can I deploy my Ruby on Rails, Sinatra, or other Rack-based web application into a dynamic cloud of servers with ridiculous ease?" They have done this with an ingenious architecture that takes advantage of Amazon's EC2 to provide their internal infrastructure. This allows Heroku to concentrate on their most important core value proposition, of a simple way to take your Ruby code and just push it into the cloud.

Notice I said "push". Heroku requires that you use git for source control of your application. You are using git for everything now, right? If not, git with it! Sorry, could not resist that. Anyhow, by simply adding a remote that points to Heroku to your existing git repo, along with a few Ruby gems that they provide, you can deploy your app just by pushing your master branch to Heroku.

Doing this causes your app to get packaged up into a "slug". Once you have an active slug, it will be deployed to a "dyno" within the Heroku grid, which is what a virtual node within their architecture is called. As your app requires more resources, the slug can be deployed to more dynos within "less than 2 seconds for most apps". That is way faster than starting up a new Amazon EC2 instance yourself, and having this extra layer has a number of other interesting benefits as well.
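
For the curious, the "slug" is essentially your app frozen behind a Rack interface. A minimal config.ru, the standard rackup file, is all Rack needs to boot an app; this is a generic Rack sketch, not something taken from Heroku's docs:

```ruby
# config.ru -- the standard rackup file a Rack server uses to boot the app.
# A Rack app is just an object that responds to #call and returns
# [status, headers, body].
run lambda { |env|
  [200, { "Content-Type" => "text/plain" }, ["Hello from the cloud"]]
}
```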

Heroku has a quick start guide, which pretty much runs down what you need to do. I had found a slightly more simplified quickstart here. I already had an existing Sinatra-based app that I wanted to test on Heroku, so here were my steps:

1. Install heroku gem
sudo gem install heroku

2. Setup Heroku account info, and upload public key
heroku keys:add 

This will prompt you for your Heroku account info. If you have not created one yet, better jump over to http://heroku.com/signup and create one.

3. Create Heroku app from my existing app
I just changed my current directory to the app I wanted to add to Heroku, then entered:
heroku create myappname

This creates the new app on Heroku, and creates a remote branch so you can deploy just by pushing the code.

4. Deploy my code
git push heroku master

That's it! If you have a really simple app, with no database access, you are done. What, you are deploying a Ruby on Rails app and need a database setup? OK, then...

5. Run database migrations
heroku rake db:migrate


NOW, you are fully deployed and running on Heroku. Unless you are not. I still had a minor problem with my app. I was writing my log file into "logs/production.log" but Heroku does not normally allow write access to disk. The two exceptions to this are the "tmp" directory and "log" directory (notice singular). They do provide an easy way to view your most recent log entries, by typing
heroku logs
which is how I figured out my problem with the log directory.
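
The fix was a one-line change in my logger setup. A minimal sketch of the corrected version (my hypothetical setup, using Ruby's stdlib Logger):

```ruby
require 'logger'

# Heroku's filesystem is read-only except for the "tmp" and "log"
# directories, so create the log file under "log", not "logs".
Dir.mkdir("log") unless File.directory?("log")
logger = Logger.new(File.join("log", "production.log"))
logger.info("app booted")
```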

So, here is my total time required to deploy this app on Heroku:
- Reading quickstart = 3 minutes
- Installing gem and entering account info = 2 minutes
- Making my app a Heroku app = 1 minute
- Deploying my app for the first time to Heroku = 2 minutes
- Figuring out what I had done wrong from the Heroku documentation = 10 minutes
TOTAL = 18 minutes

Here were my bonus steps:
- Reading Heroku docs on using a custom domain with Heroku = 1 minute
- Realize I need to rename my app using Heroku command line = 1 minute
- Rename my app using Heroku command line = 1 minute
- Setting my DNS settings to point to Heroku = 5 minutes
- Telling Heroku about my custom domain = 1 minute
TOTAL = 9 minutes

So there you have it... a fully deployed app, living in the Heroku grid and consequently the Amazon EC2 cloud, in less than 30 minutes, having never used their tools before, including troubleshooting a minor configuration problem. That may seem unfair... and it is. That is exactly the kind of unfair I like on my side!

Much credit should go to the Heroku team for creating something extremely cool and functional. Important details are still not available, like pricing etc., but at least for now Heroku, is a great way to easily get your app up into the cloud within literally minutes.

What is the future for Heroku? Funded by Y Combinator, they have been quietly working away, and now with Sinatra team leads Blake Mizerany and Ryan Tomayko onboard as well, I think we will be hearing a lot from this exciting little company.

Wednesday, April 08, 2009

LARubyConf 2009 - Jim Weirich - "The Grand Unified Theory of Software Development"

As the 2009 Los Angeles Ruby Conference (LARubyConf) drew to a close, our keynote speaker Jim Weirich took the podium. I have seen Jim speak several times, and he is both intelligent, as well as down to earth, which is a rare combination indeed.

The subject of his keynote would be anything but down to earth. In fact, it would have to be one of the most ambitious talks I have ever seen at a Ruby conference. Only Jim could have pulled it off as he did, with both humor and insight.

One thing Jim has to do is conduct tech interviews. One question he always asks is "What do you look for in a good design?" Most people's answer: "UMMMMMM..."

Then Jim seemingly shifted themes abruptly, to physics. Specifically, subatomic particles.

It is known that particles with the same charge repel each other. Furthermore, a changing electric field produces a changing magnetic field at a 90 degree angle to it.

James Clerk Maxwell
Maxwell discovered 4 equations that describe the relation between electric and magnetic fields, combining what had seemed to be two entirely separate forces into a single one. Maxwell's work is arguably the greatest contribution to science.

Then, along came Ernest Rutherford's famous gold foil experiment. The one where alpha particles were supposed to pass straight through with only slight deflections, but instead occasionally would deflect wildly. As a result, we now know that matter is mostly open space.

There are four known forces
- electromagnetic
- gravity
- strong nuclear
- weak nuclear

The search for the "Unified Field Theory" in physics is a search for a single explanation that accommodates all four forces.

This is very much like the search for a single explanation for software design.

Some Commonly Accepted Software Design Principles
- SOLID
- Law of Demeter
- DRY
- Small Methods
- Design by Contract

Lots of ideas about how to write software, but no grand unified theory.

"The Grand Unified Theory of Software Development"

Composite/Structured Design
- Glenford J. Myers - 1978

Coupling & Cohesion - from best to worst
- no coupling
- data coupling - local data, simple
- stamp coupling - local data, structured
- control coupling
- external coupling - global data, simple
- common coupling - global data, structured
- content coupling - when you reach inside of modules and mess with them from outside

control coupling
- method has flag parameter
- flag controls which algorithm to use

Symptoms
- word OR in description

Example:
Array.instance_methods(true)
and
Array.instance_methods(false)


Which one lists only private methods?

Another example, Rails does this:
Customer.find(:all)
vs.
Customer.find(:first)

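The flag-parameter smell, and its cure, can be sketched in a few lines of plain Ruby (a made-up name-formatting example, not one from Jim's slides):

```ruby
# Control coupling: the caller passes a flag that selects which
# algorithm runs, so every call site must know the method's insides.
def format_name(first, last, reversed)
  reversed ? "#{last}, #{first}" : "#{first} #{last}"
end

# The fix: split into two intention-revealing methods with no flag.
def full_name(first, last)
  "#{first} #{last}"
end

def reversed_name(first, last)
  "#{last}, #{first}"
end

format_name("Jim", "Weirich", true)  # => "Weirich, Jim"
reversed_name("Jim", "Weirich")      # => "Weirich, Jim"
```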

Myers' classifications were OK, but failed to extend well to objects and dynamic languages.

Meilir Page-Jones's book "What Every Programmer Should Know About Object-Oriented Design" has 3 sections, two of which are not too useful, but the third is very interesting. It talks about the idea of Connascence in software design.

Connascence - when two things are born and grow up together
Two pieces of code share Connascence when a change in one module requires a corresponding change in the other.

CoN - Connascence of Name
- when code linked by name
- can also apply to databases
- class name is NOT, but parameters are

Locality Matters
- if things are grouped close together, a stronger degree of connascence is acceptable
- as the distance between them increases, you should reduce the connascence between them

Connascence of Position
- when the order of the params matters

Low/high degree of CoP

When you encounter CoP it is better to transform it to CoN

Degree Matters

CoP in test data example

User.find(:first)

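The CoP to CoN transformation looks like this in Ruby (a hypothetical create_user example, written with modern keyword arguments):

```ruby
# Connascence of Position: every caller must memorize the argument order.
def create_user_positional(name, email, admin)
  { name: name, email: email, admin: admin }
end

# Connascence of Name: callers name each value, so order no longer matters.
def create_user(name:, email:, admin: false)
  { name: name, email: email, admin: admin }
end

# These two calls build the same user, in either argument order.
create_user(name: "Jim", email: "jim@example.com")
create_user(email: "jim@example.com", name: "Jim")
```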

CoM - Connascence of Meaning
- when two bits of code have to agree on the meaning of data

When you encounter CoM it is better to transform it to CoN
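
Giving the shared value a name is what does the work; a hypothetical role check makes it concrete:

```ruby
# Connascence of Meaning: caller and callee must both agree
# that the magic number 2 means "administrator".
def admin_by_magic?(user)
  user[:role] == 2
end

# Transformed to Connascence of Name: the agreement now lives
# in a single named constant.
ROLE_ADMIN = :admin

def admin?(user)
  user[:role] == ROLE_ADMIN
end

admin?(role: :admin)  # => true
```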

Contranascence is when things have to change in opposition to each other
- for example, collision of class names in two different modules
- solution is to use namespaces

Another Example?
- do not monkeypatch unless you have to, and if so use namespaces
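
In Ruby the namespace fix is just a module wrapper; a toy sketch with two hypothetical libraries that both want a Logger class:

```ruby
# Without namespaces, two top-level Logger classes would have to
# change in opposition to each other (contranascence) to coexist.
# Wrapping each in a module removes the collision entirely.
module Billing
  class Logger
    def tag; "billing"; end
  end
end

module Shipping
  class Logger
    def tag; "shipping"; end
  end
end

Billing::Logger.new.tag   # => "billing"
Shipping::Logger.new.tag  # => "shipping"
```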

Connascence of Algorithm
- Two methods that do different things, but that are bound together by an algorithm. For example, two different bits of code in two different languages that have to talk to each other; a JavaScript client and a Ruby server is a good example.

CoA -> CoN
- also known as DRY

CoT - Connascence of Timing
- for example, a race condition
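
A race condition is the canonical case: correctness depends on when two chunks of code run relative to each other. A minimal Ruby sketch, with a Mutex serializing the increments so the timing dependency is removed:

```ruby
# Connascence of Timing: without the lock, the threads' read-modify-write
# steps on the shared counter can interleave and lose updates.
counter = 0
lock = Mutex.new

threads = 4.times.map do
  Thread.new do
    1_000.times { lock.synchronize { counter += 1 } }
  end
end
threads.each(&:join)

counter  # => 4000
```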

Summary
- Connascence is the 'quark' of software design
- Not really any tools to analyze code
- Seems like there is a relation between connascence and design patterns

Wow! Jim had taken us all the way from subatomic particles to a start towards a unified theory of software design, and tied it all together nicely. And made it fun! It was a tremendous cap on an excellent conference, and we all really appreciated Jim's contribution.

Tuesday, April 07, 2009

LARubyConf 2009 - Blake Mizerany - "Sinatra: the Ultimate Rack Citizen"

I was very happy when the next presenter at the Los Angeles Ruby Conference 2009 (LARubyConf) was Blake Mizerany, creator of the very cool Ruby micro-framework Sinatra. As long-time readers of this blog know, I am very into Sinatra.

There has been an incredible amount of work going into Sinatra lately, so I was very interested to catch up on what the team has been up to.

What is Sinatra? A Ruby Domain Specific Language (DSL) Mapping REST to simple actions

Why?
- small
- fast
- great rack and ruby citizen
- strong focus on HTTP
- HTTP caching helpers built in before it was cool
- content negotiation
- no boilerplate
- dead simple config when the defaults are not enough
- smart configuration
- DOCS- sinatrarb.com
- extending is easy
- rack is the only dependency
- very low WTF to LOC ratio (Jeremy McAnally's RubyFringe talk)

when?
- a few controllers models and views
- starting any web application
- you need reusable apps and/or middleware and/or resources
- you need speed

who?
- heroku
- github
- taps
- integrity

sinatra in your gems
- a mini-github for offline repo browsing
- a local plug and play wiki
- memcached utilization graphs
- config reusable github hook

Example: NotTwitter

As classic Sinatra

set :username, Proc.new { fail "yo" }

get '/' do
  ...
end


Change
  require 'sinatra'

to
  require 'sinatra/base'



But I want to deploy to Passenger or Heroku! No problem.

./bin/install-not-twitter
Copy example config.ru to cwd
Copy .gems file

.ru is a standard Rack config file.

.gems is a Heroku configuration file that will handle any needed Ruby gem installations


git init && git add .
git commit
heroku create
git push heroku master
heroku


3 Awesome Features in Sinatra
pass - I cannot handle this request, try the next route
forward - sinatra as middleware... done my job, let the next app take over... pop in front of Rails Metal
use - Sinatra loves rack so much, we made sure not to hide it


Resources
http://sinatrarb.com
http://github.com/rack/rack
http://github.com/rack/rack-contrib
http://github.com/rtomayko/rack-cache

If you are doing everything with Rails, you are probably using too much for the job. Sinatra is simple, fast, and extensible. I am using it in two production applications right now, along with Rails. Sinatra handles parts of the application better than Rails does, so that is how I roll.

Especially with the ever increasing momentum behind Rack, Sinatra is a good bet for getting things done. Combined with Rails Metal, and you really have it all.

Sunday, April 05, 2009

LARubyConf 2009 - Danny Blitz - "Herding Tigers: Software Development and the Art of War"

I had no idea what I was about to experience at Los Angeles Ruby Conference 2009 (LARubyConf) when Danny Blitz took the podium as the next presenter. I had seen him hanging out with his distinctive pompadour, tattoos, and leather jacket. He is a big guy, and hard to miss. But he had been pretty quiet till then, which was about to change radically.

Herding Cats is a term commonly used when describing the management of software teams. But when Danny Blitz says cats, he means big cats aka tigers. So who is this guy? He has done TONS of stuff, from DOD to Dell, to the DARPA autonomous vehicle challenge. Very cool stuff.

Get Agility?

"A good plan violently executed now is better than a perfect plan executed next week" - Patton

Q. Why is software so difficult?
A. We don't want to face the truth

Q. Why don't we want to face the truth?
A. We're afraid

Q. What are we afraid of?
A. We're afraid we do not know anything about the end result

Q. How can we deal with this?
A. Admit the truth

A tiger team is a small, self-improving team.

What can you expect from a tiger team?
Their first project was scheduled to take 5 weeks - took 5 days

TIGERS ARE A TEAM!

QA is part of tiger team

Tigers show leadership

Tigers self-improve

Almost no meetings on a tiger team

QA and test automation are the tip of the spear

QA should be there from very beginning

Shared psychology and intelligence

Winning, boldness, excellence

Not afraid of the dark

Who is on team?
- all staff needed to deliver product
- leader, 4 devs, 1 automation engineer, 1 QA, product staff member
- in addition, architecture, system admins, any other support staff

why warfare?
- business is battle

There are two kinds of warfare: attrition and maneuver

Attrition warfare
- traditional, tactical
- clashing head-on

Maneuver warfare
- internet space
- rapid, modern, violent
- unexpected movements

Speed
- it is a competitive weapon
- undeniable advantage in business

Speed mitigates risk
- not a guarantee
- damage is contained by quickly compensating

Speed improves the team

Speed adds to job satisfaction

Speed allows agile to function properly
- max iteration length (usually 30 days)

Speed builds credibility
- shows a lot of work in short order

Cows and tigers
- cow is bigger, but who wins?

Disease: using agile terms to describe non-agile project

Corporate animal kingdom
- Tiger
cautious, calculating, looks to win

- Cow
herding
not known to be original
afraid of risk

- Bear
big, usually mellow
awesome battle skills
live and let live attitude

- Leopard
truly wild
will attack at any time

- Elephant
huge and tough
invincible
best to avoid battle

- Hyena
scavenger
steals food
evil

"Tiger teams are like Hell's Programmers" - Danny Blitz

Leadership
this is what makes or breaks
faith
love
hope
success belongs to the team
failure belongs to the leader
buck stops here
fearlessness
listener and learner
protector
outside influences
internally too
team members themselves

US marine management techniques
- manage by end state and intent
- reward failure
- demand to be questioned
- glorify the lower levels of organization

Politeness and professionalism
- that or poison

Agile is not a methodology, it is a mindset, it is inevitable

Danny says he is working on the book called "Herding Tigers". He has also started a blog at herdingtigers.com. All I can say is, he is a very dynamic and exciting speaker. Everyone was captivated, myself included. I'm still not sure how I took these notes.

Rock on, Danny!

LARubyConf 2009 - Jeremy Evans - "Sequel"

The next session at the 2009 Los Angeles Ruby Conference (LARubyConf) was Jeremy Evans presenting Sequel, which is a very powerful database toolkit for Ruby.

Ruby originally had adapters for each database. The problem was that each was very database specific. This was a problem due to both SQL differences, as well as API differences.

no behavior

1997 - ruby-postgres was first created.

2000 - DBI

2004 - Active Record
Although AR made things easier, it had strong opinions. These opinions did not always map perfectly to any particular database.

2007 - Sequel

Completely DB independent API.

An example is concatenating strings, which requires a completely different syntax in each flavor of SQL database.

Concise

Optional Behavior

Opinions?

"Ruby should be like clay in a child's hands" - Matz

Sequel advantages:
- Simple - as possible but no simpler
- Flexible - opinions, not dogma
- Powerful
- Lightweight - about 1/2 the memory usage of ActiveRecord
- Well maintained
- Easy to Contribute
- Easy to Understand
- More Fun

Show me the *** code!

require 'sequel'
DB = Sequel.sqlite('larun')
DB[:attendees].count
# => 1

DB[:attendees].first
# => {:name => '...', :address => '...'}

Transactions

DB.transaction do
  DB[:entry].insert(:account_id => 1)

  ...

end


SQL Loggers

DB.loggers << Logger.new($stdout)


Each DB has its own connection pool

DB[:table].all # or .each

.update(:column => 'value')

.delete

DB[:table] # this is a dataset... like a query cursor


Sequel has a functional API, where dataset methods return modified copies of the receiver.

.select - which columns
.filter
.order

This means you can easily chain function calls, like jQuery. Awesome!

Sequel represents its internal objects using SQL's own internal representations

Sequel has 'core' and 'model'

class Attendee < Sequel::Model
  many_to_one :company   # association names here are hypothetical
  one_to_many :talks
end


Hooks & Validations - got 'em

Sequel is built entirely out of plugins. That sounds interesting, but experience with DataMapper has shown me that too many plugins may not be a good thing. However, I do not have any direct experience with Sequel yet, so this may be a non-issue.

13 - # of database adapters that Sequel supports today

Database graphing

Everything was going so well up to this point. However, Jeremy then tried to do his demo on Windows, just to show that if you are one of the poor souls using Windows, it can still work for you. He failed to account for the quirkiness of conference display adapters, and what that can do to a machine. Oh well, no demo.

That said, I was really impressed by what I heard about Sequel. I think I will have to try it out on something, just to see how it does.

LARubyConf 2009 - Aaron Patterson - "Journey Thru A Pointy Forest, or You Suck At XML"

The second presentation of the day at Los Angeles Ruby Conference 2009 (LARubyConf) was Aaron Patterson, XML maniac. I had a run at hardcore XML/XSL a few years back, and it has been a while since I walked the razor's edge of the angle bracket. But Aaron is not just an aficionado, he is genuinely obsessed. Given that he is the maintainer of the Nokogiri and Mechanize gems, this makes me happy.

He covered four areas related to XML:
- XML Processing
- HTML Processing
- Data Extraction
- HTML Correction

The XML Processing section was as thorough a synopsis of XML as one could possibly fit into just a few minutes. He went over the four main XML processing styles:
- SAX
- Push
- Pull
- DOM

SAX parsers are fast and low on memory use. However, searching is hard, document handlers are verbose, and the programmer expense is high.

Push parsing works the same as SAX; the difference is that the programmer controls document IO. It has low memory use, is fast, and gives fine control over IO.

Pull parsers are handed XML and yield a node object. They work like cursors, moving through the document, so you only get one chance to process data without starting over from the beginning.

DOM Interface is what most programmers are familiar with. Given XML, they build an in-memory tree. They can then be easily searched using XPath. DOM parsers provide easy data extraction, are programmer friendly, but have high memory use, and you pay a serious speed penalty.

HTML processing is just like XML parsing, but is limited to the HTML DOM.

Data Extraction in XML can be done two different ways:
- CSS selectors
- XPath queries

XPath basics:

//foo <!-- start at absolute root -->
.//foo <!-- start at relative root -->
//foo[@bar] <!-- has bar attribute -->
//foo[@bar = 'baz'] <!-- has bar attribute with a value of 'baz' -->

Here is an example of a problem for parsers that accept either a CSS or an XPath selector as a parameter, like Hpricot:

p[b]

The problem is that it is both valid XPath AND valid CSS. Nokogiri has separate methods for searching by CSS or by XPath, so as to avoid this problem.

XML namespaces use URLs to remain globally unique, to avoid collisions between XML formats that are different but use the same node name. Here is an important point: XML namespaces are as important as tag names.

HTML correction is taking some invalid HTML and "fixing" it, for example by making sure that all tags are properly nested.

Aaron wrote a tool called tree_diff that compares XML trees, because "they are interesting". With this tool he was able to process many HTML files with multiple HTML correctors/parsers, then compare the results to see if they matched. In many cases, they did not!

Fonts are hard
Attributes are harder

29% of the time it works all of the time

Seems like after all that, I will never use anything but Nokogiri for XML parsing again!

LARubyConf 2009 - Dan Yoder - "Resource Oriented Architectures and Why It Matters"

Lead-off man Dan Yoder started off the day's proceedings at the Los Angeles Ruby Conference (LARubyConf) 2009 with a presentation on Ruby Waves called "Resource Oriented Architectures and Why It Matters". Despite not getting the same attention that some Ruby frameworks have, the Waves team has been tirelessly working on it. According to Dan, the foundation of Waves has gotten pretty solid. Waves adds a lot of support for things over and above just handling HTTP requests. So what is Waves, really? It is a layer on top of Rack for defining application frameworks.

The next step beyond MVC is to help developers write more REST-compliant apps.

But do the constraints in REST really buy anything? Actually, yes.

One nice thing about internet-based development is that the existing infrastructure is already there: proxies, load balancers, etc.

But why does it work? At the heart are the constraints.

The web is NOT MVC
- so why do we use it so often for web apps?
- piggybacking off of the web browser

Example of busting out of the browser - RSS feeds from blogs and podcasts

OAuth SMART Proxies

Video Search
- edge caching

Resource Oriented Architecture (ROA) is just distributed objects, loosely based on Roy Fielding's definition

ROA solves an old problem, that there have been many attempts at solving previously with CORBA, COM, etc. But this time will be different.

Learning From Past Mistakes
- be platform neutral
- be wire neutral (any protocol)
- define meta-object protocols
- good performance, use edge and client caching
- allow layered architectures

Waves and ROA

Rich DSL for HTTP Requests

on(:get, ['location'],
   :query => {:lat => /\d{4}/, :long => /\d{4}/},
   :accept => [:json, :xml])

But it is still Ruby!

The One File Waves App
- influenced by Sinatra
- not quite as clean as Sinatra

Roadmap

A Resource DSL Example

class Blog
include Waves::Resource::Server

resource :list, :expires => 3.days, ['blogs'] do
get { model.find_all }
end

...

schema :element, ['schema', 'blog', '2009-03'] do
attributes :title, String, :descriptions => String
link :entries, :list => Story
end

...
end


What's with that "schema" block? This example also defines the RDF schema for the resource to provide machine-discoverability. That is one very cool aspect of ROA that I personally have not seen addressed much within the Ruby community.

Waves is really coming along, and I am planning to explore it a bit in the coming weeks. For more info, check out http://rubywaves.com.