Installing Rails on OS X

I recently started teaching myself Ruby on Rails. So far I'm enjoying the experience and find that Rails is living up to the hype as a fast and easy-to-use framework.

One pain point was getting Rails installed and configured on my local development machine (a MacBook Pro). Although the process is relatively simple (when you know how), it is prone to errors that can result in a lot of frustration (especially when you just want to start coding).

Below is a short five-step process for installing and configuring Rails on OS X 10.10.

Step 01:

Install Xcode from the Mac AppStore. Xcode is an integrated development environment (IDE) containing a suite of software development tools developed by Apple. It's fairly big and therefore can take some time to install.

Step 02: 

Install Homebrew, which is an open source software package management system that simplifies the installation of software on OS X.

To install simply open "Terminal" and run the following command:

ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

You can check that your installation is configured correctly by running the following terminal command:

brew doctor

Step 03:

Install Ruby Version Manager (RVM) and Ruby. 

Run the following terminal command:

\curl -L https://get.rvm.io | bash -s stable

Once the RVM installation is complete, close and reopen terminal. Now run the following terminal command:

type rvm | head -n 1

To install Ruby, run the following terminal command:

rvm use ruby --install --default

Step 04:

Install Xcode Command Line Tools by running the following terminal command:

xcode-select --install

When prompted, click install.

Step 05:

Install Rails by running the following terminal command:

gem install rails --no-ri --no-rdoc

That's it! You should now be able to create a new Rails app and access it on localhost.
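If you want a quick sanity check before creating your first app, the following sketch prints the version of each tool installed in the steps above, or flags any that are missing:

```shell
#!/bin/sh
# Print the version of each tool installed above, or flag any that
# are missing so you know which step to revisit.
for tool in brew ruby gem rails; do
  if command -v "$tool" >/dev/null 2>&1; then
    printf '%s: ' "$tool"
    "$tool" --version
  else
    echo "$tool not found - revisit the step that installs it"
  fi
done
```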

Dreamforce 2014 - Introducing Lightning

I recently attended and spoke at Dreamforce 2014, Salesforce.com's (SFDC) premier conference.

I’ve tried a number of times to describe Dreamforce to people who haven’t been before, but to be honest, there are no words that fully cover the experience. The Dreamforce website states: 

“Dreamforce is four high-energy days of innovation, fun, and giving back. It’s your chance to learn from industry visionaries, product experts, and world leaders who can help you transform your business and your life.”

Unfortunately this doesn’t come close to describing the event, which this year saw more than 150,000 people attend from all over the world. 

My best effort would be:

“Dreamforce is the biggest conference / festival you will ever attend. The entire event feels like it was designed by Walt Disney, with the marketing polish of Apple! Between the 2,000 sessions, you’ll also find a music festival, all-night meet-ups and a $1 million hackathon. It’s a techie’s dream!”

With that said and to quote Morpheus, “Unfortunately, no one can be told what the Matrix (*Dreamforce*) is. You have to see it for yourself.”

Due to the scale of Dreamforce it would be impossible to cover all of the announcements in a single blog post. As a result I’ve decided to focus on what I believe to be the most important announcement.

Salesforce1 Lightning

Salesforce1 Lightning is not new, in fact SFDC have been working on it for years. We got our first official glimpse of Lightning last year at Dreamforce with the release of the Salesforce1 Mobile App. This application was built using Lightning, or as it was known then “Aura”.

What is Lightning?

Lightning is the next major release of the Salesforce1 platform (AKA Force.com) and, in my opinion, is the biggest change since SFDC first introduced the ability to create custom objects.

SFDC have released an overview video which highlights the Salesforce1 Platform with Lightning:

Why is Lightning so important?

It’s clear the future will be device agnostic, meaning users will access applications and services across a multitude of different endpoints. This could be anything from tablets and smartphones, to wearable devices like the Apple Watch, as well as other paradigms that haven’t been introduced yet.

The current Salesforce1 user interface is geared towards the browser and, I think everyone would agree, has a certain “90s look”. It also doesn’t consistently optimise for other screen sizes, requiring a complete user interface shift, as with the Salesforce1 Mobile App. This results in a confusing user experience that causes SFDC and developers pain, as they still have to think about desktop and mobile separately when developing complex applications.

With Lightning, SFDC have delivered a new mobile-optimised, modular user interface, which delivers a consistent experience across all devices and enables rapid application development through re-use.

How does Lightning work?

The easiest way to explain Lightning is to focus on two parts, the "Lightning Framework" and "Lightning Components". Essentially, Lightning applications are built using the Lightning Framework and are composed of Lightning Components.

Lightning Framework:

I’ve heard some people describe Lightning as “just another JavaScript framework”. However, it’s really more than that. 

The Lightning framework supports partitioned multi-tier component development that bridges the client and server. It uses JavaScript on the client side and Apex on the server side. This means that developers gain access to a stateful client and a stateless server architecture. 

For example, JavaScript is leveraged on the client side to manage UI component metadata and application data. The framework then uses JSON to exchange data between the client and the server.

This architecture helps drive great performance by intelligently utilising the device (client), server and network, enabling developers to focus on the logic and interactions of their app.

Lightning Components:

Every Lightning application is made up of Lightning Components. Each component is a self-contained, re-usable unit that can range from a simple line of text to a fully functioning capability. Components interact with their environment by listening to or publishing events.

SFDC have already created a number of prebuilt Lightning Components (e.g. Chatter feed, search bar, charts, etc.) which can be used as part of app development. You can also expect partners to build Lightning Components and make them available on the AppExchange.

Developers can use or expand the prebuilt components, as well as build their own custom components. Any component can then be re-used across different applications on the Salesforce1 platform.

The goal is for developers to build components instead of apps, enabling speed to value through re-use, as well as guaranteeing each component will be fast, secure and device agnostic, thanks to the Lightning Framework.

What about Declarative Development?

One of the great aspects of the Salesforce1 platform is the ability to develop declaratively (clicks, not code). This opens up app development to non-developers (e.g. business users), which SFDC call “Citizen Developers”.

As part of the move to Lightning, SFDC have created the Lightning App Builder. Put simply, this is a GUI-driven process to “compose applications” from Lightning Components.

The Lightning App Builder will enable non-developers to build even better applications, quicker, through a drag and drop interface. For example, if you want a Chatter feed in your app, simply drag one in. Everything else (e.g. performance, scale, security) is taken care of by the Salesforce1 platform and, thanks to the Lightning Framework, the components will optimise perfectly for any device.

The video below shows the Lightning App Builder in action:


I believe Lightning is a game changer for the Salesforce1 platform. I can see a huge opportunity for developers as they shift from creating standalone apps to re-usable Lightning Components.

This is the potential I have always seen in the Salesforce1 platform, where app development is no longer a time consuming, costly process, but instead fast and efficient. For the first time, this is achievable through a building block approach, that doesn’t sacrifice quality, performance or security.

I fully expect to see a lot of energy around Lightning over the next twelve months as SFDC continue to make more of the capabilities available to the community. There will also be a judgment day for the current browser user interface, when SFDC will enable Lightning across the entire platform (my guess is Dreamforce 2015).

Overall I think Lightning will change the way apps are built and I’m excited to see what developers (including citizen developers) do with the new capabilities!

Developing on Force.com

The Force.com PaaS provides an enormous amount of functionality and flexibility, all of which is driven by the underlying metadata architecture.

There are a number of different ways to develop applications on Force.com, ranging from declarative development (clicks, not code) to Apex, an object-oriented programming language similar to Java.

Depending on your use case you may only need declarative development; however, for most mid-sized builds the 80/20 rule can be applied (80% clicks / 20% code).

Regardless of your development approach, there are a number of good developer practices and tools that will help set you up for success. This is especially important if you are developing for a shared org (see "Force.com Org Strategy").

This article will outline some of the good practices and tools that I recommend:

Integrated Development Environment (IDE)

An IDE is a software application that provides facilities to programmers for software development. This includes a source code editor, build automation tools, debugger and (if you’re lucky) code completion features.

With Force.com you have two IDE options:

  1. Eclipse, which can almost be considered the industry standard. It's cross-platform (Windows, Mac, Linux), has a rich community and supports thousands of plugins for many programming languages. Salesforce have an Eclipse plugin specifically for Force.com development, which can be downloaded for free. They also have a comprehensive reference library for beginners and experienced developers.
  2. MavensMate (my choice) is an open-source IDE developed by Mavens. It is quickly becoming the community standard for Force.com development due to its integration with the popular text editor Sublime Text. Salesforce themselves also point people towards MavensMate as an alternative to Eclipse.

If you are developing a mid-complexity application (Apex, Visualforce, etc.) I would certainly recommend the use of an IDE.

Source Code Management (SCM)

A common misunderstanding when developing on Force.com is that you don’t need source code management. The reason I often hear is: “I have no code, it’s all declarative”.

The important thing to remember is that Force.com is a metadata-driven architecture: metadata describes both the data structures in your environment and the declarative functionality implemented on the platform (e.g. your application).

As a result, it’s still important to use source code management for version control, even if it’s just tracking the metadata changes.

Personally I always recommend Git, which is a distributed source code management system with an emphasis on speed and data integrity. There are plenty of Git services available, however the most popular are GitHub and BitBucket. It’s here where we store all of the source code (e.g. metadata) for all of the projects we are developing on Force.com (just remember to make your repositories private).
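As a minimal sketch of what this looks like in practice, the commands below put a retrieved metadata file under Git version control. The directory name and the custom object file are hypothetical placeholders, not output from any real retrieval tool:

```shell
#!/bin/sh
# Sketch: track Force.com metadata in Git. The directory and the
# Invoice__c.object file below are hypothetical placeholders.
set -e
mkdir -p my-app-metadata/objects
printf '<CustomObject/>\n' > my-app-metadata/objects/Invoice__c.object
git init my-app-metadata
git -C my-app-metadata add .
git -C my-app-metadata -c user.name=dev -c user.email=dev@example.com \
  commit -m "Initial metadata snapshot"
```

From this point, every declarative change you retrieve from the org becomes a reviewable commit.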

Continuous Integration (CI)

Continuous integration is the practice of merging all developer code with a shared mainline on a regular cadence. It enables automated testing and reporting on isolated changes in a larger code base, allowing developers to rapidly find and solve defects.

As a result, continuous integration facilitates the process of agile development, where you are constantly testing your code, ensuring that it doesn’t break the build (small and often).

The industry standard for continuous integration is Jenkins, which is an open-source tool. You can host your own Jenkins instance, but I would recommend a cloud-based service such as CloudBees.

Continuous integration will help facilitate the testing process when moving code between environments (DEV, TEST, PRD, etc.)
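At its core, the "don't break the build" gate is just a script that runs your tests and fails the pipeline on any error. A rough sketch, where `run_tests` is a hypothetical stand-in for whatever executes your real test suite:

```shell
#!/bin/sh
# Sketch of a CI gate: run the test suite and stop the pipeline on failure.
# run_tests is a hypothetical stand-in for your real test runner.
run_tests() {
  true  # a real job would execute your unit tests here
}

if run_tests; then
  echo "BUILD OK - safe to merge"
else
  echo "BUILD BROKEN - fix before merging" >&2
  exit 1
fi
```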

Development Environments

Every Force.com production environment comes with a suite of development sandboxes. These sandboxes can be created ad hoc and are a direct replica of production (however, they do not include any data). Test data will need to be loaded after sandbox creation, either manually or via an integration (e.g. MuleSoft).

When provisioning sandboxes to development teams I recommend the following approach, which includes three sandboxes:

DEV = Main development environment.
CI = Continuous integration merge / build test environment.
TEST = Formal user testing environment.

A traditional development pattern would be...

  1. All development will occur in the DEV environment, leveraging an IDE (e.g. MavensMate) and source code management (e.g. GitHub).
  2. The project teams will then build their code (multiple times per day) into the CI environment leveraging continuous integration (CloudBees). This will confirm that their development does not break the build.
  3. Finally, any code positioned for a release to the GSO will be moved into the TEST environment (leveraging CloudBees), where formal testing (including UAT) can occur.

This approach ensures the entire development process can be owned and managed by the development team, offering complete autonomy.


In summary, Force.com is an amazingly flexible development PaaS, which extends even further if you include Heroku (I’ll save that for another time).

Hopefully this information is useful and as always, please don’t hesitate to comment below if you have any questions.

Docker - Containerisation is the new Virtualisation

I'm a huge advocate of Platform as a Service (PaaS), specifically Heroku.

Heroku enables developers to forget about the infrastructure and middleware, allowing them to focus on their application (simply push code and let the platform do the rest).

The "secret sauce" of Heroku is the understanding that web apps, databases and worker jobs are just Unix processes, and Unix doesn't care about the stack. It's this philosophy that enables Heroku to be cross-language, by focusing on Unix processes and producing environments (via Buildpacks) that can run any server process.

This is where containers come in! Heroku uses lightweight containers called Dynos, which run a single user-specified command. With containers you can very quickly and efficiently run thousands of services on a single virtual machine, each thinking they have their own system.
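As a rough illustration of "apps are just Unix processes": a Heroku Procfile simply names the process types to run in Dynos. This example assumes a hypothetical Rails app with a background job worker:

```
web: bundle exec rails server -p $PORT
worker: bundle exec rake jobs:work
```

Heroku scales each process type independently by running more or fewer containers.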

This is all great and it's a core part of why I love Heroku. However, it's what happens when you expand on this concept that things become really interesting...

Introducing Docker

Docker is an open-source project that makes it easy to create lightweight, self-sufficient containers from any application, which will then run anywhere.

Let's break that down:

Open Source - Although Docker was created by a commercial company (dotCloud), it's open-source and has a thriving community.

Lightweight - Containers are insanely fast, providing bare-metal access to the hardware. No need to worry about a hypervisor layer.

Self-sufficient - Each container comprises just the application and its dependencies. It runs as an isolated process in userspace on the host operating system, sharing the kernel with other containers.

Application - Containers package applications, not machines (making it application centric). Unlike traditional virtual machines a Docker container does not include a separate operating system.

Run Anywhere - Run on any machine, with guaranteed consistency, for example: local (OS X, Linux, Windows), Data Centre (Red Hat, etc.) and Cloud Infrastructure (AWS EC2, Rackspace, etc.)

The primary difference between a traditional Virtual Machine stack and a Docker stack, is that the Docker Engine container includes just the application and its dependencies. 

Virtual Machine Stack VS. Docker Stack

Why Docker?

With Docker, developers can build any application in any language using any toolchain. Just like shipping containers, “Dockerised” apps are completely portable and can be loaded anywhere. This provides developers complete flexibility and consistency, when developing applications.

Docker also has an impressive community offering, with over 13,000 images available on Docker Hub. This enables rapid application development, through the use of pre-built capabilities.

However, Docker is not just great for developers; system admins can use Docker to provide standardised environments for their development, QA and production teams, removing the challenge of ensuring consistency across different environments.

Get Started

I plan to post a lot more regarding Docker, but the easiest way to learn is to experience it for yourself. I suggest you head over to the official Docker installation guide and look up the instructions for your system. You can then grab an image from Docker Hub (for example Ghost, WordPress, PHP, etc.) and start playing.
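To make the "application, not machine" idea concrete, here is a deliberately minimal, hypothetical Dockerfile. It layers just the app and its dependencies on top of a base image; there is no separate guest operating system:

```dockerfile
# Hypothetical minimal Dockerfile: the image packages just the app
# and its dependencies - no separate guest operating system.
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y python
COPY app.py /app/app.py
CMD ["python", "/app/app.py"]
```

Assuming an `app.py` alongside the Dockerfile, `docker build -t myapp .` followed by `docker run myapp` would build and run it on any machine with Docker installed.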

I've already got a number of web application containers up and running across OS X and Ubuntu, and I have set up my own portable "Heroku-like" PaaS using Dokku, all powered by Docker!

Force.com Org Strategy

As highlighted in my previous article (Force.com Enterprise Application Platform), I have recently spent a lot of time positioning Force.com as an enterprise application platform.

This article focuses on one of the first design decisions, the org strategy!

What is a Force.com Org?

A fundamental part of the Force.com platform is an “organisation” or “org”.

At a high level, a Force.com org is a logical instance of data and metadata for a set of users. An org is bound by both capacity limits (number of users and storage) and execution computing resources (query sizes and API limits). These limits will depend on your agreement with Salesforce and/or your license type.

Every user working on the platform will do so inside an org.

Enterprise Org Strategy

In a perfect world every company would have just one org for their organisation, however this is not always advisable. There are certain scenarios where a “multi-org” strategy will be required to meet the business need.

One thing that is clear is that you should plan your org strategy from day one! If you don’t, it’s possible for your company to unintentionally expand across a number of orgs, which could significantly limit your future deployments, as well as dramatically increase the support complexity.

Single-Org Strategy

Single-Org Advantages:

+ Improved user experience (one login)
+ Simplified integration (data access and movement)
+ Reuse of objects, data and capabilities
+ Simplified collaboration (no cross-org Chatter)
+ Encourages consistency (developer standards and reviews)
+ Simplified licensing model
+ Cost benefits (purchase integrations and add-ons only once)

Single-Org Disadvantages:

- Complex security (one org = many apps / many business units)
- Complex release / code management (e.g. code merging)
- Increased pressure on limits (e.g. governor limits)
- Large data volumes (complicating archiving and backup)
- Requires “good citizen developers” to work in a shared environment

A single-org strategy places an emphasis on consistent global business processes and company-wide collaboration.

The key advantage being reuse of objects, data and capabilities, enabling faster development and reducing the need to replicate functionality across multiple orgs.

The key disadvantage is the complexity of managing a shared environment where multiple business units and developers will be deploying applications (often simultaneously). This creates a complex security model and forces the need for a dedicated DevOps team to manage code movement, merging, etc.

Multi-Org Strategy

Multi-Org Advantages

+ Simplified security (one org = one app)
+ Simplified release / code management
+ Reduced pressure on limits (e.g. governor limits)
+ Smaller data volumes (simplifying archiving and backup)
+ Provides business unit autonomy via a dedicated org

Multi-Org Disadvantages

- Poor user experience (multi-logins, can be mitigated via SSO)
- Complex integration (data access and movement)
- Minimal reuse of objects, data and capabilities (e.g. AppExchange Packages)
- Complex collaboration (cross-org Chatter)
- Risk of design inconsistencies across multiple orgs
- Complicates license model
- Anticipated higher costs

A multi-org strategy is acceptable when business units or regions want the autonomy of direct control, with limited need for company-wide collaboration or data sharing.

The key advantage to a multi-org strategy is the autonomy provided for specific business units or regions. This enables them to push enhancements and/or bug fixes with minimal impact on other applications.

The key disadvantage is the loss of efficiency: any capability you deploy in one org will need to be individually deployed and managed in every other org (often resulting in additional cost). For example, if you deploy an AppExchange package to manage eSignatures, you would need to deploy the same package across other orgs if other applications needed that capability.

It should also be noted that cross-org Chatter and data access can be challenging, generally forcing you to purchase and deploy third party services such as Make Positive Passport (cross-org Chatter) and/or MuleSoft (integration platform).


I would always recommend starting with a single-org strategy, only expanding to multi-org if you have a very specific business need. However, if you are negotiating a contract with Salesforce, it's worth taking into consideration the potential for multi-org, even if you only plan to use one org. This is to ensure you have the flexibility to use your existing user licenses across other orgs in the event you need to expand (reducing the need for future procurement conversations).

Starting with a single-org strategy, while retaining the commercial flexibility to expand, should provide the best of both worlds. It should also be noted that with every release, Salesforce continue to expand the capabilities of an org; therefore some of the disadvantages highlighted above (e.g. governor limits) will likely become less of an issue.

Hopefully this article has helped highlight the advantages and disadvantages of different org strategies. As always, the business need should drive your decision, but at least now you can proceed with your eyes open!